
AltaScient Responsible AI Policy

Language AI and language generation have become ubiquitous tools for work across many fields. For this reason, AltaScient puts in place the following policy concerning the responsible use of AI in the context of work for AltaScient.


Permitted Use of Third-Party Generative AI Models


In work for AltaScient, use of third-party generative language AI models such as Large Language Models (LLMs) (e.g., ChatGPT, Gemini, DeepSeek) is permitted only when no internal or proprietary information, data, programming code, text, or intellectual property is entered into them. Examples of permitted responsible use of third-party LLMs may include summarizing published research articles, searching documents, or obtaining topic suggestions, provided none of the above materials are involved.


Such information may be used only with models included in services that are part of a subscription obtained by AltaScient, or with the explicit permission of a designated authority. Currently, this applies to tools including Google Colab, Llama, GPT, Tulu, Claude, Gemini, and Microsoft Copilot. Additional tools may be added or restricted based on ongoing evaluations of security, licensing, and ethical considerations.


Restrictions on Generative AI for Content Creation


For text production in any materials created by AltaScient that are intended for publication or submission to a third party, the use of generative language models is expressly not permitted unless explicitly authorized. This applies to:

  • Research papers, proposals, reports, white papers, and official communications.

  • Marketing materials, business development content, and client deliverables.

If a generative AI tool is used in a project, employees, contractors, and collaborators must:

  • Clearly document and disclose its usage.

  • Cite the AI tool and detail its role in the project, including the extent of AI-generated content.

  • Verify the accuracy of all AI-generated content and references, as generative models may fabricate citations.



Governance, Accountability, and Risk Management

To align with global AI standards such as ISO/IEC 42001 and the NIST AI Risk Management Framework, AltaScient has established the following responsible AI governance principles:

  • Governance & Oversight - AI applications are continuously monitored for security and ethical integrity.

  • Data Management - Data management practices ensure that AI tools uphold privacy, fairness, and transparency.

  • Risk Management - All AI models are evaluated for potential risks such as bias, misinformation, and unintended consequences.

  • Lifecycle Management - AI tools undergo regular assessments for security and reliability throughout their lifecycle.

  • Procurement & Documentation - Procurement of AI models is based on strict criteria for ethics, security, and performance, with thorough documentation of their use.


Ensuring Accuracy, Attribution, and Ethical Use


The main aspects we aim to accomplish are documentation, correct attribution and responsible citation, and verification of the accuracy of content and of references/sources.


AltaScient follows best practices for responsible AI, ensuring that all AI-driven insights, recommendations, and automation enhance decision-making without misleading stakeholders. Our goal is to balance innovation with accountability, helping clients leverage AI safely and effectively while mitigating risks.


Copyright © 2025 AltaScient LLC. All rights reserved.
