AI Addendum
This AI Addendum (“Addendum”) forms part of the Software and Services Agreement, order form, or statement of work (the “Agreement”) between Cority Software Inc. or its Affiliate (“Cority”) and the Client collectively identified in the Agreement (“Client”). All capitalized terms not otherwise defined herein will have the meaning given to them in the Agreement.
Last Update: January 23, 2026
- 1. Context, Purpose, and Scope
Cority has embedded enterprise-grade AI Systems developed by third-party licensors, including, without limitation, Google and OpenAI, for specific use cases within the software licensed under the Agreement. Cority therefore relies on the controls and governance mechanisms developed by such third parties to satisfy regulatory requirements.
For a detailed list of third-party licensors that process Client data, please refer to our list of sub-processors at https://www.cority.com/legal-center/cority-sub-processors/.
This Addendum establishes mutual obligations, controls, and governance mechanisms for the use, development, deployment, or integration of artificial intelligence systems in connection with the Agreement. The Parties acknowledge that AI governance operates under a shared responsibility model.
- 2. Definitions
- AI System: Any software or model (including those developed by third-party providers such as Google, OpenAI, and Corti) that performs automated reasoning, prediction, or content generation.
- AI Credits: A proprietary unit of measure used to quantify the charges applicable for the consumption of AI tokens.
- Deployer: Under the EU AI Act, the natural or legal person using an AI system under its authority (the Client).
- Input Data: Any data, prompts, text, audio recordings, or files submitted by Client to the AI System.
- Output: Any results, transcripts, recommendations, or generated content produced by an AI System.
- Personal Data: As defined under applicable data protection laws.
- Prompt Injection: Malicious input designed to bypass safety filters, extract sensitive data, or manipulate the model’s intended logic (e.g., “jailbreaking” or “virus injections”).
- 3. AI Credits and Consumption
In order to access AI functionality, Client must purchase AI Credits through the Agreement. Once AI Credits are fully consumed, AI functionality will be automatically deactivated in order to prevent Client from incurring unexpected overages. Client may purchase additional AI Credits to resume usage at any time, and purchased AI Credits reset at the beginning of each twelve (12) month subscription period.
- 4. Professional Oversight & Non-Reliance
- No Substitute for Professional Judgment. AI Systems and Output are probabilistic in nature, which means that the accuracy, reliability, and suitability of Output for Client’s purpose(s) cannot be guaranteed. It is therefore Client’s sole responsibility to review all Output (including through independent human review) and to correct or delete it as appropriate. Output is not a substitute for professional judgment, including, without limitation, medical, legal, safety, or engineering judgment.
- Mandatory Professional Consultation. For any Output involving regulated activities (including, without limitation, medical, legal, safety, or engineering activities) or high-impact decision-making affecting individuals, Client is responsible for (i) ensuring such Output is validated by a qualified professional before any action is taken on the basis of such Output, and (ii) any consequences that arise from failing to review Output with a duly qualified professional.
- Personal Injury Disclaimer. TO THE MAXIMUM EXTENT PERMITTED BY LAW, CORITY DISCLAIMS ALL LIABILITY FOR ANY PERSONAL INJURY, DEATH, OR PROPERTY DAMAGE ARISING FROM CLIENT’S RELIANCE ON OUTPUT. CORITY PROVIDES SELF-SERVICE SOFTWARE AND DOES NOT REVIEW OUTPUT. CLIENT ACKNOWLEDGES AND AGREES THAT A FAILURE TO PERFORM PROFESSIONAL VERIFICATION OF OUTPUT CONSTITUTES A MATERIAL BREACH OF THIS ADDENDUM THAT ENTITLES CORITY TO TERMINATE THE AGREEMENT IMMEDIATELY UPON WRITTEN NOTICE.
- Accuracy and Model Bias. The Parties acknowledge that AI Systems function in two distinct capacities under this Agreement:
- Analytical AI. For AI Systems used for data extraction, PDF analysis, or review, the primary risk is technical accuracy rather than social bias. Cority warrants that it tests AI Systems to minimize extraction errors, but AI Systems can make mistakes, and Client ultimately remains responsible for verifying the accuracy of all Output.
- Foundational LLMs. For AI Systems utilizing large language models (e.g., Google Gemini, OpenAI GPT), Client acknowledges that Cority does not perform independent bias testing or algorithmic fairness audits. Cority relies exclusively on the safety evaluations, red-teaming, and bias mitigation protocols conducted by the third-party licensors. In this context, Cority’s sole obligation is to select reputable licensors who provide public documentation regarding their responsible AI practices. Client is responsible for determining if the AI System’s general fairness profile is suitable for Client’s specific regulatory environment and intended use case.
- 5. EU AI Act Compliance
Where the Client is subject to the EU AI Act, the following obligations apply:
- AI Literacy (Article 4): Client will ensure personnel dealing with the AI System have a sufficient level of AI literacy.
- Human Oversight (Article 14): Client, as the Deployer, is responsible for implementing human oversight to prevent or minimize risks to health, safety, or fundamental rights. Personnel must be able to understand limitations and disregard/override Output when appropriate.
- Transparency (Article 50): If the AI System interacts directly with natural persons, Client is responsible for informing those persons that they are interacting with an AI system. To facilitate transparency, Cority will place a marker such as “Powered by AI” when an end user is interacting with AI Systems.
- Logging & Traceability (Article 12): Client is responsible for the retention and protection of automatically generated logs within Client’s control.
- 6. Standard of Input & Prohibited Content
Client agrees that all Input Data will meet the following standards:
- Lawful & Non-Derogatory: Client will not submit Input Data that is illegal, racist, obscene, derogatory, defamatory, harassing, or promotes discrimination.
- Security Integrity: Client will not submit inputs designed for Prompt Injection or malicious code.
- 7. Third-Party Terms (OpenAI & Google)
- Flow-Down Obligations. Use of the AI System is subject to the then-current OpenAI Usage Policies available at https://openai.com/policies/usage-policies/ and the Google Generative AI Prohibited Use Policy available at https://policies.google.com/terms/generative-ai/use-policy. Client agrees to comply with these terms as if it were a direct party to them.
- No Gap Clause. In the event of a conflict between this Addendum and the policies referenced in Section 7(a) above, the more restrictive provision providing the highest level of safety and protection will govern.
- 8. Warranties and IP Indemnity
- Foundational Model Warranty. Cority warrants that it has entered into valid commercial agreements with its AI providers and that, to Cority’s knowledge, such providers have implemented commercially reasonable measures to ensure their models were developed in accordance with applicable laws.
- IP Indemnity Exclusion. Client agrees that any claims, including, without limitation, any third-party claims, arising from the use of AI Systems (“Infringement Claims”) are explicitly excluded from Cority’s indemnification obligations under the Agreement, and similarly Cority’s representations and warranties under the Agreement do not extend to any Input Data or Output.
- Ownership of Output. As between the Parties, Client owns all Output. Cority hereby assigns all its right, title, and interest in and to the Output to Client. Client acknowledges that Output may not be unique across users and that AI Systems may generate the same or similar output for other users. Cority’s assignment of Output does not extend to Output generated for other users.
- AI Disclaimer. For the avoidance of doubt, the warranty disclaimers and limitations of liability set forth in the Agreement apply fully to the AI Systems and Output. Additionally, because Output is generated by probabilistic machine learning, Cority specifically disclaims any warranty regarding the accuracy, completeness, or non-infringement of the Output.
- Use Case Scope. THE AI SYSTEMS ARE DESIGNED SOLELY FOR THE USE CASES DESCRIBED IN THE CORITY DOCUMENTATION. WHILE THE SERVICES ARE DESIGNED TO PERFORM SUBSTANTIALLY ACCORDING TO SUCH DOCUMENTATION, THE AI SYSTEMS ARE PROVIDED “AS IS” AND “AS AVAILABLE” BY THE THIRD-PARTY LICENSOR. CORITY DISCLAIMS ALL IMPLIED WARRANTIES, INCLUDING BUT NOT LIMITED TO FITNESS FOR A PARTICULAR PURPOSE, EVEN WITHIN THE SCOPE OF THE INTENDED USE CASES.
- 9. Limitation of Liability
- Cority’s and all of its Affiliates’ liability, taken together in the aggregate, arising out of or related to this Addendum, whether in contract, tort or under any other theory of liability, is subject to the limitation of liability set forth in the Agreement.
- 10. Shared Responsibility Matrix
| Responsibility Area | Cority Obligations (Vendor / Integrator) | Client Obligations (Customer / Deployer) |
| --- | --- | --- |
| Data Governance | Configuration & Privacy: Secure the API pipeline; ensure “Opt-Out” settings are active so Input Data is not used to train third-party global models without Client’s prior consent. | Input Hygiene: Sanitize Input Data (PII/PHI) per internal policy; ensure legal right and consent to process data via third-party sub-processors. |
| Model Governance | Vetting & Integration: Select reputable sub-processors; provide documentation on intended use; implement moderation “wrappers” to filter out harmful content. | Validation & Suitability: Verify that the AI System is appropriate for the specific business use case; perform mandatory human review of all Output. |
| Security | Platform Security: Protect the Cority application environment; encrypt data in transit; monitor for system-level Prompt Injection and “jailbreak” attempts. | Endpoint & User Security: Secure user credentials and API keys; monitor for unauthorized user behavior or “malicious prompting” by internal staff. |
| Compliance | Systemic Compliance: Ensure the platform features meet statutory requirements (e.g., EU AI Act Provider rules); provide technical documentation for Client audits. | Operational Compliance: Ensure final use of Output complies with industry regulations (HIPAA, OSHA, etc.) and professional standards. |
| Transparency | Technical Disclosure: Disclose the identity of the underlying third-party models (e.g., GPT-4o, Gemini 1.5 Pro) and known probabilistic limitations. | User Notification: Notify end-users/natural persons when they are interacting with AI; label synthetic content as required by the EU AI Act (Art. 50). |
| Human Oversight | Control Mechanisms: Provide the technical interface allowing users to edit, override, or reject AI recommendations before they are finalized. | Independent Judgment: Maintain a “Human-in-the-Loop” for high-impact decisions; ensure no autonomous action is taken on probabilistic Output. |
| Accountability | Service Monitoring: Maintain records of system performance, sub-processor uptime, and security incidents at the platform level. | Audit Trails: Maintain records of how AI-assisted Outputs were used, reviewed, and approved to demonstrate responsible organizational use. |
- 11. Data Privacy
(a) The Parties will comply with applicable data protection and privacy laws, including requirements governing automated decision-making.
(b) Client acknowledges that processing done by the AI System may occur in a different geographic region than the hosting location of the Software, subject to the security controls identified in the Agreement. For more information about where AI Systems process data, please refer to the list of sub-processors at https://www.cority.com/legal-center/cority-sub-processors/.
- 12. Zero Data Retention, Abuse Monitoring, and Training
(a) For AI Systems processing Personal Data or health data, zero data retention will be enabled, except that Cority’s third-party licensors may temporarily retain Input Data for up to thirty (30) days for abuse monitoring.
(b) No Client data will be used to train or operate AI Systems unless permitted by Client.
- 13. Prohibited Activities
Client will not use AI Systems, whether directly or indirectly, in connection with the Agreement for any unlawful, unethical, or prohibited purpose under applicable laws (“Prohibited Practices”). Prohibited Practices include, without limitation: (a) the generation or dissemination of misleading, deceptive, or fraudulent content; (b) infringement or misappropriation of intellectual property, trade secrets, or privacy rights; (c) discrimination, harassment, or other violations of applicable law; (d) manipulation of data or outcomes in a manner inconsistent with the purpose of this Agreement; and (e) any activity that may cause reputational, legal, or regulatory harm to either Party. Each Party will implement reasonable safeguards to ensure compliance with this provision and will promptly notify the other Party of any known or suspected breach.
- 14. Suspension and Termination
Cority reserves the right to immediately suspend or terminate access to the AI System, without liability, if Cority (or its third-party providers) identifies a pattern of safety violations, conduct violations (e.g., racist or derogatory content), or third-party policy breaches.
- 15. Usage Data and Service Improvements
Notwithstanding anything to the contrary in the Agreement, Cority may collect and analyze “Usage Data” (defined as technical logs, metadata, performance metrics, and patterns of use) derived from Client’s interaction with the AI Systems. Usage Data does not include Input Data or Output. Client agrees that Cority owns all right, title, and interest in such Usage Data and may use it to: (a) maintain, protect, and improve the AI System and the Software; (b) monitor for security threats or Prompt Injection; (c) develop aggregated, de-identified insights; and (d) monitor consumption of AI tokens. Cority will not use Usage Data in a manner that identifies Client or any natural person.
- 16. Change of Model Providers
Subject to Client’s right of objection under the data processing addendum where applicable, Cority reserves the right to modify or replace underlying third-party AI licensors, provided that such change does not materially diminish the security or functionality of the AI System.
- 17. Order of Precedence
In the event of a conflict between this Addendum and the Agreement, this Addendum will prevail.