
What is the Common AI Assessment?

Risk-minded organizations are in a tough spot. While AI has tremendous potential, it presents sizable security, privacy, legal, and ethical risks. Yet AI adoption is accelerating, with 77% of SaaS companies building or launching AI features in 2023. (source)

How do you know if rapidly evolving vendor AI systems are safe and trustworthy? For organizations with hundreds or thousands of vendors, it's hard to collect data fast enough to keep up.

About

The vast majority of vendor AI risk assessments start with the same questions.

The Common AI Risk Assessment consolidates the top questions that risk experts are asking about vendor AI systems. It reflects market feedback on transparency and disclosure requirements for trustworthy AI.

By making answers to these key questions readily available, risk leaders can quickly understand the risk posture of vendor AI systems and identify systems that warrant deeper due diligence.

A starting point - not a standard.

These questions are already being asked today. Customer-side Vendor Risk teams and vendor-side Customer Trust & Assurance teams are both fielding a significant increase in the volume of AI due diligence questions. That volume is expected to increase dramatically into 2025 and beyond.

The goal of the Common AI Risk Assessment is to make it easier for organizations to exchange this information. It will evolve with market expectations to help vendors and customers stay aligned as AI risks and regulation emerge.

How do the questions get answered?

Gen AI Trust Network populates answers to the Common AI Risk Assessment through a combination of public disclosure data and vendor self-attestations.

Gen AI Trust Network continuously scans public domains to answer as many questions as possible based on what vendors already share in binding disclosures. However, our research shows that fewer than 20% of vendors with AI disclose it in their Terms of Service.

Vendors can take a proactive stance on responsible AI disclosure through three simple steps:

  1. Completing a self-attestation
  2. Hosting this artifact in their Trust Portal
  3. Authorizing Gen AI Trust Network to access this artifact
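
For vendors wondering what such an artifact might look like, here is a minimal sketch of one possible machine-readable self-attestation, assuming a simple JSON representation hosted in a Trust Portal. The field names, values, and JSON format are illustrative assumptions only, not a schema defined by Gen AI Trust Network.

  # Hypothetical sketch only: field names and structure are illustrative,
  # not an official Gen AI Trust Network schema.
  import json

  self_attestation = {
      "vendor": "Example SaaS Co.",               # hypothetical vendor name
      "attestation_date": "2024-06-01",
      "ai_in_product": True,
      "ai_types": ["LLM", "logistic_regression"],
      "customer_data_used_for_training": False,
      "ai_subprocessors": ["Example Model Provider"],  # third-party model providers, if any
      "data_retention": "prompts and outputs retained 30 days",
      "user_notice_of_ai": True,                  # users are told they are interacting with AI
      "consent_controls": "account-level opt-out",
      "governance_docs_available": True,          # e.g. an AI Acceptable Use Policy
  }

  # Serializing the attestation lets it be hosted in a Trust Portal and,
  # with the vendor's authorization, read programmatically.
  print(json.dumps(self_attestation, indent=2))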

Vendor AI Transparency values indicate completeness of disclosure against the Assessment questions - not the risk of the underlying disclosure itself.

Answers provided through self-attestation are made available to customers at the vendor’s authorization.

Are you a vendor?

If you are a vendor and want to be proactive on responsible AI disclosure, please reach out! Whether or not you have implemented AI systems, your customers need this information for their external AI risk inventories and compliance programs.

Contact us

Common AI Assessment Questions

Vendors providing AI disclosures in contractual terms offer a higher degree of transparency and protection to customers. Claims made by vendors in marketing materials and blogs should be consistent with their contractual obligations.

Vendors may process customer data for a variety of AI use cases. Generative AI use cases such as large language models (LLMs) pose greater risk of disclosing training data and producing potentially sensitive outputs. Other machine learning models like linear and logistic regression pose less risk of sensitive data exposure. By specifying the types of AI used, vendors can help customers make better risk-based decisions.

Training data refers to any information used to develop AI capabilities. Vendors using customer data for this purpose may expose their customers to risks of unintended training and sensitive data leakage.

Vendors are increasingly engaging third-party foundation model providers such as OpenAI, Anthropic, and Microsoft as sub-processors for AI systems. To share customer data with these third parties, vendors must obtain permission from the data controller and establish a Data Processing Agreement.

Customer data such as prompts or outputs may be retained by vendors longer than necessary to deliver the immediate customer benefit. This data could also be shared with downstream AI sub-processors engaged by the vendor, each of which may have different data retention policies. The longer data is retained, the higher the likelihood it could be accessed for inappropriate or malicious purposes.

Article 52 of the EU AI Act requires that providers of AI systems inform users they are interacting with AI. Providing users with notice helps them make informed decisions about engaging with the system.

If vendors limit options to manage consent to AI features, customers have less control over their exposure to the risks of the AI system. Increasingly, vendors are offering individual- or account-level controls to manage access to AI features.

AI systems require diverse controls for effective governance and compliance. Vendors that are prepared to furnish documentation describing these controls can offer greater insight and assurance to customers seeking to understand how the vendor protects their data privacy and security.

An internal AI Acceptable Use Policy defines requirements and best practices for employees using AI tools, the use of which may pose risks to customer data. Vendors with a transparent AI AUP can help customers understand AI governance maturity, accepted AI practices, and risk tolerance for AI usage within the vendor organization.