Get the Standard

The TRUST-AI Open™ standard is the Responsible AI Standard that assesses both organisational behaviours and system design. It’s open, actionable, and ready to implement.

Designed for real-world AI teams, TRUST-AI Open™ includes five core domains: Alignment, Data, Implementation, Impact, and Adaptation. It bridges the gap between ethical intent and operational practice, helping teams move from principles to measurable progress.

The TRUST-AI Open™ Standard will be released publicly in July 2025. Sign up to be notified when it becomes available for purchase.

What makes TRUST-AI Open™ different?

  • Dual-layered: Addresses both organisational culture and AI system design

  • Behavioural focus: Goes beyond compliance to assess ethical maturity

  • Built for teams: Includes readiness prompts, narrative scoring and facilitation cues

  • Verification-ready: Forms the foundation for our tiered certification model
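To make the idea of narrative scoring and tiered certification concrete, here is a purely illustrative sketch. None of the names, maturity bands, or tier thresholds below come from the published standard (which is not yet public); they are assumptions chosen to show how a team might record a score per element and map the weakest one to a certification tier:

```python
from dataclasses import dataclass

# Hypothetical sketch only: the five element names come from this page,
# but the 0-4 maturity scale and the tier thresholds are illustrative
# assumptions, not the TRUST-AI Open(TM) schema.

DOMAINS = [
    "Purpose Alignment",
    "Fairness & Inclusion",
    "Transparency & Accountability",
    "Safety & Security",
    "Human Agency & Oversight",
]

# (minimum maturity, tier name) pairs, lowest threshold first.
TIERS = [(0, "Not ready"), (2, "Foundational"), (3, "Managed"), (4, "Verified")]

@dataclass
class DomainScore:
    domain: str
    maturity: int    # behavioural maturity band, 0-4 (assumed scale)
    narrative: str   # free-text evidence, in the spirit of "narrative scoring"

def certification_tier(scores: list[DomainScore]) -> str:
    """Map the weakest domain score to a certification tier (illustrative)."""
    weakest = min(s.maturity for s in scores)
    tier = "Not ready"
    for threshold, name in TIERS:
        if weakest >= threshold:
            tier = name
    return tier

scores = [DomainScore(d, 3, "evidence notes here") for d in DOMAINS]
print(certification_tier(scores))  # weakest band is 3 -> "Managed"
```

Keying the tier to the weakest element (rather than an average) reflects the page's emphasis that responsibility fails at its weakest point, but that design choice is ours, not the standard's.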

The Five Key Elements of TRUST-AI Open™

1. Purpose Alignment

Why are we building this—and for whom?

  • Clear articulation of intent and value

  • Alignment with human needs, rights, and context

  • Avoidance of unintended or unethical outcomes

“Without purpose, AI becomes an accelerant of whatever system it’s in, good or bad.”

2. Fairness & Inclusion

Who benefits—and who bears the cost?

  • Bias mitigation in data, design, and deployment

  • Inclusive access across groups and geographies

  • Avoidance of harm to vulnerable populations

“If it only works for some, it doesn’t work responsibly.”

3. Transparency & Accountability

Can we see it, question it, and improve it?

  • Explainability of models and decisions

  • Traceability of data and system logic

  • Human accountability for automated outcomes

“Black box = red flag. If no one is answerable, no one is responsible.”

4. Safety & Security

Does it protect people from harm?

  • Robustness to misuse, manipulation, or failure

  • Data integrity and cyber resilience

  • Protocols for escalation, override, and redress

“Trust dies the moment something unsafe slips through because the model said so.”

5. Human Agency & Oversight

Do humans stay in the loop—and in charge?

  • Oversight that empowers people, not bypasses them

  • Feedback loops, contestability, and continuous improvement

  • Cultural readiness and capability to work with AI, not under it

“Responsible AI isn’t about rules—it’s about relationships.”

TRUST-AI Open™ Comparison

While frameworks like NIST, OECD, and UNESCO define essential AI principles, the TRUST-AI Open™ Standard goes further—operationalising them with practical tools, behavioural scoring, and tiered certification built for real teams.

| Pillar / Focus Area | NIST AI RMF | OECD Principles | UNESCO Recommendation | TRUST-AI Open™ |
| --- | --- | --- | --- | --- |
| Implementation Readiness | Abstract controls and mapping tools | Policy-focused principles | Governmental guidance and values | Includes elements from all other standards, plus ready-to-use tools, prompts, and certification pathways |
| Purpose Alignment | Context mapping, governance roles | Well-being, inclusive growth | Accountability, awareness | Includes elements from all other standards, plus operationalises purpose in daily work and prevents drift |
| Fairness & Inclusion | Bias checks, fairness indicators | Non-discrimination, inclusivity | Collaboration, equity | Includes elements from all other standards, plus participatory design and population-specific bias auditing |
| Transparency & Accountability | Explainability, traceability | Transparency, accountability | Public understanding, redress | Includes elements from all other standards, plus explainability for users, logging, and safe internal challenge |
| Safety & Redress | Risk scanning, recovery planning | Security, override options | Oversight, resilience | Includes elements from all other standards, plus redress tools, human override, and lifecycle design |
| Human Agency & Oversight | Governance maturity models | Human empowerment, cooperation | Participation, governance feedback | Includes elements from all other standards, plus behavioural scoring and evolving governance built in |