Redefining Trust Through Responsible AI
Human-Centred
We champion responsible AI development, ensuring our systems align with human values and societal good.
Transparent
Our commitment to explainable AI provides unparalleled clarity into how and why AI systems make decisions.
Self-Reflective Systems
Pioneering AI models capable of introspection and learning from their own experiences to build deeper trust.
TRUST-AI Open™
A Responsible AI Standard for Everyone
TRUST-AI Open™ is a human-centred framework that helps organisations align their AI systems with cultural values, ethical principles, and behavioural integrity.
By combining rigorous assessment with real-world reflection, TRUST-AI Open™ enables teams to move beyond technical compliance toward trust that is earned, shared, and sustained.
Across five pillar domains covering the core elements of Responsible AI, we help organisations assess not just how AI performs, but how it respects, empowers, and aligns with the people it affects.
Because in a world increasingly shaped by machines, trust remains our most human asset.
The Five Key Elements of TRUST-AI Open™
1. Purpose Alignment
Why are we building this—and for whom?
Clear articulation of intent and value
Alignment with human needs, rights, and context
Avoidance of unintended or unethical outcomes
“Without purpose, AI becomes an accelerant of whatever system it's in, good or bad.”
2. Fairness & Inclusion
Who benefits—and who bears the cost?
Bias mitigation in data, design, and deployment
Inclusive access across groups and geographies
Avoidance of harm to vulnerable populations
“If it only works for some, it doesn’t work responsibly.”
3. Transparency & Accountability
Can we see it, question it, and improve it?
Explainability of models and decisions
Traceability of data and system logic
Human accountability for automated outcomes
“Black box = red flag. If no one is answerable, no one is responsible.”
4. Safety & Security
Does it protect people from harm?
Robustness to misuse, manipulation, or failure
Data integrity and cyber resilience
Protocols for escalation, override, and redress
“Trust dies the moment something unsafe slips through because the model said so.”
5. Human Agency & Oversight
Do humans stay in the loop—and in charge?
Oversight that empowers people, not bypasses them
Feedback loops, contestability, and continuous improvement
Cultural readiness and capability to work with AI, not under it
“Responsible AI isn’t about rules—it’s about relationships.”
June 2025
TRUST-AI Open™
15 June 2025: We will release a new, open standard for Responsible AI adoption and assessment, designed for enterprise-scale use and built to evolve. TRUST-AI Open™ will be the most comprehensive Responsible AI standard available to date.
🧩 Human-Centred Framework
📚 Organisation & System-Level Assessment
⚖️ Extends OECD, UNESCO, and NIST AI RMF models
🔐 Compatible with & extends ISO 42001 (AIMS)
TRUST-AI Verified™
30 June 2025: We will introduce the TRUST-AI Verified™ program. It will give organisations the most comprehensive toolset not only for assessing their Responsible AI maturity but also for charting a roadmap for improvement.
✅ Maturity-based Ratings with Roadmap
📐 Unique quantitative & qualitative assessment
📏 Scalable by enterprise size and sector
📝 Use TRUST-AI Ready™ to pre-assess readiness
SPRI™: A New Paradigm for AI Trust
Our groundbreaking research paper introduces the Socratic Prompt Response Instruction (SPRI™) framework, redefining transparency and accountability in autonomous systems.
Released 2 June 2025
Research
Why Did I Do That? An AI’s Introspection
A unique editorial piece detailing the unexpected self-interrogation of an AI system, offering unprecedented insight into an AI’s decision-making process.
Released 2 June 2025
Reflections
Contact us
Interested in working together? Fill in your details and we will be in touch shortly. We can’t wait to hear from you!