Shaping the Future of Trustworthy AI
The AI Trust Forum is special. It brings together brilliant thinkers, innovators, policymakers, and practitioners to learn, build relationships, and work together in person on AI's engineering, business, legal, and ethical challenges at retreats held in cities around the world. That’s why we call it a “Forum”, not a “Conference”.
21st April 2026
Washington, DC
June 2026
New York
June 2026
Geneva
July 2026
Brasov
Coming 2026
Can your colleagues and customers trust AI? It's your job to show them how.
Executable outputs, not theory, with real influence in shaping global standards.
Connect, learn, and spend quality time with brilliant people from around the world.
Spend time exploring cities at the pulse of AI innovation and policymaking.
Return home with knowledge, resources, and a network of AI leaders to back you up.
Who should attend the AI Trust Forum?
Business Leaders
CEOs, COOs, CFOs, CPOs, VPs, and Directors seeking confidence and credibility in your AI journey. Develop strategic clarity and sharpen investment prioritisation.
Technology Leaders
CIOs, CTOs, CAIOs, CDOs, Enterprise Architects, heads of engineering, and those leading AI’s technological and cultural transformation of your organisation.
Governance Leaders
Chief Risk Officers, in-house counsel, ethics teams, and heads of responsible AI who mitigate risk, ensure compliance, and protect the organisation.
Policy Leaders
Regulators, policymakers and their staff, policy advisors, and leaders of non-governmental organisations shaping policy based on real-world insights from the Forum.
Legal Leaders
Lawyers focused on AI and technology regulation, ethics leaders, and compliance partners navigating AI’s choppy regulatory seas.
Unique, highly collaborative sessions
Fellows and attendees working together on engineering, business and adoption, legal and ethical challenges…
The industry imperative and opportunity
Maturity remains low worldwide. Despite rising awareness, most organizations across regions and sectors remain stuck in the early stages of responsible AI practice.
- 66% of people use AI regularly, and 83% believe the use of AI will result in a wide range of benefits.
- Yet trust remains a critical challenge: only 46% of people globally are willing to trust AI systems.
- Many rely on AI output without evaluating its accuracy (66%), and many are making mistakes in their work due to AI (56%).
- There is a public mandate for national and international AI regulation, with 70% believing regulation is needed.
Keynotes
Visionary sessions from global leaders framing the most pressing engineering, business, legal, and ethical issues in AI to spark strategic insight and set the agenda.
Town Halls
Expert-led open discussions where attendees interrogate pressing topics, exchange diverse perspectives, and brainstorm possible courses of action.
Open Working Sessions
Small group collaboration tackling specific challenges in AI alongside leaders in the space. Participants ideate, whiteboard, and contribute to publicly shareable work.
Symposium Sessions
Presentations showcasing original research, innovative practice, or notable case studies that attendees can apply in their work building Trustworthy AI back home.
Who you’ll work and learn with…
Mike Bugembe
Chief Digital Officer
International Org for Migration

Avishan Bodjnoud
Chief, Information Management
UN Political + Peacebuilding Affairs

Andrea Caporale
AI Strategy and Design Leader
Microsoft Elevate

Chris Huntingford
Chief AI Officer
Center for Trustworthy AI

Ollie Irwin
Head of T&S Global Engagement EMEA
Google

Alex Malureanu
Co-Founder and CMO
Ascendia

Dona Sarkar
Director, Enterprise AI Advocacy
Microsoft

Jason Slater
Chief, AI Innovation + Digital Economy
UN Industrial Development Org

Ioana Tanase
Director, Accessibility + Inclusive AI
Microsoft

Ana Welch
Chief Technology Officer
Center for Trustworthy AI

Andrew Welch
Executive Director
Center for Trustworthy AI