Embracing responsible AI with chaos engineering and governance


As AI systems become more integrated into our daily lives, it has never been more critical to ensure they operate ethically. Without proper governance and informed practices, these systems carry significant risks, which makes responsible application development not just a necessity but a cornerstone of trustworthy AI that meets ethical standards and regulatory requirements. An organization that upholds this commitment not only mitigates potential harms but also builds the trust among users and stakeholders that underpins long-term success.

In this conversation, Mark, Chris, and Ana discuss chaos engineering in AI solutions. They explore the challenges of building AI-infused container apps and the non-deterministic nature of AI models, along with the need for transparency, governance, and responsible AI practices. The conversation touches on data reliability, grounding, security, continuous improvement, and the EU AI Act.

Mark and Ana share their learning strategies, including reading books on data and engaging in conversations with AI experts, while Chris describes his curiosity-driven approach to learning and his plan to take an Azure course on AI-infused apps. The conversation then covers Copilot Studio, the role of citizen developers, the potential risks of AI agents, and the future of on-prem cloud. The hosts discuss where Copilot Studio provides the biggest value in terms of AI extensibility and emphasize the importance of retaining low-code Power Platform professionals in organizations.

The discussion then turns to the potential risks of AI agents and the challenges of monitoring and controlling their behavior, with the hosts highlighting the need for governance and proper training of AI agents. Finally, they speculate on the future of on-prem cloud and the potential impact of AI on data ownership and compensation for creators.

What You Will Learn:

  • Chaos engineering is an important consideration when building AI-infused container apps.

  • The non-deterministic nature of AI models can make governance and management challenging.

  • Transparency and responsible AI practices are crucial, especially in light of the EU AI Act.

  • Data reliability, grounding, security, and continuous improvement are key factors in AI solutions.

  • Learning strategies for staying up-to-date with AI include reading books, engaging in conversations with experts, and taking relevant courses.

  • Investing in Copilot Studio can provide significant value in terms of AI extensibility.

  • Retaining low-code Power Platform professionals is crucial for organizations.

  • Monitoring and controlling the behavior of AI agents is a challenge that requires proper governance.

  • The future of on-prem cloud is uncertain, with some organizations potentially returning to it due to concerns about data security and control.

  • AI has the potential to impact data ownership and compensation for creators, and blockchain technology can play a role in tracking lineage and ensuring transparency.

Support the show @ https://www.buymeacoffee.com/nz365guy.

Enjoy,

Chris Huntingford 👉 LinkedIn | Twitter | YouTube

Ana Welch 👉 LinkedIn | Twitter

Mark Smith 👉 LinkedIn | Twitter | YouTube

Andrew Welch 👉 LinkedIn | Twitter | Threads

Will Dorrington 👉 LinkedIn | Twitter | YouTube
