Copilot to become the new UI of AI
With organizations today using 200-300 applications, what will be the impact on users when a profusion of AI solutions is added to the pile? As vendor AI offerings continue to expand, imagine the confusion: employees required to access multiple agents with multiple UIs, scattered across their organization; an agent for HR, another for sales, another for service, and more for vendor applications, the likes of Workday, SAP, and Salesforce. A labyrinth of applications. Now imagine this simplified. Imagine all the isolated agents and their data integrated into a single UI, a single point of reference, bringing clarity and accessibility and enabling high adoption. As announced by Satya Nadella, Copilot is positioned to become the new UI for AI.
Bot vs human: which will reign in consumer engagement?
At the dawn of ‘agentic’ AI, that is to say, autonomous bots capable of mimicking humans and making independent decisions, what are the implications for our everyday lives? Perhaps an end to dreaded call-centre dispute resolutions, replaced instead by bots that handle negotiations flawlessly, armed with instant access to indisputable contracts and policies and outmatching human agents. In e-commerce, AI assistants could re-order groceries online, exploiting the best discounts, fastest delivery, and lowest shipping costs, totally disrupting traditional e-commerce loyalty. Future AI has the potential to make daily life remarkably efficient and to transform consumer engagement models in ways not yet fully realised.
The sceptical approach to security and AI
Staggeringly, if cybercrime were a country, it would have the world’s third-largest GDP. With attacks happening every second, it has never been more important to approach data security and AI with a zero-trust mindset: practising insider risk management, auto-classifying data with Purview, and “red teaming” AI outputs. This critical thinking should also apply to future advancements, as we predict a shift towards observability, whereby AI handles tasks and humans merely monitor them. And as AI begins to mimic personas and styles, the risk of deepfakes increases, going unnoticed by users unless questioned.
Embracing responsible AI with chaos engineering and governance
As AI systems become more integrated into our daily lives, it has never been more critical to ensure they operate ethically. Poorly governed AI carries significant risks, making responsible app development not just a necessity but a cornerstone of trustworthy AI systems that adhere to ethical standards and regulatory requirements. When an organization upholds this commitment, it not only mitigates potential harms but also fosters trust among users and stakeholders, establishing the foundations for long-term success.