Technology considerations when scaling AI across an organization

Pubs, smooth orange juice, a preference for “replace” over “rebuild”, and… honestly, who is “Banksy”? With those icebreaker questions out of the way, Michael and I turned our attention to the technology side of scaling AI in the final chapter of our five-part podcast miniseries on AI strategy. You can find part four, “The Human Side of Scaling AI,” here.

Organizations must tune their technical capabilities to support the scaling of AI. Some of this is directly relevant to AI itself: for example, employing Machine Learning Operations (MLOps) to build, deploy, monitor, and maintain production models, and incorporating rapid advances in the technology into the enterprise machine learning platform where your workloads will land and run as part of daily business.
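To make that loop concrete, here is a minimal sketch of the build–deploy–monitor cycle an MLOps practice automates. The dataset, model choice, and drift threshold are illustrative assumptions for the sketch, not a prescription for any particular platform.

```python
# Minimal MLOps loop sketch: build, deploy (persist), and monitor a model.
# Dataset, model, and threshold are illustrative assumptions only.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Build: train a candidate model on versioned training data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Deploy: persist the trained artifact for the serving environment.
joblib.dump(model, "model-v1.joblib")

# Monitor: score held-out data and flag when quality drops below a threshold,
# which would trigger retraining in a fuller MLOps pipeline.
live_accuracy = accuracy_score(y_test, model.predict(X_test))
if live_accuracy < 0.90:  # threshold is an assumption for this sketch
    print(f"Accuracy {live_accuracy:.2f} below threshold: schedule retraining")
else:
    print(f"Model healthy at accuracy {live_accuracy:.2f}")
```

In a production setting, each of these three steps would typically be handled by the pipeline, registry, and monitoring services of whatever ML platform the organization has standardized on.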

Think about the way most IT has evolved over the course of several decades. For much of this history, organizations acquired software by ginning up a specific need or “use case,” then assembling a basket of requirements for that use case that was either handed to a team for development or turned into a procurement. Infrastructure, whether on-premises or cloud, was then deployed to accommodate the specific, forthcoming solution.

Ecosystem-oriented architecture (EOA) inverts this approach. Ecosystem architects seek first to build a cloud ecosystem, that is, a collection of interconnected technical services that are flexible or “composable,” reusable, and highly scalable. That ecosystem then expands, contracts, and adapts over time to accommodate the workloads deployed within it. EOA is ideal for scaling AI because it promotes data consolidation as a first principle, avoiding the de-consolidation that point solutions tend to encourage through data services tied to a single application and point-to-point integrations with other point solutions.
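As an illustration of that contrast, the sketch below shows the ecosystem pattern in miniature: several AI workloads compose one shared, reusable data service instead of each application keeping its own copy of the data and integrating point-to-point. All names here (SharedCustomerService, the two workloads) are hypothetical and exist only to show the shape of the idea.

```python
# Ecosystem pattern in miniature: workloads compose one consolidated,
# reusable data service rather than owning siloed copies of the data.
# All class and function names here are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class Customer:
    customer_id: str
    name: str
    segment: str


class SharedCustomerService:
    """Single consolidated source of customer data, reused by every workload."""

    def __init__(self) -> None:
        self._customers = {
            "C-001": Customer("C-001", "Contoso Ltd", "enterprise"),
        }

    def get(self, customer_id: str) -> Customer:
        return self._customers[customer_id]


# Two different workloads (a forecasting model and a support copilot)
# compose the same service instead of building point-to-point integrations.
def churn_forecast(service: SharedCustomerService, customer_id: str) -> str:
    customer = service.get(customer_id)
    return f"Forecasting churn for {customer.name} ({customer.segment})"


def support_copilot(service: SharedCustomerService, customer_id: str) -> str:
    customer = service.get(customer_id)
    return f"Drafting reply for {customer.name}"


if __name__ == "__main__":
    shared = SharedCustomerService()
    print(churn_forecast(shared, "C-001"))
    print(support_copilot(shared, "C-001"))
```

The point of the design is that new AI workloads plug into the consolidated data layer the ecosystem already provides, rather than each one bringing its own datastore and another web of integrations.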

Join me, Andrew Welch, and HSO’s “Dynamics Matters” podcast host Michael Lonnon for part five of our AI strategy miniseries. In this episode we return to the centrality of data, talking about Microsoft Fabric and the future roadmap for Microsoft’s data + AI technologies, the importance of ecosystem-oriented architecture (EOA) to scaling artificial intelligence, the need for organizations to change how they budget technology initiatives, and how to organize an IT team for the age of AI as the IT Tower of Babel rears its ugly head once again.

Previous: Diving in on “Crafting your Future-Ready Enterprise AI Strategy”

Next: Whitepaper: “Crafting Your Future-Ready Enterprise AI Strategy”