RAG and the fundamentals of AI acting on enterprise data

What follows has been excerpted and adapted from my recent whitepaper, Crafting your Future-Ready Enterprise AI Strategy, published with Microsoft in January 2024. There’s much more in the paper, so give it a download, pour yourself a drink, and enjoy the read.

My Ecosystems podcast co-host Mark Smith and I recently chatted about how CIOs and other leaders and decision makers can ground themselves in enough knowledge of generative AI to speak intelligently and understand what they must in order to lead their organizations into the era of artificial intelligence. My first response, of course, is to read the whitepaper, because that’s why I wrote it. But I have lately found myself explaining, again and again, how AI acts on the vast troves of data found in most organizations.

Let’s proceed, then, to establish a basic understanding of how AI uses and acts on that enterprise data. We will define 'enterprise data' as data that is proprietary to a specific organization, kept and (I certainly hope!) secured inside the boundary of the organization’s data estate.

I’ve adapted and updated the model shown here from Pablo Castro’s great piece, Revolutionize your Enterprise Data with ChatGPT: Next-gen Apps w/ Azure OpenAI and Cognitive Search (March 2023), on Microsoft’s Azure AI services Blog.

The most basic concept behind institutional AI: Enterprise data (raw knowledge) is stored such that it can be (a) indexed and (b) accessed by AI. Capabilities such as Azure AI services act on that knowledge to produce a response.

In the top-right of the diagram we’re looking at various data sources sitting in a modern data platform (Azure SQL, OneLake, and Blob Storage are shown top to bottom for representative purposes). I’ll point out here that Blob is a highly efficient way to store unstructured data—that is, files, images, videos, documents, and the like. In this simple scenario we’ll say that unstructured data is drawn from Blob.

These data sources are indexed by Azure AI Search (formerly called “Azure Cognitive Search”), which also provides an enterprise-wide single search capability. Moving to the far left we see an application user experience (UX)—e.g., a mobile, tablet, or web app—that provides an end user the ability to interact with our workload.

The application sitting beneath the UX queries the knowledge contained in Azure AI Search’s index (as derived from the data sources on the right). It then passes the user’s prompt, together with that retrieved knowledge, to Azure AI services to generate an appropriate response, which is fed back to the user.

This approach is what we call “Retrieval-Augmented Generation,” or “RAG,” which you may have heard of. The name is quite literal: here we are augmenting the generative pre-trained transformer (and now you know what “GPT” stands for) model with data that we have retrieved from the organization’s data estate.
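To make the retrieve-then-generate loop concrete, here is a minimal, illustrative sketch of the pattern. Everything in it is a hypothetical stand-in: in the architecture described above, the index would be built and queried by Azure AI Search, and generation would be performed by a model behind Azure AI services. This is a sketch of the RAG pattern itself, not of any particular SDK.

```python
# A toy "index": in practice this is built by the search service
# from your enterprise data sources (Azure SQL, OneLake, Blob, etc.).
INDEX = [
    "Contoso's travel policy reimburses economy airfare only.",
    "Expense reports must be filed within 30 days of travel.",
    "The IT help desk operates weekdays from 8am to 6pm.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Naive keyword retrieval: rank documents by query-term overlap.
    (A real search service uses full-text and/or vector similarity.)"""
    terms = set(query.lower().split())
    scored = sorted(INDEX, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """The 'augmentation' step: combine the user's prompt with
    the knowledge retrieved from the enterprise data estate."""
    sources = "\n".join(f"- {doc}" for doc in context)
    return (
        "Answer using only the sources below.\n"
        f"Sources:\n{sources}\n"
        f"Question: {query}"
    )

def generate(prompt: str) -> str:
    """Stand-in for the model call (e.g., a chat completion
    against a GPT model hosted in Azure)."""
    return f"[model response grounded in a prompt of {len(prompt)} chars]"

question = "When must expense reports be filed?"
answer = generate(build_prompt(question, retrieve(question)))
```

The essential point survives even in this toy: the model never needs to be retrained on your data. The enterprise knowledge is fetched at query time and supplied alongside the prompt.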

Of course, most organizations don’t have a neat and tidy data landscape where the whole of their organizational data is cleanly tucked away inside Azure SQL, OneLake, and Blob Storage as our diagram suggests. For all the advancements in cloud technology over the last decade, most organizations are home to vast unconsolidated stores of data. Your data lives in OneDrive, spreadsheets, desktops, one-off databases often sitting beneath point solutions, and—if you’re lucky—some of it lives in lakes, warehouses, lakehouses, and properly managed databases.

But data is the essential fuel: without it, AI models can neither be trained nor act on the information that makes them valuable.

Data consolidation (a pillar of AI strategy about which you can learn more in the whitepaper) refers specifically to the consolidation of data from across your cloud estate into storage technologies that can be accessed and used by AI (such as the SQL, lake, and Blob examples cited in the diagram above). This is likely to be achieved through a variety of techniques, including:

- copying;
- one-time migration with the intent to retire the legacy data source;
- data integration (which is, implicitly, ongoing);
- standardization on one or a small number of future-ready transactional data services for app dev; and
- employing “shortcuts” (in Microsoft Fabric), through which data is shortcutted from its source into OneLake (analogous to how a file in OneDrive may be shortcutted from its source to another location).

So, returning to our earlier model, most organizations are likely to land on a data consolidation architecture that looks something like the diagram shown below. Here we see data migrated (dotted line) out of, for example, Access databases and Excel files into Dataverse, on-premises SQL Server into Azure SQL, or network storage into Azure Blob. You’ll then find yourself with a fairly sizable transactional data estate underpinning most of your applications, whose data flows downstream to services such as OneLake. For example, data from Azure SQL is pushed or shortcutted into the lake.

A notional architecture for data consolidation in practice working with the AI model we discussed earlier.

CIOs and enterprise architects need not be experts in the technical mechanics of AI to formulate and execute an effective AI strategy. That said, it is critical that leaders driving this strategy understand this basic concept of how institutional AI—that is to say, AI workloads specific to your organization—both requires and acts on enterprise data. 

Without that data, it’s just AI, unspecific to the organization it is serving. 

I invite you to explore more of this and related topics in the Crafting your Future-Ready Enterprise AI Strategy whitepaper.
