Five strategies to integrate Power Platform in your data platform architecture

What follows has been excerpted and adapted from my recent white paper, Power Platform in a Modern Data Architecture. The paper works through all five approaches; three of them are explored in depth below. So take a read and enter the world of all things data architecture.

For all the talk about Power Platform as a “low-code” tool (and this is the last time I will use the word), for all the attention given to how supposedly easily it allows non-technical users to create simple apps, Power Platform’s greatest value lies not in the app, but in the data the app collects or serves back to its users. Power Platform isn’t an app phenomenon. It’s a data phenomenon.

The white paper takes on the question of how Power Platform integrates with Azure data services, including Microsoft Fabric, outlining five patterns that organizations ought to mix and match to extract Power Platform’s greatest value. This is not a technical manual; for that, the most up-to-date guidance lives in Microsoft’s technical documentation. Rather, the paper’s goal is threefold: to guide

  • CIOs and other decision makers maximizing the value of Power Platform development as part of their modern data platform;​

  • Enterprise Architects architecting across their organization’s cloud ecosystem, seeking to accelerate development and derive the benefits of composable solutions with Power Platform integrated to the organization’s data estate; and​

  • Cloud Solution Architects (CSAs) architecting solutions that require the integration of Power Platform and their modern data platform.

Foundational Considerations

There’s an instinct, when comparing architectural models, to think that we ought to pick one and run with it. Disabuse yourself of that notion now. In creating the approaches we’ll discuss on the pages that follow, our goal was not to lead IT organizations towards selecting one model to rule them all. Rather, it was to dive deeper into the architectural approaches required to enable a variety of scenarios where hydrating Power Platform solutions with enterprise data (and vice versa) is important.

The strategy lies in the ways that each organization chooses to mix, match, and combine approaches to integrating Power Platform with its broader data platform.

Here we considered several factors:​

  • Performance: How does data integration built to this standard perform, particularly at scale and across multiple workloads?​

  • Flexibility: How flexible is the data integration pattern when called on to absorb unforeseen or changing requirements for the associated workloads and the integration components themselves?​

  • Maintainability: How robust are the tools available to monitor, manage, maintain, govern, and secure the data transacted as part of any given integration?​

  • Data as an asset: To what extent can different types of value be extracted from data transacted through any given data integration pattern?​

  • Workload criticality: On a scale from workloads limited to, or focused on, individual and team productivity up to workloads that could be defined as “core business systems”, how critical are the workloads that we would generally entrust to any given integration pattern?

  • Initial cost of implementation: How complex and, accordingly, how high is the initial cost of implementation for any given integration pattern? You will see in the pages that follow that we begin with data integration via out-of-the-box “data connectors” and end with complete integration of Power Platform to a modern data platform architecture.​

It's difficult to draw a straight line through all these considerations, but these lenses allowed us to create five patterns that can be mixed, matched, and combined within a typical cloud ecosystem where Power Platform is a major component.

Note in the diagram below that, in general, performance, flexibility, maintainability, data as an asset, workload criticality, and initial cost of implementation increase as we move from left to right in our spectrum of integration patterns. 

These patterns for data integration between Power Platform and Azure data services generally increase in performance, flexibility, maintainability, data as an asset, workload criticality, and initial cost of implementation moving left to right.

Let’s quickly summarize these patterns below.​

  • Point-to-Point integration, what many think of when architecting data integrations for Power Platform, is best suited for “compact workload” scenarios where data moves from a single source to a single, tightly aligned workload or application. Illustrative examples include scenarios where an app is built to extend a larger primary workload: think of a Power App on a tablet used at a recruiting fair to register job candidate interest, which then feeds data back to an HRIS or applicant tracking system (ATS), or a Power App on a phone used to log maintenance or safety checks that are then fed back to an asset management system. A salient feature of most Point-to-Point scenarios is that Power Platform rarely acts as the system of record and often does not hold the data directly, preferring instead to house the data permanently with the primary workload.

  • Data Consolidation, whereby data is migrated one time or integrated on an ongoing basis to Dataverse, upon which Power Platform workloads are built. This approach shifts much of the data processing burden away from the application at runtime (common in Point-to-Point) and instead relies on integration services and on Microsoft Dataverse to do the heavy lifting of data processing, calculations, and the like. It’s common when the workload in question is bringing together hitherto disconnected “quasi apps” such as spreadsheets or homegrown Access databases, or when the workload requires data hydrated from disparate core business systems; think of an app that facilitates the issuing of equipment to new hires using a combination of data from the HRIS and an asset management system.

  • Master Data Node involves the connection of Dataverse environments to enterprise Master Data Management (MDM) tools or some other operational data store as peers to other mastered data. In other words, let’s say that an organization has used an MDM tool such as Profisee or CluedIn to “master”, that is, to create a “golden record” of customer, workforce, and product data housed in its CRM, HRIS, and ERP solutions respectively… and we want to build custom Power Platform solutions that consume or write back this data. Our Master Data Node pattern suggests that we connect our Power Platform environment(s) and the Dataverse instances they contain—presumably using the integration patterns you’ve already established for your other core business systems—to MDM as a peer of CRM, HRIS, and ERP.

  • Data Landing Zone scales the Master Data Node pattern by integrating data between MDM and “Data Landing Zone(s)” built with Dataverse, which in turn distribute that data to child Dataverse environments. This approach mitigates the risk that the Master Data Node model grows to a point where many Power Platform environments attempt to transact directly with MDM, essentially creating a scenario where every Power Platform solution is equal in its criticality to your core business systems. Rather, the Data Landing Zone makes a subset of mastered data available to Power Platform solutions without the need for every workload to be constantly transacting with the MDM solution itself.​

  • Data Distribution integrates Power Platform workloads to downstream distribution scenarios such as analytical workloads, and distribution via API, enterprise search, retrieval-augmented generation (RAG) for AI-infused workloads, etc. Essentially the most sophisticated and seamless integration of Power Platform to the modern data platform, Data Distribution plugs Power Platform into what in ecosystem-oriented architecture we call a “Data Distribution Neighborhood,” making that data available for a wide range of repurposing and value extraction. We’ll dig into this in more detail later, but I should note here in the summary that whilst our first four patterns deal primarily with transacting data between different workloads, Data Distribution deals primarily with making data in Power Platform available for value extraction alongside data that might exist in any other technology (e.g., third party data services, Cosmos DB, Azure SQL, etc.) deployed within your cloud ecosystem.​

The white paper pages that follow explore each of these patterns through the presentation of a reference or sample architecture, a discussion of advantages and disadvantages, and a summary of representative techniques and technologies that can be employed as part of each pattern. For conciseness in the current blog format, we will discuss three here that are evolutions of one another: Data Consolidation, Master Data Node, and Data Landing Zone.

Data Consolidation

In Data Consolidation scenarios, data is migrated one time or integrated on an ongoing basis to Microsoft Dataverse, the core, premium data service upon which more sophisticated Power Platform workloads are built. This approach shifts much of the data processing burden away from the application at runtime (common in Point-to-Point) and instead relies on integration services and on Dataverse itself to do the heavy lifting of data processing, calculations, and the like. It’s common in scenarios where:

  • The workload in question is bringing together hitherto disconnected “quasi apps” such as spreadsheets or homegrown Access databases;​

  • The workload requires data hydrated from disparate core business systems; think of an app that facilitates the issuing of equipment to new hires using a combination of data from the HRIS and an asset management system;

  • The Power Platform workload is itself a core business system, and our goal is to consolidate its data into a single data service rather than forcing our tier 1 system to be beholden to non-native data services.

Reference architecture for a typical “data consolidation” scenario. Note that one-time migration of data is indicated by a dotted line, and ongoing integration of data is indicated by a solid line.

On the left we have our “Power Platform Solutions”—the apps, flows, portals, bots, BI components, etc. that have been built by and deployed atop Power Platform. These solutions are sourcing their data natively from Dataverse, which we can think of as our single source of truth for data transacted by our Power Platform solutions. So, rather than hydrating the solutions themselves with data from disparate sources, we’re using various techniques to consolidate that data to Dataverse. We’ve identified several scenarios for illustrative purposes below.

  • Dynamics 365 Finance and Operations (Microsoft’s ERP solution colloquially known as “F&O”) sits at twelve o’clock in the top center of the diagram. Dynamics is using a Microsoft technology called “Dual Write” to write data directly to Dataverse.

  • Moving counterclockwise in the diagram, we see three legacy data stores whose data might be migrated once to Dataverse at the initial deployment of the Power Platform solution, with the assumption that the legacy stores themselves would then be retired. Among them:

    • Microsoft Access, whose data can be migrated to Dataverse using a tool that Microsoft has purpose-built for the Access to Dataverse transition;

    • SQL Server, whose data we might migrate one time into Dataverse using Azure Data Factory;

    • Microsoft Excel, for which Microsoft provides a tool to directly import data from an Excel workbook to Dataverse.

  • Finally, we have two cloud services representative of the many that we might choose to integrate to Dataverse on an ongoing basis. This is to say that—like the data in Dynamics 365 F&O—we may not want to simply migrate once and retire the legacy system; rather, there are many scenarios where we want to consolidate data from data services that will live on after deploying the Power Platform solution(s). In this case we are using Azure Data Factory to integrate data from Azure SQL (perhaps sitting beneath a custom web app) and the third-party Workday human resources information system.
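To make the ongoing-integration half of this pattern a little more concrete, below is a minimal sketch of a code-first write into Dataverse via its Web API, the kind of call a custom integration job might make where a service such as Azure Data Factory isn’t used. The tenant, app registration, environment URL, and the contoso_asset table with its contoso_assettag alternate key are hypothetical placeholders, not prescriptions from the white paper.

```python
import requests
import msal

# All identifiers below are hypothetical placeholders.
TENANT_ID = "<your-tenant-id>"
CLIENT_ID = "<your-app-registration-client-id>"
CLIENT_SECRET = "<your-app-secret>"
ENV_URL = "https://contoso.crm.dynamics.com"  # Dataverse environment URL

# Acquire an app-only (client credentials) token scoped to the Dataverse environment.
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=[f"{ENV_URL}/.default"])["access_token"]

headers = {
    "Authorization": f"Bearer {token}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Content-Type": "application/json",
}

# Upsert a row in a hypothetical custom table (contoso_asset) addressed by an
# alternate key, so repeated integration runs update rather than duplicate the record.
asset_tag = "LAPTOP-0042"
row = {"contoso_name": "Surface Laptop 5", "contoso_status": 100000001}

resp = requests.patch(
    f"{ENV_URL}/api/data/v9.2/contoso_assets(contoso_assettag='{asset_tag}')",
    headers=headers,
    json=row,
)
resp.raise_for_status()
print(f"Upserted asset {asset_tag}: HTTP {resp.status_code}")
```

Whether the write is made by Azure Data Factory, Dual Write, or custom code, the important architectural point is the same: the consolidated data lands in Dataverse, which then serves it natively to the Power Platform solutions built on top.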

A distinct advantage of consolidation and the other models that will follow versus Point-to-Point is that logic, data processing, calculations, etc. are offloaded to Dataverse—which does this very well—rather than being processed in real time as can happen in Point-to-Point scenarios.

Dataverse further simplifies application development by serving as a data orchestration layer and by enabling solutions to use Dataverse-dependent components.

Master Data Node

Our third model begins to solve two unresolved issues:

  • Scalability issues introduced by widespread use of ongoing data integrations in consolidation scenarios; and

  • The need for many Power Platform solutions to be hydrated with mastered enterprise data, which is to say, data domains mastered in a “Master Data Management” (MDM) solution such as an organization’s master customer or employee data.

Master Data Node involves the connection of Dataverse environments to enterprise MDM tools or an operational data store as peers to other mastered data. Let’s say that an organization has used an MDM tool such as Profisee or CluedIn to “master”, that is, to create a “golden record” of customer, workforce, and product data housed in its CRM, HRIS, and ERP solutions respectively… and we want to build custom Power Platform solutions that consume or write back this data. Our Master Data Node pattern suggests that we connect our Power Platform environment(s) and the Dataverse instances they contain to MDM as a peer of (for example) CRM, HRIS, and ERP. This pattern presumes that you’ve already established or are in the process of establishing standards-based integration patterns between MDM and the other mastered systems of record, and that these patterns can be readily applied to any given Dataverse environment.

Reference architecture time!

Reference architecture for a typical “master data node” scenario wherein each paired Power Platform environment (and its Dataverse instance) shown on the right is a peer node to the other systems shown on the left.

In this architecture, we see an enterprise MDM solution represented by the generic icon in the bottom center. This is our “single source of truth” for data transacted across multiple core business systems within the organization. It’s integrated to the broader cloud ecosystem via an “Integration Neighborhood” that we can broadly summarize as the collection of data integration services used repeatedly throughout the ecosystem, in our architecture represented by (left to right) Event Grid, Service Bus, Logic Apps, Azure Functions, and Azure Data Factory.

Along the left-hand side we see five core business systems that are illustrative of some upon which many organizations rely:

  • A core business system built using Azure SQL as its data service;

  • A core business system built using Cosmos DB as its data service;

  • Dynamics 365 F&O for enterprise resource planning (ERP);

  • Workday as the HR Information System (HRIS);

  • Salesforce for Customer Relationship Management (CRM).

Don’t take this literally, and certainly do not interpret the mention of a particular core system as an endorsement. We’re simply creating a representative profile of core business systems that might exist in a real-world cloud ecosystem. You might run ERP with SAP, or you might run CRM with Dynamics 365. The particulars are not the point, here.

Regardless, these core systems have been connected to MDM so that they can hydrate it with, or consume, data that is mastered in the MDM solution.

Likewise, on the right side of the diagram we see three Dataverse environments connected to MDM via the Integration Neighborhood, thereby creating the mechanism for Power Platform solutions deployed to those environments to contribute their data to the mastered domains and / or to consume data from the mastered domains. In this pattern, each Power Platform environment is a peer of the core business systems on the left.
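As one sketch of what this peer relationship might look like in code, the hypothetical Azure Function below (Python programming model v2) subscribes to a Service Bus topic on which the MDM solution publishes changes to a mastered customer record, and would forward each change to the Dataverse environment acting as a peer node. The topic, subscription, and payload shape are illustrative assumptions, not a prescribed design.

```python
import json
import logging

import azure.functions as func

app = func.FunctionApp()

# Hypothetical topic and subscription names; "ServiceBusConnection" is an app setting
# holding the Service Bus connection string.
@app.service_bus_topic_trigger(
    arg_name="msg",
    topic_name="mdm-customer-changes",
    subscription_name="powerplatform-peer-node",
    connection="ServiceBusConnection",
)
def on_master_customer_changed(msg: func.ServiceBusMessage) -> None:
    """Receive a 'golden record' change published by the MDM solution and forward it
    to the Dataverse environment acting as a peer node."""
    change = json.loads(msg.get_body().decode("utf-8"))
    logging.info("Master customer %s changed", change.get("customerId"))

    # A real implementation would upsert the record into Dataverse here, for example
    # via the Web API call sketched in the Data Consolidation section above.
```

The same subscription-based approach generalizes to the other core systems on the left of the diagram; what makes Dataverse a peer is simply that it participates in the Integration Neighborhood on the same terms.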

Finally, we have Microsoft Purview in the top center, depicted above the Integration Neighborhood. Purview is an essential technology as you scale your data platform, with or without Power Platform. Microsoft is investing heavily in this technology as its principal service for data governance, lineage, cataloging, etc. It is connected to each compatible data source—including favorites such as Dataverse and Azure SQL—to which you wish to apply enterprise data governance policies. I absolutely recommend deploying Purview in connection to Power Platform regardless of which data integration patterns you employ, but this is an essential requirement by the time you’ve reached the sophistication of our Master Data Node pattern. So, I’m elaborating on it here.

This is also an ideal point at which to suggest that the various integration patterns discussed in this paper can (and should) be mixed and matched. Set out in the white paper is an architecture combining the Master Data Node and Data Consolidation patterns. Have a read to learn more.

Data Landing Zone

We can further scale the Master Data Node pattern when we introduce the “Data Landing Zone”. Here we are integrating data between MDM and “Data Landing Zone(s)” (DLZ) built with Dataverse, which in turn distributes that data to child Dataverse environments. The DLZ becomes an intermediary that regulates the flow of data between MDM and downstream Power Platform environments containing—often—solutions that are less mission critical than our core business systems (e.g., “Tier 2 – Important” or “Tier 3 – Productivity” applications). ​

This approach mitigates the risk that the Master Data Node model grows to a point where many Power Platform environments attempt to transact directly with MDM, which would essentially create a scenario where every Power Platform solution is equal in its criticality to your core business systems. Rather, the DLZ makes a subset of mastered data available to Power Platform solutions without the need for every workload to be regularly transacting with the MDM solution itself.​

Reference architecture for a typical “data landing zone” scenario wherein a “one ring to rule them all” Dataverse environment intermediates the integration between child Dataverse “nodes” and master data itself.

The DLZ promotes scale by reducing the burden of many Dataverse “nodes” connecting to the Integration Neighborhood and the master data management solution. It also enables fusion teams (i.e., teams that combine professional engineers with “citizen developer” business users to create solutions) to better work with enterprise data using out-of-the-box integration tools such as virtual tables or dataflows.

This pattern can also improve data security within the closed circuit of Power Platform environments connected to the DLZ. We’ve said before that Dataverse includes an extraordinarily robust security model based on role-based access control (RBAC), along with rich data modelling capabilities; neither is generally honored outside the logical boundaries of Dataverse, because the comparable capabilities in most other data services are not nearly so robust. By staging a subset of master data in the Dataverse-based DLZ, we’re able to implement a data model and apply RBAC closer to the root, which can reduce friction in the child environments. We’d achieve this by creating an “institutional data model” (IDM), essentially a base Power Platform solution that contains a data and security model common to many organization-specific use cases, deploying the IDM in the Data Landing Zone, and then deploying the same IDM in child environments.
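To illustrate the closed circuit in practice, here is a minimal, hypothetical sketch of a child workload (or its integration job) reading a slice of staged master data from the DLZ through the Dataverse Web API; the security roles granted in the DLZ determine which rows and columns the caller actually sees. The environment URL and the contoso_mastercustomer table are placeholder names, and the access token is assumed to be acquired as in the earlier Data Consolidation sketch.

```python
import requests

# Hypothetical Data Landing Zone environment and table names.
DLZ_URL = "https://contoso-dlz.crm.dynamics.com"
token = "<access token acquired as in the Data Consolidation sketch>"

headers = {
    "Authorization": f"Bearer {token}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
}

# Query active rows from the staged master customer table; Dataverse RBAC in the DLZ
# decides which rows and columns this caller is permitted to read.
resp = requests.get(
    f"{DLZ_URL}/api/data/v9.2/contoso_mastercustomers",
    params={
        "$select": "contoso_name,contoso_customernumber",
        "$filter": "statecode eq 0",
    },
    headers=headers,
)
resp.raise_for_status()

for row in resp.json()["value"]:
    print(row["contoso_customernumber"], row["contoso_name"])
```

Because the child environments and the DLZ share the same institutional data model, queries like this one look identical wherever they run, which is much of what makes the pattern attractive to fusion teams.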

Explore these strategies more in-depth, and read further about the use of Point-to-Point and Data Distribution patterns, in the Power Platform in a Modern Data Architecture white paper.
