
DIALOGE — I: Integration

Integration is how your solution talks to the world. Design it well and it scales. Design it poorly and it becomes your biggest liability.

TL;DR

Design integrations for resilience, not just the happy path. Use standard connectors where they exist. Always use connection references (never hardcoded connections). Implement error handling on every flow. Use service accounts for credentials. Choose the right pattern: direct connector, Power Automate orchestration, Custom API, Service Bus, or VNet — based on the actual requirements.

Applies To

Audience: Solution Engineer · Platform Lead
BOLT Tiers: Tier 2–4
Maturity: Basic → Advanced
Frameworks: DIALOGE · SHIELD (Inspect — connector review) · BOLT (connector library)


What Integration Means in DIALOGE

Every enterprise solution connects to something. An existing system of record. An external service. A data source in another department. Another internal application. A third-party API.

Integration defines how those connections are designed, secured, governed, and maintained. It is not just about making two systems talk — it is about making them talk reliably, securely, and in a way that does not create fragility.

A solution that works perfectly in isolation but breaks when a connected system changes is not an enterprise solution. It is a liability waiting to be triggered.


Why Integration Decisions Matter

Integration is where most enterprise solutions get complex — and where most failures originate:

  • A flow that calls an external API directly breaks when the API changes its structure
  • A solution using a personal account's connector credentials breaks when the person leaves
  • An integration with no error handling silently fails — data is lost and nobody knows
  • A direct system-to-system connection creates tight coupling — changing one system requires rebuilding the integration
  • A custom connector built without security review exposes internal APIs to ungoverned access

Getting integration right means designing for resilience, observability, and change — not just for the happy path.


Key Decisions Every Builder Must Make

Before building any integration, answer these:

  • What systems need to connect? Map every system your solution touches — internal and external
  • Who initiates? Does your solution pull data from another system, or does another system push data to yours?
  • Real-time or batch? Does the integration need to happen instantly, or is periodic synchronisation acceptable?
  • One-way or bidirectional? Is data flowing in one direction or both? Bidirectional integrations require conflict resolution strategies
  • What happens when it fails? Every integration will fail at some point — what is the retry strategy, the error notification, the fallback?
  • Standard connector or custom? Is there an existing approved connector, or do you need to build one?
  • Direct or intermediary? Should systems connect directly, or via an API gateway, service bus, or middleware layer?
  • Sensitive data in transit? What data crosses the integration boundary and how is it protected?

Maturity Levels

  • Basic: Direct connector, single system, minimal error handling. Works in controlled conditions but fragile under load or change. Suitable for: personal productivity and low-stakes internal solutions with forgiving failure tolerance.
  • Intermediate: Structured connections with error handling, retry logic, and basic monitoring. Alerts on failure. Suitable for: team or departmental solutions where failures are noticeable and recoverable.
  • Advanced: Decoupled, event-driven, observable, resilient, and governed. Designed for change. Failures are caught, logged, and recoverable without data loss. Suitable for: enterprise solutions, regulated workloads, mission-critical processes.

Safe Zone

Solutions using only standard approved connectors to internal systems within the same tenant, with non-sensitive data, can operate at Basic or Intermediate maturity.

Any solution that meets one or more of the following must reach Advanced maturity before Go-Live:
  • Connects to external systems outside the tenant
  • Uses premium or custom connectors
  • Handles sensitive, regulated, or personally identifiable data in transit
  • Is mission-critical — failure has material business impact
  • Is bidirectional — writes back to a system of record


Power Platform Integration Options

  • Standard connectors. Use when: connecting to common Microsoft and third-party services — SharePoint, Teams, Outlook, Salesforce, ServiceNow, and 600+ others. Not when: the connector does not exist or does not expose the capability you need.
  • Premium connectors. Use when: connecting to enterprise services requiring premium licensing — SQL Server, Dataverse, HTTP, Azure services. Not when: the licensing cost is not justified for the use case.
  • Custom connectors. Use when: connecting to internal or external APIs not covered by standard connectors. Not when: a standard connector already exists — avoid building what already exists.
  • On-premises data gateway. Use when: connecting to systems behind your corporate network — on-prem SQL, SharePoint on-prem, file shares. Not when: cloud-to-cloud integrations where no network boundary exists.
  • Power Automate as integration layer. Use when: orchestrating multi-step processes, transformations, and conditional logic between systems. Not when: simple single-step data reads can be handled by a direct connector in-app.
  • Dataverse Web API (OData). Use when: external systems, Azure services, or pro-code components need programmatic access to Dataverse. Not when: in-platform operations can use the native Dataverse connector.
  • Virtual tables. Use when: surfacing external data inside Dataverse without copying it — making external records appear as native Dataverse tables. Not when: real-time external data access latency is unacceptable or when write-back to the external system is not needed.
  • Azure Service Bus / Event Grid. Use when: event-driven, decoupled, high-volume, or asynchronous integration patterns. Not when: simple synchronous integrations where event-driven complexity is not justified.
  • Azure Virtual Network (VNet) integration. Use when: regulated workloads require all traffic within the network perimeter — no public internet traversal. Not when: standard workloads where network perimeter containment is not required.
  • Custom Actions / Custom APIs. Use when: reusable server-side business logic must be callable by apps, flows, and external systems. Not when: single-use logic does not need to be shared or exposed as an endpoint.
  • Power Query / Dataflows. Use when: scheduled ETL from external sources into Dataverse — moderate volume, maker-accessible. Not when: high-volume enterprise ETL where Azure Data Factory is more appropriate.
  • Azure Data Factory. Use when: enterprise-scale ETL and data migration into and out of Dataverse. Not when: moderate-volume scenarios where Dataflows are sufficient.
  • Azure Synapse Link. Use when: continuously exporting Dataverse data for analytics, reporting, and data science workloads. Not when: operational data access is needed — Synapse Link is analytics-only, not a backup.
  • Azure API Management. Use when: centralised API governance, rate limiting, and security policy enforcement are needed for custom APIs. Not when: single-system integrations where API management overhead is not justified.

Deep Dive — Power Platform Integration Patterns

Standard and Premium Connectors

Connectors are the primary integration mechanism in Power Platform. They abstract the complexity of API calls, authentication, and data transformation behind a consistent interface.

How connectors work:
  • Each connector represents a connection to a specific service or system
  • Connectors expose actions (do something — create a record, send an email) and triggers (something happened — a new record was created, a file was uploaded)
  • Connectors handle authentication — OAuth, API key, basic auth — so builders do not need to manage tokens manually

Connector tiers:
  • Standard connectors — included with Microsoft 365 licences. Cover the most common Microsoft and popular third-party services.
  • Premium connectors — require Power Apps or Power Automate per-user/per-flow licences. Cover enterprise services including Dataverse (outside the same environment), SQL Server, HTTP requests, and many third-party enterprise platforms.
  • Custom connectors — built by your organisation to connect to internal or external APIs not covered by standard connectors. Require governance treatment — see the Custom Connectors section below.

Connection references: When building solutions intended for deployment across environments (Dev → Test → Production), always use connection references rather than direct connections. Connection references decouple the connector credentials from the solution logic — allowing connections to be updated per environment without reimporting the solution.

This is a critical ALM practice. Solutions built without connection references cannot be cleanly promoted across environments.

DLP policy interaction: Every connector is subject to DLP policy classification. Connectors in different DLP buckets (Business vs Non-Business) cannot be used in the same app or flow. Understanding which connectors are in which bucket — and why — is essential before designing an integration.

Connection to SHIELD: Connector governance is part of the Inspect pillar — every solution's connector usage must be reviewed against the approved connector catalogue before Go-Live.


Custom Connectors

Custom connectors allow Power Platform to connect to any API that is not covered by the standard connector library. They are powerful — and require careful governance.

When to build a custom connector:
  • Your organisation has an internal API that needs to be accessible from Power Platform
  • A third-party service is not covered by an existing connector
  • You need to expose a subset of an existing API with specific security or transformation requirements

What a custom connector is: A custom connector is an OpenAPI (Swagger) definition that describes an API's endpoints, parameters, authentication mechanism, and response structure. Power Platform uses this definition to generate the connector interface that makers use.

Custom connector governance — why it matters: Custom connectors are not subject to DLP connector classification by default. They require explicit governance treatment:
  • Every custom connector must go through a formal review and approval process before being made available to makers
  • The API it connects to must be assessed for security, data handling, and compliance implications
  • Custom connectors must use secure authentication (OAuth 2.0 preferred; API keys acceptable with key management; basic auth should be avoided)
  • Custom connectors should be owned by a named team — not an individual
  • Custom connectors must be included in the DLP policy framework explicitly

Connection to SHIELD: Custom connector review is part of the Inspect pillar — no custom connector should be available to makers without security sign-off.


On-Premises Data Gateway

The on-premises data gateway enables Power Platform to connect to systems behind your corporate network perimeter — on-premises SQL Server, Oracle, SAP, file shares, and other systems not exposed to the public internet.

How it works: The gateway is a software agent installed on a server within your network. It acts as a bridge — Power Platform sends requests to the gateway service in the cloud, which forwards them to the on-premises system and returns the response. No inbound firewall rules are required.

Gateway governance:
  • Gateways should be installed on dedicated, managed servers — not developer laptops
  • Gateway clusters provide high availability — single gateway instances are a single point of failure for all dependent solutions
  • Gateway access should be controlled — not all makers should be able to use all gateways
  • Gateway credentials should use service accounts, not personal accounts
  • Monitor gateway health and performance — gateway bottlenecks impact every solution that depends on them

When to use the gateway vs VNet integration: The on-premises data gateway is appropriate for most scenarios. For regulated workloads requiring all traffic to remain within the network perimeter without traversing the public internet, consider Power Platform's Azure Virtual Network (VNet) integration instead — which routes traffic through your Azure VNet rather than the gateway service.


Virtual Tables

Virtual tables (formerly known as virtual entities) allow external data to appear as native Dataverse tables — without copying the data into Dataverse.

How they work: A virtual table is backed by a data provider — a connector or custom plugin that translates Dataverse queries into calls to the external system in real time. From the perspective of any app or flow using the table, it looks and behaves like a standard Dataverse table.

When virtual tables are the right choice:
  • The data already lives in a system of record and should not be duplicated
  • Real-time data from the external system is required — not a periodic sync
  • You need to surface external data in model-driven apps or Dataverse-native interfaces without an ETL process
  • The external system supports the query patterns needed (not all systems perform well as virtual table sources)

Virtual table limitations:
  • Performance depends entirely on the external system's response time — slow APIs make slow virtual tables
  • Not all Dataverse features work with virtual tables (advanced find limitations, some rollup/calculated column restrictions)
  • Write-back to the external system requires the data provider to support it
  • Virtual tables cannot be used as lookup targets in some relationship configurations

Virtual tables vs Power Automate sync: If real-time access is not required and the external system's performance is a concern, consider a scheduled Power Automate flow that syncs external data into a real Dataverse table periodically. This trades real-time freshness for performance and full Dataverse feature compatibility.


Power Automate as Integration Layer

Power Automate is the primary orchestration and integration engine in Power Platform. For multi-step processes, transformations, conditional routing, and cross-system workflows, Power Automate is where the integration logic lives.

Integration patterns in Power Automate:

Trigger-action pattern: Something happens in System A → Power Automate detects it → performs actions in System B (and C, D, etc.)
  • New record in Dataverse → create ticket in ServiceNow
  • Email received in Outlook → extract data and create record in Dataverse
  • Form submitted in Power Apps → create record, send approval, notify via Teams

Scheduled sync pattern: On a defined schedule → retrieve records from System A → create/update records in System B
  • Nightly sync of customer data from ERP to Dataverse
  • Hourly refresh of pricing data from external API

Request-response pattern: App or external system calls a Power Automate flow via HTTP → flow performs operations → returns response
  • Canvas app requests a complex calculation → flow processes → returns result
  • External system triggers a Power Automate flow via webhook
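The request-response pattern can be sketched from the caller's side. The following is a minimal, illustrative Python example of an external system posting JSON to a flow's HTTP trigger; the flow URL, the payload schema (orderId, amount, source), and the field names are all hypothetical — your actual URL and schema come from the flow's "When an HTTP request is received" trigger.

```python
import json
import urllib.request

# Hypothetical flow HTTP trigger URL -- the real one is generated when you
# save a flow that starts with "When an HTTP request is received".
FLOW_URL = "https://prod-00.westeurope.logic.azure.com/workflows/abc/triggers/manual/paths/invoke?sig=..."


def build_request(order_id: str, amount: float) -> dict:
    """Build the JSON payload the flow's trigger schema expects (hypothetical schema)."""
    return {"orderId": order_id, "amount": amount, "source": "external-system"}


def call_flow(payload: dict) -> bytes:
    """POST the payload to the flow; the flow's Response action returns the result."""
    req = urllib.request.Request(
        FLOW_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read()
```

In production, the caller should also handle non-2xx responses and timeouts — the same error-handling discipline this section requires of flows applies to anything that calls them.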

Error handling — the most overlooked integration practice:

Every cloud flow that integrates with external systems must implement error handling. The default behaviour when a connector action fails is to stop the flow and mark it as failed — silently, with no notification.

Minimum error handling pattern:
  • Wrap critical actions in a try/catch scope (Configure run after: has failed, has timed out)
  • On failure: log the error to a Dataverse table or SharePoint list, and send an alert via Teams or email to the solution owner
  • Define a retry strategy for transient failures (throttling, temporary unavailability)
  • For critical flows: implement a dead letter queue — failed records held for manual review and reprocessing

Flow run history and monitoring: All cloud flow runs are logged in Power Automate with success/failure status, trigger time, duration, and error details. For critical integrations:
  • Monitor flow run failure rates via the Admin Center or Application Insights
  • Set up automated alerts when failure rates exceed a defined threshold
  • Review flow run history regularly — silent failures can accumulate unnoticed
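The retry-then-dead-letter logic described above can be sketched in plain Python. This is illustrative only: in Power Automate the same behaviour is built from scopes, "Configure run after" settings, and a logging action, but the control flow is identical. The `TransientError` class and delay values are assumptions for the sketch.

```python
import time

class TransientError(Exception):
    """Throttling or temporary unavailability -- worth retrying."""

# Failed records parked for manual review and reprocessing
dead_letter_queue: list[dict] = []


def with_retries(action, record, max_attempts=4, base_delay=1.0):
    """Retry transient failures with exponential backoff; dead-letter on exhaustion."""
    for attempt in range(1, max_attempts + 1):
        try:
            return action(record)
        except TransientError as err:
            if attempt == max_attempts:
                # Do not fail silently: park the record with its error context
                dead_letter_queue.append({"record": record, "error": str(err)})
                return None
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...
```

The essential point is the last branch: a record that exhausts its retries is logged and held, never dropped — the opposite of the default "flow failed, nobody notified" behaviour.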


Event-Driven Integration with Azure Service Bus and Event Grid

For high-volume, asynchronous, or decoupled integration scenarios, Azure Service Bus and Azure Event Grid provide enterprise-grade messaging and eventing infrastructure.

When to consider event-driven patterns:
  • High message volumes that exceed Power Automate throughput limits
  • Integrations where the sender and receiver must be fully decoupled
  • Scenarios where guaranteed message delivery and ordering matter
  • Fan-out patterns — one event triggers multiple downstream consumers

Azure Service Bus in Power Platform context:
  • Power Automate has a native Service Bus connector — flows can send messages to and receive messages from Service Bus queues and topics
  • Dataverse has native Service Bus integration — events (record create, update, delete) can be published to Service Bus automatically via the Plugin Registration Tool
  • Azure Logic Apps can act as the processing layer for complex enterprise integration scenarios where Power Automate throughput is insufficient

Azure Event Grid:
  • Power Platform can publish events to Event Grid via Power Automate or Dataverse webhooks
  • Event Grid subscriptions can trigger Azure Functions, Logic Apps, or other Azure services in response to Power Platform events
  • Useful for cross-platform scenarios where Power Platform is one component in a broader Azure-based architecture


Private Virtual Network (VNet) Integration

For enterprise and regulated workloads, the default Power Platform connectivity model — where traffic traverses Microsoft's shared cloud infrastructure — may not meet network security requirements. Azure Virtual Network integration addresses this.

What it is: VNet integration allows Power Platform to route connector and Dataverse traffic through your organisation's Azure Virtual Network rather than over the public internet. Traffic stays within your network perimeter — it never touches the public internet and is subject to your organisation's network security controls (NSGs, Azure Firewall, private DNS).

Why it matters for enterprise: In regulated industries — financial services, healthcare, government — the requirement that data never traverse the public internet is often non-negotiable. VNet integration makes Power Platform viable for these workloads. Without it, the only alternative is the on-premises data gateway, which has its own operational overhead and throughput constraints.

When to choose VNet integration over the gateway:

  • Regulated workload requiring all traffic within the network perimeter → VNet integration
  • Connecting to on-premises systems via existing network connectivity → VNet integration (if Azure connectivity exists) or the gateway
  • Simple on-premises data access without strict network perimeter requirements → on-premises data gateway
  • Cloud-to-cloud integrations with no network boundary requirements → neither; use a direct connector

Enterprise considerations:
  • VNet integration requires Managed Environments — it is not available in standard environments
  • Each environment with VNet integration requires a dedicated subnet — plan your Azure network address space accordingly
  • VNet integration applies to the environment, not individual connectors — all eligible traffic in a VNet-integrated environment routes through the VNet
  • Network Security Group (NSG) rules must permit Power Platform service tags — work with your network team before enabling

Connection to SHIELD: VNet integration is a primary control in the Lockdown pillar — it is the enterprise mechanism for keeping Power Platform traffic within organisational network boundaries for regulated workloads.


Dataverse Web API — OData Endpoint

Dataverse exposes a fully standards-compliant OData v4 REST API — the Dataverse Web API. This is the enterprise integration endpoint for external systems, Azure services, and pro-code components that need to read from or write to Dataverse programmatically.

What it is: The Dataverse Web API is an HTTP-based REST API that follows the OData v4 standard. Every table, column, relationship, and custom action in Dataverse is accessible via this API. It is the same API that Power Apps, Power Automate, and Dynamics 365 use internally — you are accessing the same data layer, with the same security model applied.

When to use the Web API:

  • External system needs to read or write Dataverse records → yes, use the Web API
  • Azure Function or Logic App integrating with Dataverse → yes
  • Pro-code component (PCF control, portal) accessing Dataverse → yes
  • Power Automate flow accessing Dataverse within the same environment → no; use the native Dataverse connector
  • Canvas app reading Dataverse data → no; use the native Dataverse connector

Enterprise decision guidance:
  • The Web API respects the full Dataverse security model — authentication via Azure AD, authorisation via security roles. There is no way to bypass Dataverse security through the Web API.
  • Always use an application user (Azure AD app registration) for system-to-system Web API calls — never a personal user account
  • The OData query capability is powerful but has performance implications at scale — use $select to limit columns returned, $filter to restrict rows, and $top with paging to cap result size. Unfiltered queries against large tables are a common performance problem
  • For high-volume write operations, use the batch endpoint — bundling multiple operations into a single HTTP request reduces latency and API call overhead
  • Rate limits apply — understand the API limits for your licence tier before designing high-volume integrations
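The $select/$filter/$top guidance above can be made concrete with a small query builder. This is a minimal sketch using only the Python standard library; the environment URL is hypothetical, and `api/data/v9.2` is the current Web API version path.

```python
from urllib.parse import urlencode, quote

DATAVERSE_API = "api/data/v9.2"  # current Dataverse Web API version path


def build_query(env_url, table, select, flt=None, top=None):
    """Compose a Dataverse Web API query URL that limits columns and rows."""
    params = {"$select": ",".join(select)}  # never query without $select
    if flt:
        params["$filter"] = flt
    if top:
        params["$top"] = str(top)
    # quote (not the default quote_plus) so spaces in $filter become %20
    return f"{env_url}/{DATAVERSE_API}/{table}?{urlencode(params, quote_via=quote)}"


url = build_query(
    "https://contoso.crm.dynamics.com",  # hypothetical environment URL
    "accounts",
    select=["name", "accountid"],
    flt="statecode eq 0",
    top=50,
)
```

Making the column list a required parameter is a deliberate guard rail: it forces every caller to think about what it actually needs, which is exactly the discipline that prevents unfiltered queries against large tables.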

Strategic framing: The Web API is what makes Dataverse a true enterprise data platform — not just a Power Platform data store. External systems can integrate with it using standard HTTP tooling. Azure services can read and write to it. The security model travels with the data. This is the right answer when external systems need a reliable, governed, auditable integration point into your Power Platform data layer.


Custom Actions and Custom APIs

As enterprise solutions mature on Power Platform, there is often a need to expose business logic as callable endpoints — operations that external systems, canvas apps, or flows can invoke without needing to know the underlying implementation. Custom Actions and Custom APIs are the Dataverse-native mechanisms for this.

What they are:

Custom Actions (classic): Custom Actions are plugin-backed operations registered on Dataverse that expose business logic as a callable message. They predate Custom APIs and are still widely used. A Custom Action can be called from Power Automate, canvas apps (via the Dataverse connector's "Perform a bound/unbound action" capability), and external systems via the Web API.

Custom APIs (modern — recommended for new development): Custom APIs are the modern replacement for Custom Actions. They provide a cleaner developer experience, better metadata, support for function vs action semantics, and are the strategic direction for Dataverse extensibility. New enterprise solutions should use Custom APIs rather than Custom Actions.

When to use Custom Actions / Custom APIs:

  • Business logic that must execute server-side, close to the data → Custom API
  • Operation that multiple apps and flows need to call consistently → Custom API
  • Complex validation or calculation that should not be duplicated across apps → Custom API
  • Exposing a business capability to external systems via the Web API → Custom API
  • Legacy solution already using Custom Actions → Custom Action (maintain existing; migrate to Custom API on rebuild)
  • Simple single-app logic with no reuse requirement → Power Automate flow or canvas app formula

Enterprise decision guidance:
  • Custom APIs are the right choice when logic needs to be reusable, consistent, and centrally governed — they create a stable contract that multiple consumers can depend on
  • Think of Custom APIs as enterprise microservices within Dataverse — they encapsulate business logic behind a versioned, secured interface
  • Custom APIs execute within the Dataverse transaction — they can participate in atomic operations with record changes, which Power Automate flows cannot
  • Custom APIs require plugin development — this is Solution Engineer territory, not Solution Maker territory. The decision to build a Custom API should involve the platform team
  • Security: Custom APIs respect Dataverse security roles — callers must have the appropriate privileges. Design security explicitly; do not assume calling the API is sufficient authorisation
  • Every Custom API should be documented: what it does, what it accepts, what it returns, who owns it, and what solutions depend on it

Calling Custom APIs from Power Platform:
  • From Power Automate: use the Dataverse connector's "Perform an unbound action" or "Perform a bound action" step
  • From canvas apps: use the Power Apps Dataverse connector with the action invocation capability, or call via Power Automate as an intermediary
  • From external systems: call via the Dataverse Web API endpoint using standard HTTP — the Custom API appears as a standard OData action
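From an external system's perspective, invoking an unbound Custom API is an ordinary OData action POST. The sketch below assembles the request pieces without sending anything; the Custom API name `new_CalculateDiscount`, its `CustomerId` parameter, and the environment URL are hypothetical examples — your own API's unique name and request parameters come from its Dataverse definition.

```python
import json


def build_custom_api_call(env_url: str, api_unique_name: str, parameters: dict):
    """Assemble URL, headers, and body for invoking an unbound Custom API (OData action POST)."""
    url = f"{env_url}/api/data/v9.2/{api_unique_name}"
    headers = {
        "Content-Type": "application/json",
        "OData-Version": "4.0",
        # Token must come from an Azure AD app registration (application user),
        # never from a personal account
        "Authorization": "Bearer <token from your Azure AD app registration>",
    }
    return url, headers, json.dumps(parameters)


# Hypothetical Custom API taking a single CustomerId request parameter
url, headers, body = build_custom_api_call(
    "https://contoso.crm.dynamics.com",
    "new_CalculateDiscount",
    {"CustomerId": "CUST-001"},
)
```

The caller needs no knowledge of the plugin behind the endpoint — that encapsulation is exactly the "stable contract" the guidance above describes.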

Strategic framing: Custom APIs represent the maturity point where a Power Platform solution stops being a collection of apps and flows and becomes a governed business platform — with stable, documented, reusable business logic that the rest of the enterprise can depend on. For enterprise architects evaluating Power Platform's extensibility, Custom APIs are the answer to "how do we build something that doesn't become a black box?"


ETL — Extract, Transform, Load

Enterprise Power Platform implementations frequently involve significant data movement — migrating data from legacy systems, feeding Dataverse from enterprise data lakes, exporting Dataverse data for analytics, and synchronising data across platforms. ETL (Extract, Transform, Load) is the discipline that governs this data movement at scale.

The ETL decision — when real-time integration is not the answer: Not every data movement scenario requires real-time integration. ETL is the right pattern when:
  • Data volumes are too large for event-driven connector-based integration
  • Source systems do not expose real-time APIs or webhooks
  • The consuming system needs a transformed, cleansed, or aggregated view of source data
  • Analytics workloads need a copy of operational data without impacting source system performance
  • Data migration is required — moving historical data from a legacy system into Dataverse as a one-time or phased operation

Power Platform ETL options:

Power Query and Dataflows: Power Query is the low-code ETL engine embedded in Power Platform. Dataflows allow makers and Solution Engineers to define data transformation logic — connecting to source systems, applying column mappings, filtering, and transformations — and loading the result into Dataverse tables on a scheduled basis.

Dataflows are the right choice for:
  • Regular scheduled data loads from external sources into Dataverse
  • Data cleansing and standardisation before loading into the data model
  • Maker-accessible ETL without requiring Azure Data Factory expertise
  • Moderate data volumes — dataflows are not designed for enterprise-scale bulk operations

Azure Data Factory: Azure Data Factory is Microsoft's enterprise ETL and data integration service — designed for large-scale, complex data movement across on-premises and cloud systems. For organisations with existing Azure Data Factory pipelines, Dataverse is a supported sink and source — data can be loaded into and extracted from Dataverse at scale via the Dataverse connector for ADF.

ADF is the right choice for:
  • Enterprise-scale data migration — millions of records from legacy systems into Dataverse
  • Complex multi-source ETL pipelines feeding Dataverse from data warehouses, data lakes, and on-premises systems
  • Organisations with existing Azure Data Factory investment and expertise
  • High-volume scheduled data synchronisation where Dataflow performance is insufficient

Azure Synapse Link for Dataverse: Azure Synapse Link is the strategic data export capability for Dataverse — providing continuous, incremental export of Dataverse data to Azure Data Lake Storage and Azure Synapse Analytics for analytics workloads.

Synapse Link is the right choice for:
  • Analytics and reporting workloads that should not run against the operational Dataverse environment
  • Power BI reporting against large Dataverse datasets where DirectQuery performance is insufficient
  • Data science and machine learning workloads requiring access to Dataverse data in Azure
  • Compliance and archival scenarios requiring long-term retention of Dataverse data outside the platform

The enterprise architectural principle: Synapse Link is not a backup mechanism — it is an analytics integration. Operational data lives in Dataverse; analytical workloads run against the Synapse Link export. This separation protects operational system performance from analytical query load — a critical architectural concern at enterprise scale.

Power Automate for data migration: Power Automate flows can be used for data migration — reading from a source system and creating or updating Dataverse records. This pattern is appropriate for low-to-moderate volume migrations where the transformation logic is simple and the migration can be executed incrementally. For high-volume migrations, the Dataverse Web API batch endpoint or Azure Data Factory provides significantly better throughput.

Data migration governance: Data migration is a high-risk operation — incorrect data loaded into a production Dataverse environment can corrupt the operational data model and affect live users. Enterprise data migration requirements:
  • Migration executed and validated in a non-production environment before production
  • Source data quality assessed and cleansed before migration
  • Rollback plan defined — what happens if the migration produces incorrect results in production
  • Record count and key field validation after migration — verify what was loaded matches what was expected
  • Migration history logged — what was migrated, when, by whom, from what source
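The post-migration validation requirement (record counts and key fields) is simple to automate. A minimal sketch, assuming source and target rows have been exported as lists of dictionaries sharing a key field; the field name "id" and the sample data are illustrative.

```python
def validate_migration(source_rows, target_rows, key="id"):
    """Post-migration check: row counts match and every source key landed in the target."""
    source_keys = {row[key] for row in source_rows}
    target_keys = {row[key] for row in target_rows}
    return {
        "count_match": len(source_rows) == len(target_rows),
        "missing_in_target": sorted(source_keys - target_keys),
        "unexpected_in_target": sorted(target_keys - source_keys),
    }


# Illustrative example: one legacy record failed to load
legacy = [{"id": "A1"}, {"id": "A2"}, {"id": "A3"}]
loaded = [{"id": "A1"}, {"id": "A3"}]
report = validate_migration(legacy, loaded)
```

A report like this, stored alongside the migration history log, gives the rollback decision an evidence base rather than a guess.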


Common Mistakes

  • No error handling — flows that fail silently, losing data with no notification. The most common and most damaging integration mistake.
  • Personal account credentials on connectors — integrations break when the person leaves. Always use service accounts and application users.
  • Direct connections without connection references — solutions that cannot be cleanly promoted across environments because connections are hardcoded.
  • Building custom connectors without governance — custom connectors made available to all makers without security review, bypassing DLP controls.
  • Tight coupling — direct system-to-system connections with no intermediary. One API change breaks everything that depends on it.
  • No retry logic — integrations that fail permanently on transient errors (throttling, temporary unavailability) that a simple retry would have resolved.
  • Ignoring connector DLP classification — building flows that mix Business and Non-Business connectors, then discovering at deployment that DLP policy blocks them.
  • Using the gateway on a developer laptop — gateway installed on a personal machine, creating a single point of failure and a governance gap.
  • Not monitoring flow run history — critical integrations running unmonitored, with failures accumulating unnoticed for days or weeks.
  • Duplicating data instead of integrating — copying records from a system of record into Dataverse and then managing synchronisation complexity. If the data already exists somewhere authoritative, integrate with it.

Integration Readiness Checklist

Design
  • [ ] All connected systems mapped — internal and external
  • [ ] Data flow direction defined — push, pull, or bidirectional
  • [ ] Real-time vs batch decision made and justified
  • [ ] Failure scenario defined — what happens when the integration fails?
  • [ ] Retry and error handling strategy designed

Connectors and Credentials
  • [ ] Standard connector used where available — no unnecessary custom connectors
  • [ ] Connection references used for all connectors — no hardcoded connections
  • [ ] Service accounts or application users used — no personal account credentials
  • [ ] Custom connectors reviewed and approved through the governance process
  • [ ] All connectors verified against DLP policy — no Business/Non-Business mixing

On-Premises Gateway (if applicable)
  • [ ] Gateway installed on a managed, dedicated server — not a developer machine
  • [ ] Gateway cluster configured for high availability
  • [ ] Gateway access controlled — not open to all makers
  • [ ] Gateway health monitoring configured

Error Handling and Monitoring
  • [ ] Try/catch error handling implemented on all critical flow actions
  • [ ] Failure alerts configured — Teams or email notification to solution owner
  • [ ] Failed record logging implemented — dead letter queue for manual review
  • [ ] Flow run history monitoring configured
  • [ ] Application Insights connected for production flows (recommended for enterprise)

Network Security (if applicable)
  • [ ] VNet integration assessed for regulated workloads — decision documented
  • [ ] If VNet integration used: dedicated subnet allocated, NSG rules confirmed with network team
  • [ ] If on-premises gateway used: installed on managed server, cluster configured, access controlled

Dataverse Web API (if applicable)
  • [ ] Application user created with least-privilege security role for Web API access
  • [ ] OData query patterns reviewed — $select, $filter, $top applied to avoid unfiltered queries
  • [ ] Batch endpoint used for high-volume write operations
  • [ ] API rate limits understood and designed for

Custom Actions / Custom APIs (if applicable)
  • [ ] Custom API vs Custom Action decision made — Custom API preferred for new development
  • [ ] Business logic documented — what it does, accepts, returns, who owns it
  • [ ] Security role designed for API caller — not open to all users
  • [ ] Dependent solutions documented — who calls this API and under what conditions

Security and Compliance
  • [ ] Sensitive data in transit identified and protected
  • [ ] Custom connectors security-reviewed and approved
  • [ ] Integration included in solution's Inspect review (SHIELD)
  • [ ] Data residency verified for cross-geography integrations

ETL and Data Movement (if applicable)
  • [ ] ETL pattern selected — Dataflows, Azure Data Factory, or Synapse Link — decision justified
  • [ ] Dataflows: scheduled refresh configured, transformation logic documented, row counts validated
  • [ ] Azure Data Factory: pipeline tested in non-production, throughput validated against volume requirements
  • [ ] Azure Synapse Link: analytics workload confirmed as the use case — not used as backup or operational access
  • [ ] Data migration: source data quality assessed, non-production validation completed, rollback plan defined
  • [ ] Migration validation: record counts and key fields verified post-migration
  • [ ] Migration history logged — what was migrated, when, by whom, from what source


Part of the DIALOGE Framework — powerplatform.wiki
Last updated: March 2026 · Last reviewed: March 2026