
SHIELD — I: Inspect

Not every solution needs a full review. Every solution must be assessed.

TL;DR

Not every solution needs a full security review — but every solution must be assessed. Three modes: Safe Zone auto-pass (within pre-approved boundaries), Workload Pattern approval (review once, approve many), Full Security Review (new patterns, regulated data, high risk). Solution Checker is a mandatory automated gate. Inspect scales security without creating bottlenecks.

Applies To

Audience: Application Security Engineer · Platform Lead · Solution Engineer
Character: Risk-based gate (fires at lifecycle points, not continuously)
Frameworks: SHIELD · DIALOGE (Go-Live — deployment gate) · BOLT (Tier boundaries)


What Inspect Means in SHIELD

Inspect is SHIELD's most distinctive pillar — and the one most frequently misunderstood. It is not ongoing monitoring. It is not a bureaucratic approval process that slows down every maker. It is a risk-based security review model that applies to solutions at defined points in their lifecycle — calibrated to the actual risk profile of each solution.

The distinction matters. A governance framework that applies a full security review to every canvas app and every flow regardless of complexity or risk creates bottlenecks that makers work around — defeating the purpose entirely. A governance framework with no security review at all allows ungoverned solutions to accumulate in production — creating the compliance and security failures that audits uncover.

Inspect finds the middle path: a structured, risk-calibrated assessment that applies the right level of scrutiny to the right solutions at the right time.


Why Inspect Decisions Matter

Application security failures in Power Platform are rarely technical vulnerabilities. They are design decisions that were never reviewed:

  • A flow that reads sensitive HR data and writes it to a SharePoint list visible to the whole organisation — because nobody reviewed the data handling before deployment
  • A canvas app using a personal account connection to a financial system — because nobody reviewed the authentication pattern before go-live
  • A custom connector exposing an internal API without authentication — because it was "just for internal use" and never went through a security review
  • A solution passing unvalidated user input directly to a database query — because the maker was not aware of injection risks

None of these require sophisticated attack techniques to exploit. They are the consequence of solutions reaching production without a security lens applied to their design and implementation.

Inspect applies that lens — consistently, proportionately, and without creating unnecessary friction for low-risk solutions.


The Core Questions Inspect Answers

  • Does this solution stay within approved security boundaries?
  • Are the connectors it uses approved and appropriately governed?
  • Is sensitive data handled correctly throughout the solution?
  • Are connections using service accounts rather than personal credentials?
  • Has the solution passed automated quality and security checks?
  • Has a human reviewer assessed the security implications before production deployment?
  • Is this solution part of an approved workload pattern — or does it require individual review?

The Three Modes of Inspect

Inspect operates in three modes — selected based on the solution's risk profile. The mode determines how much scrutiny the solution receives before it can be deployed to production.


Mode 1 — Safe Zone: Auto-Pass

Solutions that remain within pre-approved boundaries pass Inspect automatically. No manual review is required. The Safe Zone is defined by the platform team — a set of conditions that, when all are met, indicate that the solution's security risk is within acceptable parameters.

Safe Zone criteria — a solution auto-passes when all of the following are true:

  • Uses only connectors from the approved connector catalogue (no custom connectors, no connectors outside the approved list)
  • Operates exclusively on data classified as Public or Internal — no Confidential or Regulated data
  • Built within an approved solution pattern (see Mode 2 below)
  • Deployed to an environment with Managed Environments enabled, monitoring configured, and DLP policy applied
  • No external integrations outside approved endpoints
  • No anonymous or unauthenticated access points
  • Solution Checker passes at the required severity threshold

The governance implication: The Safe Zone does not mean "unreviewed" — it means "reviewed at the policy level rather than the solution level." The platform team's design of the Safe Zone criteria is itself a security decision. The Safe Zone criteria should be reviewed annually and whenever the threat landscape or compliance requirements change.

Who defines the Safe Zone: The platform team — typically the CoE Lead or Security Architect — defines the Safe Zone criteria in collaboration with the CISO or security function. The criteria are published so makers understand exactly what keeps them within the Safe Zone, reducing the friction of building compliant solutions.


Mode 2 — Workload Pattern Approval: Review Once, Approve Many

A workload pattern is a defined class of solutions — sharing the same connector profile, data classification, architecture pattern, and deployment target — that has been security-reviewed as a pattern rather than as individual solutions.

Example workload patterns:

  • "Canvas apps using SharePoint and Teams connectors for internal team productivity — non-sensitive data, no external users"
  • "Power Automate flows processing invoice data from the finance system to Dataverse — Confidential classification, approved finance connector"
  • "Model-driven apps accessing the CRM Dataverse environment — sales team users, standard security roles"

How pattern approval works:

  1. The platform team or a Solution Architect defines the pattern — documenting the connector profile, data classification, architecture, and security controls
  2. The security function reviews and approves the pattern — documenting the approval, the conditions, and any restrictions
  3. Individual solutions built within the approved pattern auto-pass Inspect — no individual review required
  4. The pattern is reviewed annually and when material changes occur to the connectors, data, or architecture it covers

The enterprise value: Pattern approval is the mechanism that allows high-velocity, lower-risk solution development to proceed at scale without creating a review bottleneck. Once the "SharePoint and Teams for internal productivity" pattern is approved, every solution matching that pattern bypasses individual review — the security work was done once, at the pattern level.

Pattern boundaries: Solutions must stay strictly within the approved pattern to auto-pass. A solution that adds a new connector not covered by the pattern, accesses a more sensitive data classification, or introduces an architecture element not included in the pattern requires re-evaluation — either an update to the pattern definition (requiring re-approval) or elevation to Mode 3 (Full Security Review).
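
The boundary test itself reduces to subset and ordering checks. A minimal sketch, assuming a simple dictionary representation of patterns and solutions (the field names and sensitivity ordering are assumptions for illustration, not SHIELD-defined identifiers):

```python
# Illustrative sensitivity ordering: a solution may handle data at or below
# the classification the pattern was approved for, never above it.
SENSITIVITY = {"Public": 0, "Internal": 1, "Confidential": 2, "Regulated": 3}

def within_pattern(solution: dict, pattern: dict) -> bool:
    """A solution auto-passes under a pattern only if every boundary holds."""
    return (
        set(solution["connectors"]) <= set(pattern["connectors"])  # no new connectors
        and SENSITIVITY[solution["classification"]] <= SENSITIVITY[pattern["classification"]]
        and set(solution["architecture_elements"]) <= set(pattern["architecture_elements"])
    )

pattern = {
    "connectors": ["SharePoint", "Microsoft Teams"],
    "classification": "Internal",
    "architecture_elements": ["canvas-app", "cloud-flow"],
}
solution = {
    "connectors": ["SharePoint"],
    "classification": "Internal",
    "architecture_elements": ["canvas-app"],
}
```

Any `False` result means re-evaluation: update and re-approve the pattern, or escalate the solution to Mode 3.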

Annual re-evaluation: Approved patterns must be reviewed annually. The threat landscape evolves. New vulnerabilities emerge. Connector capabilities change. Regulatory requirements shift. A pattern approved eighteen months ago may no longer represent an acceptable risk profile under current conditions.


Mode 3 — Full Security Review: Individual Assessment

A Full Security Review is required for solutions that do not fit within the Safe Zone or an approved workload pattern. It is a structured, human-led assessment of the solution's security design, implementation, and deployment.

Triggers for Full Security Review:

| Trigger | Reason |
| --- | --- |
| New connector type not in the approved catalogue | Unknown risk profile — connector capabilities and data handling not assessed |
| Solutions handling Confidential or Regulated data | Higher stakes — data sensitivity demands individual assessment |
| High user-count solutions (typically >100 users) | Scale amplifies the impact of any security failure |
| External-facing solutions (Power Pages, public APIs) | External exposure introduces threat vectors not present for internal solutions |
| Solutions with custom connectors | Custom connectors bypass standard DLP classification — require explicit review |
| Solutions with non-standard integrations or architectures | Pattern approval does not apply — individual assessment required |
| Significant changes to previously approved solutions | Material changes may take the solution outside its original approved scope |
| Solutions in regulated industries with specific compliance requirements | Regulatory obligations may impose review requirements beyond standard SHIELD criteria |

The Full Security Review process:

Step 1 — Submission: The maker or Solution Engineer submits the solution for review using the defined intake process — typically via the CoE Starter Kit's Developer Compliance Centre or a structured intake form. Submission should include: solution purpose, data classification, connector list, external integration endpoints, user population, and deployment target.

Step 2 — Automated gate: Before human review begins, the solution must pass the automated security gates:

  • Solution Checker passes at the Critical and High severity threshold
  • All connectors are listed and their DLP classification verified
  • Environment variables confirmed — no hardcoded credentials in solution components

Step 3 — Security review: The Application Security Engineer (or designated security reviewer) assesses:

Connector and integration security:

  • Are all connectors on the approved list or explicitly reviewed?
  • Custom connectors — what API do they connect to, what authentication is used, what data do they access?
  • External integration endpoints — are they HTTPS? Do they require authentication? Is the data residency acceptable?

Authentication and authorisation:

  • Are connections using service accounts or application users — not personal credentials?
  • Is the connection reference model used — no hardcoded connections?
  • Does the solution enforce appropriate access control at the data layer?

Data handling:

  • Is the data classification consistent with the solution's design?
  • Is sensitive data accessed, stored, or transmitted in ways that are appropriate for its classification?
  • Is personal data handled in accordance with privacy obligations?
  • Are there data flows that could result in sensitive data reaching unauthorised destinations?

Secure design and coding:

  • Power Fx and flow design — are there patterns that expose sensitive data unnecessarily?
  • Error handling — do error messages expose sensitive information to users?
  • Input validation — is user input validated before use in queries or actions?

Dependency assessment:

  • What external systems does this solution depend on?
  • Are those dependencies within the organisation's approved integration surface?
  • What happens to this solution if a dependency fails or is compromised?

Step 4 — Outcome and documentation: The review produces one of three outcomes:

  • Approved — the solution meets security requirements. The approval is documented with the reviewer's name, date, conditions (if any), and expiry (if applicable). The documentation is stored as part of the deployment record.
  • Approved with conditions — the solution is approved for deployment with specific remediation items that must be addressed within a defined timeframe. Minor issues that do not prevent go-live but require follow-up.
  • Not approved — the solution has security issues that must be remediated before deployment. Specific findings are documented with remediation guidance. The solution returns for re-review after remediation.

Step 5 — Ongoing validity: Inspect approval is not permanent. The following events trigger re-review:

  • Material changes to the solution — new connectors, new data sources, new external integrations
  • Annual re-evaluation for solutions in production
  • Changes to the threat landscape or compliance requirements that affect the solution's risk profile


Automated Security Gates

Before any manual Inspect review — and as the enforcement mechanism for Safe Zone auto-pass — automated security gates provide consistent, objective evaluation of solution quality and security.

Solution Checker

Solution Checker is an automated static analysis tool that evaluates solution components against a library of quality and security rules. It identifies deprecated API usage, performance anti-patterns, accessibility issues, and security vulnerabilities in canvas apps, flows, plugins, and web resources.

Severity levels:

  • Critical — must be resolved before deployment. Represents a serious security or reliability risk.
  • High — should be resolved before deployment. Represents a significant quality or security concern.
  • Medium — recommended for resolution. Represents a best practice or minor quality issue.
  • Informational — context for improvement. Does not represent a risk.

Enterprise requirement: Solution Checker must be configured as an automated gate in the deployment pipeline. Critical and High severity findings must block promotion to production. This enforcement cannot rely on manual execution — it must be automated to be reliable.

Manual Solution Checker execution before submission is good practice. Automated enforcement in the pipeline is the governance control.
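
A pipeline gate of this kind is a short script: export the checker findings, fail the build if any blocking severity appears. The sketch below assumes findings have already been exported to a simple list of dictionaries — the `rule`/`severity` shape is an assumption for illustration, not the literal Solution Checker output format:

```python
import sys

# Severities that block promotion to production; Medium and Informational
# are reported but do not fail the gate.
BLOCKING = {"Critical", "High"}

def gate(results: list) -> int:
    """Return a nonzero exit code if any blocking finding is present.

    `results` is a list of findings like {"rule": ..., "severity": ...}.
    The pipeline treats a nonzero return as a failed stage.
    """
    blocking = [f for f in results if f["severity"] in BLOCKING]
    for f in blocking:
        print(f"BLOCKED: {f['severity']} - {f['rule']}", file=sys.stderr)
    return 1 if blocking else 0
```

Wiring the exit code into the pipeline — rather than asking a human to read the report — is what turns the checker from advisory into binding.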

DLP Policy Verification

Before deployment, all connectors used by the solution must be verified against the DLP policy of the target environment. A solution that combines connectors from conflicting DLP buckets — Business and Non-Business in the same app or flow, or any blocked connector — will be stopped by DLP enforcement at runtime. Discovering this in pre-deployment review rather than in production is significantly less disruptive.
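
This verification is a bucket lookup over the solution's connector list. A minimal sketch, assuming a hardcoded bucket map — in practice the buckets come from the target environment's DLP policy, and the connector names here are illustrative:

```python
# Illustrative DLP bucket assignments; a real check would read these from
# the target environment's DLP policy. Unknown connectors are treated as Blocked.
DLP_BUCKETS = {
    "SharePoint": "Business",
    "Microsoft Teams": "Business",
    "Dataverse": "Business",
    "HTTP": "Non-Business",
}

def buckets_used(connectors: set) -> set:
    """Map each connector to its DLP bucket, defaulting to Blocked."""
    return {DLP_BUCKETS.get(c, "Blocked") for c in connectors}

def dlp_ok(connectors: set) -> bool:
    """True only if all connectors sit in one non-Blocked bucket."""
    used = buckets_used(connectors)
    return len(used) <= 1 and "Blocked" not in used
```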

Managed Environments — Solution Checker Enforcement

Managed Environments can be configured to enforce Solution Checker compliance on solution import — blocking the import of solutions with Critical or High findings directly in the environment. This provides a second enforcement layer beyond the pipeline gate, catching solutions that might be deployed outside the standard pipeline process.


The Connector Approval Catalogue

The connector approval catalogue is the published list of connectors approved for use in each environment tier. It is the primary tool by which makers understand what the Safe Zone boundaries are — and by which the Inspect process evaluates connector usage.

Catalogue structure:

| Connector | Environment Tier | DLP Bucket | Notes |
| --- | --- | --- | --- |
| SharePoint | All environments | Business | |
| Microsoft Teams | All environments | Business | |
| Dataverse | Dev, Test, Production | Business | |
| Azure Service Bus | Production only | Business | Requires security review for custom topics |
| HTTP (generic) | Dev only | Non-Business | Review required for Production |
| Custom Connectors | Dev only by default | Review required | Full security review before any environment |

The catalogue is maintained by the platform team and reviewed quarterly. New connectors are added after security assessment. Deprecated or high-risk connectors are removed or demoted.


Inspect and Go-Live

Inspect is the security gate within the Go-Live process — it must be completed before any solution is deployed to production. The relationship between SHIELD Inspect and DIALOGE Go-Live is explicit:

  • The Go-Live readiness checklist includes SHIELD Inspect completion as a mandatory gate
  • The Inspect approval is documented and stored as part of the deployment record
  • Pipelines should be configured to require Inspect documentation before production deployment is permitted
  • The Inspect outcome (Safe Zone, Pattern Approval, or Full Review) is recorded in the deployment audit trail

A production deployment without a corresponding Inspect record is a SHIELD Enforce violation — the evidence that the security review occurred must exist.


Maturity Levels

| Level | Description |
| --- | --- |
| Basic | Solution Checker run before deployment. Connector list reviewed informally before go-live. Some awareness of what requires review. |
| Intermediate | Safe Zone criteria defined and published. Connector approval catalogue maintained. Solution Checker automated as pipeline gate. Full Security Review process defined with documented outcomes. Pattern approval process in use for common solution types. |
| Advanced | All three Inspect modes fully operational. Connector catalogue reviewed quarterly. Annual pattern re-evaluation scheduled. Inspect outcomes integrated into deployment pipeline as mandatory gate. Inspect documentation retained as audit evidence. Automated DLP verification in pipeline. Managed Environments enforcement active. |

Safe Zone for Inspect

Inspect itself is the safe zone mechanism for SHIELD — it defines what does and does not require individual security review. The meta-question is: has the Inspect capability been implemented at the appropriate maturity level for the organisation's risk profile?

For regulated industries, financial services, healthcare, and government — the Inspect capability must be at Advanced maturity. The Full Security Review process must be defined, staffed, and enforced before any regulated workload goes to production.


Common Mistakes

  • Inspect as bureaucracy, not governance — a review process that exists to check a box rather than to assess genuine security risk. Reviews that always approve without meaningful scrutiny provide the appearance of security without the substance.
  • Solution Checker run manually and ignored — developers run Solution Checker, note the warnings, and deploy anyway. Without automated pipeline enforcement, Solution Checker is advisory, not binding.
  • No connector approval catalogue — makers do not know what connectors are approved, so they use whatever is available. The Inspect review becomes a discovery exercise rather than a verification exercise.
  • Pattern approval without annual review — patterns approved once and never re-evaluated. The threat landscape changes; the pattern may no longer represent acceptable risk two years after approval.
  • Full Security Review with no defined process — solutions requiring review are submitted informally and reviewed inconsistently. Different reviewers apply different standards. No documentation trail.
  • Inspect approval not recorded — solutions reviewed and approved verbally or via email. When an auditor asks for evidence that a security review occurred, none exists.
  • Safe Zone criteria too broad — the Safe Zone is defined so broadly that solutions with meaningful security risk auto-pass. The criteria must be genuinely risk-calibrated.
  • Safe Zone criteria too narrow — the Safe Zone is defined so narrowly that almost every solution requires a Full Security Review, creating a bottleneck that makers route around. Balance is essential.
  • Inspect applied only at initial deployment — solutions deployed, approved, and never reviewed again despite material changes. Re-review triggers must be defined and enforced.

Readiness Checklist

Safe Zone

- [ ] Safe Zone criteria defined — specific, documented, and published to makers
- [ ] Safe Zone criteria reviewed by security function and approved
- [ ] Annual Safe Zone review scheduled
- [ ] Makers know where to find the Safe Zone criteria

Connector Approval Catalogue

- [ ] Connector catalogue created — all approved connectors listed per environment tier
- [ ] DLP bucket assignment for each approved connector documented
- [ ] Catalogue reviewed and updated quarterly
- [ ] Process defined for requesting new connector approvals

Automated Gates

- [ ] Solution Checker configured as automated pipeline gate
- [ ] Critical and High severity findings block production deployment
- [ ] Managed Environments Solution Checker enforcement enabled
- [ ] DLP verification step included in deployment process

Workload Pattern Approval

- [ ] Common solution patterns identified and submitted for approval
- [ ] Pattern approval documented — connector profile, data classification, conditions, expiry
- [ ] Annual pattern re-evaluation scheduled
- [ ] Pattern boundaries clearly defined — makers understand what stays within and what falls outside

Full Security Review

- [ ] Full Security Review triggers defined — conditions that require individual review
- [ ] Review process documented — steps, reviewer responsibilities, outcome options
- [ ] Intake mechanism defined — how solutions are submitted for review
- [ ] Review outcomes documented and stored — approval, conditions, findings
- [ ] Re-review triggers defined — material change, annual review, pattern change
- [ ] Reviewer capacity assessed — enough Application Security Engineer time to meet review demand

Pipeline Integration

- [ ] Inspect outcome recorded as part of deployment record
- [ ] Production deployment gate includes Inspect documentation requirement
- [ ] Audit trail of all Inspect outcomes maintained and accessible


Part of the SHIELD Framework — powerplatform.wiki
Last updated: March 2026 · Last reviewed: March 2026