SHIELD — D: Defend

Defend is where security operations lives — the ongoing heartbeat of the security function.

TL;DR

Prevention is necessary but not sufficient — detection capability defines maturity. Feed Power Platform activity logs to Microsoft Sentinel. Define incident severity levels and response runbooks. Conduct post-incident reviews that feed improvements back into security controls. Run quarterly security posture assessments. If you cannot detect anomalous access within minutes, your Defend maturity is Basic at best.

Applies To

Audience: SOC Analyst · Security Operations · CISO
Character: Ongoing
Frameworks: SHIELD


What Defend Means in SHIELD

Defend is the pillar where security shifts from design and prevention to detection and response. The other five SHIELD pillars build the security architecture — Defend operates it continuously, watching for signs that the architecture is under stress, responding when it is breached, and improving it based on what is learned.

Defend has two inseparable components:

Security Operations — the continuous monitoring, threat detection, and posture assessment activities that keep the security team informed of what is happening across the Power Platform estate in real time.

Incident Response — the structured process for detecting security events, containing their impact, investigating their cause, and recovering from them — with post-incident learning that feeds back into the security architecture.

These are not separate functions. Security Operations generates the signals that incident response acts on. Incident response outcomes inform the improvements to security operations. The two operate as a continuous loop — detection enables response, response generates learning, learning improves detection.


Why Defend Decisions Matter

The other SHIELD pillars create the conditions for security. Defend determines whether those conditions are actually holding.

DLP policies block connectors — but what happens when a new connector is added that bypasses the policy? Sight reviews access — but what happens when a compromised credential is used to access the platform between access reviews? Harden classifies data — but what happens when a flow extracts data and sends it somewhere unexpected?

These questions have the same answer: without Defend, nothing happens. The breach occurs, persists, and compounds — undetected until the damage is significant enough to surface through non-security channels.

Enterprise security frameworks recognise that prevention is necessary but not sufficient. Detection capability — the ability to know when something is wrong, quickly, with enough context to respond effectively — is the defining characteristic of a mature security posture.


The Core Questions Defend Answers

  • Do we have real-time visibility into security events across the Power Platform estate?
  • Can we detect anomalous behaviour — a user accessing data they have never accessed before, a flow sending data to an unusual destination — within minutes?
  • Do we have documented, tested runbooks for the most likely security incident scenarios?
  • Can we contain a security incident before it becomes a breach?
  • When an incident occurs, can we reconstruct exactly what happened and why?
  • Are post-incident learnings systematically applied to improve the security architecture?
  • Is our security posture regularly assessed — not just assumed to be holding?

Continuous Threat Monitoring

Continuous monitoring is the discipline of watching Power Platform activity for signals that indicate a security event — anomalous access patterns, unusual data movement, policy violations, and configuration changes that were not authorised.

Power Platform Activity Signals

Admin Center activity monitoring: The Power Platform Admin Center surfaces environment-level activity data — connector usage, flow runs, app access, and administrative actions. For the security operations team, the Admin Center is the first-level monitoring surface — providing visibility without requiring additional tooling.

Key signals to monitor in the Admin Center:

  • New environments created — is this expected? Does the environment have an owner?
  • DLP policy changes — who changed them and was this authorised?
  • New connectors appearing in environment usage — particularly custom connectors or connectors not in the approved catalogue
  • Unusual flow execution volumes — a flow that normally runs 100 times a day suddenly running 10,000 times
  • Admin role assignments — new users granted tenant-level or environment-level admin roles
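The "unusual flow execution volume" signal can be approximated with a simple baseline comparison. A minimal sketch, assuming daily run counts have already been exported from the Admin Center; the function name and data shape are illustrative, not a platform API:

```python
from statistics import mean, stdev

def is_volume_anomaly(history, today, min_ratio=5.0, z_threshold=3.0):
    """Flag a flow whose run count today far exceeds its historical baseline.

    `history` is the flow's daily run counts over a trailing window
    (illustrative shape). Alert only when both the ratio and the z-score
    are extreme, so naturally spiky flows are not flagged every busy day.
    """
    if len(history) < 7:              # not enough baseline to judge
        return False
    baseline = mean(history)
    spread = stdev(history) or 1.0    # guard against a perfectly flat history
    ratio = today / baseline if baseline else float("inf")
    z = (today - baseline) / spread
    return ratio >= min_ratio and z >= z_threshold

# A flow that normally runs ~100 times a day suddenly running 10,000 times:
print(is_volume_anomaly([100, 95, 110, 98, 102, 97, 105], 10_000))  # True
print(is_volume_anomaly([100, 95, 110, 98, 102, 97, 105], 130))     # False
```

Requiring both a ratio and a z-score keeps a flow with a naturally noisy baseline from paging the on-call analyst every week.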

Dataverse audit logs: Dataverse audit logs provide record-level visibility — who accessed what data, when, and what they did with it. For security monitoring, audit logs are the primary evidence source for data access anomalies:

  • A user accessing records they have never accessed in their history
  • Bulk record exports or queries returning unusually large data sets
  • Records being deleted at unusual volumes or by users without clear business reason
  • Access to sensitive fields (protected by column-level security) by users who should not have that access

Audit logs must be enabled before monitoring is possible — and they must be retained for long enough to support forensic investigation. Define retention periods aligned to regulatory requirements and incident response timelines.
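A configured retention period can be checked mechanically against the obligations it must satisfy. A sketch; the obligation names and day counts are illustrative placeholders, not regulatory guidance:

```python
def retention_gaps(configured_days, obligations):
    """Return the obligations that the configured retention fails to meet.

    `obligations` maps a requirement name to its minimum retention in days;
    the names and values below are illustrative, not regulatory guidance.
    """
    return sorted(name for name, required in obligations.items()
                  if configured_days < required)

obligations = {"incident-forensics-baseline": 90,
               "sector-regulation": 365,
               "internal-policy": 180}
print(retention_gaps(90, obligations))   # ['internal-policy', 'sector-regulation']
print(retention_gaps(400, obligations))  # []
```

Running this check as part of the quarterly posture review catches retention settings that drift below requirement before an investigation discovers the gap.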

Microsoft Sentinel integration: For organisations with a SIEM (Security Information and Event Management) platform, Microsoft Sentinel provides native integration with Power Platform activity logs — enabling correlation across the broader security estate.

Power Platform logs that can be ingested into Sentinel include:

  • Power Apps and Power Automate activity logs (via Diagnostic Settings to Log Analytics)
  • Dataverse audit logs
  • Admin Center activity via the Microsoft 365 Management Activity API
  • Entra ID sign-in and audit logs for Power Platform access events

With Sentinel integration, Power Platform security events can be correlated with signals from other systems — detecting, for example, a user who accessed Power Platform immediately before accessing a file share and an email system in a pattern consistent with data exfiltration.
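That cross-system pattern reduces to grouping one user's events, across sources, inside a time window. Real Sentinel correlation is written in KQL; the Python sketch below only illustrates the logic, and the event shape is an assumption of this example:

```python
from datetime import datetime, timedelta

def correlate(events, window):
    """Group one user's events across log sources inside a sliding window.

    Each event is a dict with 'time', 'user', and 'source' keys (an assumed,
    pre-normalised shape). A cluster spanning three or more sources is
    returned as a candidate cross-system pattern for analyst review.
    """
    by_user = {}
    for e in sorted(events, key=lambda e: e["time"]):
        by_user.setdefault(e["user"], []).append(e)

    suspicious = []
    for user_events in by_user.values():
        clusters = [[user_events[0]]]
        for e in user_events[1:]:
            if e["time"] - clusters[-1][-1]["time"] <= window:
                clusters[-1].append(e)     # still inside the window
            else:
                clusters.append([e])       # gap too large: start a new cluster
        suspicious += [c for c in clusters if len({e["source"] for e in c}) >= 3]
    return suspicious

# Power Platform, then a file share, then email, all within minutes at 2 AM:
t0 = datetime(2026, 3, 1, 2, 0)
events = [
    {"time": t0,                        "user": "alice", "source": "powerplatform"},
    {"time": t0 + timedelta(minutes=3), "user": "alice", "source": "fileshare"},
    {"time": t0 + timedelta(minutes=7), "user": "alice", "source": "email"},
    {"time": t0 + timedelta(hours=5),   "user": "bob",   "source": "email"},
]
print(len(correlate(events, timedelta(minutes=15))))  # 1
```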

Microsoft Defender for Cloud Apps (CASB): Microsoft Defender for Cloud Apps provides cloud access security broker (CASB) capabilities for Power Platform — applying anomaly detection and policy enforcement to Power Platform activity at the cloud access layer.

Defender for Cloud Apps enables:

  • Anomaly detection — identifying unusual usage patterns compared to the user's historical baseline
  • Session controls — monitoring and controlling what users can do within Power Platform sessions
  • Data loss prevention at the cloud access layer — detecting and blocking data movements that bypass DLP policies
  • Threat intelligence integration — correlating Power Platform activity with known threat indicators

Application Insights for Security Telemetry

Application Insights, integrated with canvas apps and cloud flows for operational monitoring (covered in DIALOGE Operations), also serves as a security telemetry source:

  • Unusual error patterns that may indicate probing or exploitation attempts
  • Authentication failures at unusual rates
  • Access from unusual locations or device types
  • Performance anomalies that may indicate resource abuse

The security team should have read access to Application Insights workspaces for production Power Platform solutions — enabling security-relevant signal extraction alongside the operational signals that the solution team monitors.


Threat Detection — Knowing What to Look For

Continuous monitoring generates data. Threat detection requires knowing what patterns in that data indicate a security event.

Power Platform-Specific Threat Patterns

Data exfiltration indicators:

  • Large-volume data queries returning significantly more records than the user's historical pattern
  • Flow runs that read from Dataverse and write to an external system — particularly via connectors not commonly used by this user
  • Bulk record exports to SharePoint, OneDrive, or email — particularly outside business hours
  • New custom connectors pointing to external endpoints not in the approved catalogue

Credential compromise indicators:

  • Authentication from an unusual location or IP address for a known user
  • Authentication at an unusual time — 3 AM activity from an account that normally operates 9-5
  • Multiple failed authentication attempts followed by a successful one
  • Simultaneous access from geographically improbable locations (a user authenticating from London and Singapore within minutes)
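The "geographically improbable" check is a speed calculation: the distance between the two sign-in locations divided by the time between them. A minimal sketch, assuming sign-in events carry resolved coordinates; the tuple shape and threshold are assumptions of this example:

```python
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def improbable_travel(sign_in_a, sign_in_b, max_kmh=1000):
    """True if two sign-ins imply faster-than-plausible travel.

    Each sign-in is a (datetime, lat, lon) tuple, an assumed shape; real
    sign-in logs carry resolved geo-coordinates per event. 1000 km/h is a
    rough airliner ceiling; tune it for your tolerance of VPN-induced noise.
    """
    (t1, la1, lo1), (t2, la2, lo2) = sorted((sign_in_a, sign_in_b))
    hours = (t2 - t1).total_seconds() / 3600
    distance = km_between(la1, lo1, la2, lo2)
    if hours == 0:
        return distance > 0   # same instant, different places
    return distance / hours > max_kmh

# London and Singapore within ten minutes of each other:
london = (datetime(2026, 3, 1, 9, 0), 51.5, -0.13)
singapore = (datetime(2026, 3, 1, 9, 10), 1.35, 103.82)
print(improbable_travel(london, singapore))  # True
```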

Privilege escalation indicators:

  • Admin role assignments made without going through the standard process
  • Managed Environments configuration changes outside of planned maintenance windows
  • DLP policy modifications — particularly policies becoming more permissive
  • New application users or service principals created in production environments

Insider threat indicators:

  • A user accessing data outside their normal scope — a salesperson accessing HR records, a finance analyst accessing customer personal data
  • Unusual activity in the period before a user's scheduled departure from the organisation
  • Access to environments the user has not accessed before, particularly restricted environments

Alert Configuration

Not every security signal requires the same response. Alert configuration should be calibrated to ensure that high-priority signals receive immediate attention while lower-priority signals are captured for review without creating alert fatigue.

Alert severity tiers:

| Alert | Severity | Response Time Target |
| --- | --- | --- |
| Bulk data export by non-admin user | P1 — Critical | Immediate |
| DLP policy modified without change record | P1 — Critical | Immediate |
| Admin role granted outside standard process | P2 — High | Within 1 hour |
| New custom connector in production environment | P2 — High | Within 1 hour |
| Authentication from high-risk location | P2 — High | Within 1 hour |
| Unusual flow execution volume | P3 — Medium | Within 4 hours |
| Access review overdue | P4 — Low | Next business day |
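Severity tiering like this can be encoded directly, so every detected signal gets a severity and a response deadline without analyst guesswork. A sketch; the signal keys and the fail-safe default are naming assumptions of this example:

```python
from datetime import datetime, timedelta

# Illustrative tiering; adjust the keys and budgets to your own policy.
SEVERITY = {
    "bulk_export_non_admin":        ("P1", timedelta(0)),
    "dlp_policy_unrecorded_change": ("P1", timedelta(0)),
    "admin_role_out_of_process":    ("P2", timedelta(hours=1)),
    "custom_connector_in_prod":     ("P2", timedelta(hours=1)),
    "high_risk_location_auth":      ("P2", timedelta(hours=1)),
    "unusual_flow_volume":          ("P3", timedelta(hours=4)),
    "access_review_overdue":        ("P4", timedelta(days=1)),
}

def triage(signal, detected_at):
    """Return (severity, respond_by) for a detected signal.

    Unknown signals default to P3 rather than the lowest tier, a deliberate
    fail-safe so that unclassified events still get same-day attention.
    """
    severity, budget = SEVERITY.get(signal, ("P3", timedelta(hours=4)))
    return severity, detected_at + budget
```

Routing unknown signals to P3 by default is the design choice worth debating: it trades some noise for the guarantee that nothing unclassified silently waits until the next business day.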

Incident Response

When a security event is detected — or suspected — the response must be structured, fast, and documented. Ad hoc responses to security incidents are slower, less effective, and produce less forensic evidence than rehearsed, runbook-driven responses.

Incident Severity Levels

| Severity | Definition | Response Target |
| --- | --- | --- |
| P1 — Critical | Active breach, ongoing data exfiltration, or production systems completely unavailable due to a security event. Potential regulatory notification requirement. | Immediate — within 15 minutes of detection |
| P2 — High | Significant security event with a high probability of material impact: compromised credentials, unauthorised admin access, a confirmed policy violation with data access. | Within 1 hour |
| P3 — Medium | Security event with moderate impact or high uncertainty: suspicious activity requiring investigation, a policy violation without confirmed data access. | Within 4 hours |
| P4 — Low | Low-impact security event: a policy violation with no data access, configuration drift, a non-urgent security finding. | Next business day |

The Incident Response Process

Step 1 — Detect: The incident is identified — by a monitoring alert, by a user report, by an external notification, or by a routine review finding. The detection source determines the initial evidence available and the initial response actions.

Step 2 — Triage: Assess the severity of the event using the severity criteria above. Assign an incident owner — the individual responsible for coordinating the response. Notify the appropriate stakeholders based on severity level.

Step 3 — Contain: Invoke the appropriate Lockdown actions to prevent further damage while investigation proceeds:

For a compromised user account:

  • Disable the account in Entra ID immediately
  • Revoke all active sessions — use the Entra ID "Revoke all sessions" capability
  • Remove all Power Platform role assignments
  • Identify and disable all flows owned by the compromised account
  • Review recent activity from the account for evidence of data access or exfiltration

For a suspected data exfiltration via a flow:

  • Disable the specific flow immediately
  • If the flow's connector is implicated, block the connector via DLP policy modification
  • Preserve the flow run history before any changes are made — this is forensic evidence
  • Identify the data set that may have been exfiltrated

For an unauthorised admin action:

  • Reverse the admin action where possible (restore a DLP policy, remove an unauthorised role assignment)
  • Identify the account that performed the action and assess whether it is compromised
  • Preserve the Admin Center activity logs for forensic investigation

Step 4 — Investigate: Reconstruct what happened — the full sequence of events, the systems accessed, the data involved, and the entry point. Power Platform forensic investigation relies on:

  • Dataverse audit logs — record-level access history
  • Power Automate flow run history — what the flow accessed and when
  • Entra ID sign-in logs — authentication events for the accounts involved
  • Admin Center activity logs — administrative actions taken
  • Application Insights logs — application-level telemetry
  • Microsoft Sentinel correlation — cross-system event correlation

The investigation should produce a factual incident timeline: what happened, when, in what sequence, by which account, accessing which data.
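Producing that factual timeline is mostly a merge-and-sort over the normalised exports. A sketch, assuming each log export has already been parsed into (timestamp, source, description) tuples, a hypothetical shape:

```python
from datetime import datetime

def build_timeline(*sources):
    """Merge events from several log exports into one chronological timeline.

    Each source is an iterable of (timestamp, source_name, description)
    tuples, an assumed pre-normalised shape; in practice each export needs
    its own parser to reach this form.
    """
    merged = sorted((e for src in sources for e in src), key=lambda e: e[0])
    return [f"{ts.isoformat()}  [{src}]  {desc}" for ts, src, desc in merged]

audit = [(datetime(2026, 3, 1, 2, 4), "dataverse-audit", "bulk read of Contact records")]
signin = [(datetime(2026, 3, 1, 1, 58), "entra-signin", "sign-in from unfamiliar IP")]
for line in build_timeline(audit, signin):
    print(line)
```

The value is in the interleaving: a suspicious sign-in followed six minutes later by a bulk Dataverse read tells a story that neither log shows on its own.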

Step 5 — Remediate: Address the root cause of the incident. Remediation is different from containment — containment stops the immediate damage, remediation prevents recurrence:

  • If a DLP policy gap allowed a connector to be used inappropriately — update the DLP policy
  • If a compromised credential was the entry point — assess and strengthen authentication controls
  • If excessive access permissions enabled the incident — review and reduce role assignments
  • If a custom connector provided an unauthorised data path — remove or restrict the connector

Step 6 — Recover: Restore normal operations after the incident is contained and remediated. For Power Platform, recovery typically involves:

  • Re-enabling suspended environments or flows after security controls are confirmed
  • Verifying data integrity — was data modified or deleted that needs restoration?
  • Re-activating disabled accounts after credential reset and MFA re-registration
  • Confirming that remediation actions have been applied and tested

Step 7 — Review: Conduct a post-incident review within 48 hours of incident resolution for P1 and P2 incidents, within one week for P3 incidents. The review produces:

  • Incident timeline — what happened
  • Root cause — what enabled the incident
  • Detection effectiveness — how long did it take to detect, and how was it detected?
  • Response effectiveness — did the response follow the runbook? What could have been faster?
  • Remediation actions — what changes were made
  • Prevention improvements — what SHIELD controls are updated to prevent recurrence

The post-incident review is the feedback loop from Defend to the rest of SHIELD — the mechanism by which operational incidents translate into architectural improvements.

Regulatory Notification

For organisations subject to data breach notification requirements (GDPR, CCPA, sector-specific regulations), the incident response process must include an assessment of notification obligations:

GDPR breach notification: Personal data breaches must be assessed for notification obligation within 72 hours of the organisation becoming aware of the breach. If the breach is likely to result in risk to the rights and freedoms of individuals, notification to the supervisory authority is required. If the risk is high, notification to the affected individuals is also required.

The incident response runbook should include a breach notification decision tree — assessed at Step 4 (Investigate) when the scope of data access is determined.
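The 72-hour clock and the two notification branches can be sketched as follows. This illustrates the decision structure only, not legal guidance; the risk assessment itself comes from the investigation and legal review:

```python
from datetime import datetime, timedelta

def notification_decision(aware_at, risk):
    """Track the GDPR Article 33/34 branches after breach awareness.

    `risk` is the investigation's assessment: 'none', 'risk', or 'high'.
    Simplified illustration of the decision structure only; the actual
    determination belongs to legal and privacy teams.
    """
    return {
        "authority_deadline": aware_at + timedelta(hours=72),
        "notify_authority": risk in ("risk", "high"),
        "notify_individuals": risk == "high",
    }

d = notification_decision(datetime(2026, 3, 1, 9, 0), "high")
print(d["authority_deadline"], d["notify_individuals"])  # 2026-03-04 09:00:00 True
```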


Forensic Investigation Capability

Effective incident investigation requires that the evidence exists and is accessible when needed. Building forensic capability into Power Platform is a design-time decision — attempting to create forensic capability after an incident is too late.

Forensic capability requirements:

Dataverse audit logging: Audit logging must be enabled at the environment level, the table level, and for the specific columns where sensitive data is held. Audit logs must be retained for a period that supports investigation of incidents that may not be detected immediately — 90 days minimum, longer for regulated environments.

Flow run history retention: The default 28-day retention for flow run history is insufficient for forensic purposes. For critical flows, implement custom logging — writing execution records to Dataverse or Azure Log Analytics — to maintain a longer forensic trail.

Log centralisation: Forensic investigation that must pull from multiple separate log sources — Admin Center, Dataverse, Entra ID, Application Insights — is significantly slower than investigation against a centralised log repository. Microsoft Sentinel or Azure Log Analytics provides the centralisation layer. Organisations without SIEM capability should document where each log source lives and how to access it before an incident, not discover it during one.

Evidence preservation: When an incident is detected, the first priority after containment is evidence preservation. Flow run history, Dataverse audit logs, and Admin Center activity logs should be exported and stored before any remediation actions that might overwrite or delete log data. Evidence preservation is documented as a step in every incident response runbook.
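A lightweight way to make preserved evidence tamper-evident is to fingerprint each export at collection time. A sketch; the record shape is an assumption of this example, and storage of the export itself (ideally in write-once or immutable storage) is out of scope:

```python
import hashlib
from datetime import datetime, timezone

def preserve(evidence, label):
    """Record a tamper-evident fingerprint of an exported log.

    Hashing the export at collection time lets you demonstrate later that
    the evidence was not altered during the response. The record shape is
    illustrative; store it separately from the evidence itself.
    """
    return {
        "label": label,
        "sha256": hashlib.sha256(evidence).hexdigest(),
        "size_bytes": len(evidence),
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

receipt = preserve(b"exported flow run history", "flow-run-history")
print(receipt["size_bytes"])  # 25
```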


Security Operations — Continuous Posture Assessment

Beyond reactive incident response, Defend encompasses proactive security operations — the regular assessment of whether the security architecture is holding and where it can be strengthened.

Security Posture Assessments

A security posture assessment is a structured review of the Power Platform security configuration against SHIELD baselines and current best practices. Assessments should be conducted:

  • Quarterly — lightweight review against SHIELD baseline controls
  • Annually — comprehensive assessment covering all six SHIELD pillars
  • After significant platform changes — new environments, major solution deployments, compliance framework changes
  • After incidents — as part of the post-incident remediation process

Assessment outputs:

  • Current state against the SHIELD baseline — what is configured, what is missing
  • Drift identification — controls that were in place at the last assessment but have changed
  • Remediation prioritisation — ranked by risk, with owners and timelines assigned

Vulnerability Scanning

Vulnerability scanning for Power Platform focuses on configuration assessment rather than technical vulnerability scanning:

  • Connector usage audit — are connectors in use that are not in the approved catalogue?
  • Permission sprawl review — have security role assignments expanded beyond intended scope?
  • Orphaned resource review — environments, apps, and flows without owners
  • DLP gap review — environments without DLP coverage
  • Authentication configuration review — accounts without MFA, missing Conditional Access policies
  • Managed Environments coverage — non-developer environments without Managed Environments enabled
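Configuration drift between the assessed baseline and the current state can be expressed as a simple set comparison. A sketch, assuming both snapshots have been normalised into control-name to value maps, a hypothetical shape:

```python
def drift(baseline, current):
    """Compare the current configuration snapshot to the assessed baseline.

    Both arguments map a control name to its configured value, an assumed
    normalised shape; real snapshots would come from admin tooling exports.
    """
    return {
        "missing": sorted(set(baseline) - set(current)),
        "added": sorted(set(current) - set(baseline)),
        "changed": sorted(k for k in baseline.keys() & current.keys()
                          if baseline[k] != current[k]),
    }

baseline = {"dlp_default_env": "blocked", "mfa_required": True, "audit_enabled": True}
current = {"dlp_default_env": "allowed", "audit_enabled": True, "guest_access": "open"}
print(drift(baseline, current))
# {'missing': ['mfa_required'], 'added': ['guest_access'], 'changed': ['dlp_default_env']}
```

"Changed" entries like a default-environment DLP policy flipping from blocked to allowed are exactly the drift the quarterly review exists to catch.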

Continuous Improvement

The output of security operations — monitoring findings, posture assessment results, incident learnings — feeds back into the security architecture through a continuous improvement process:

  • Monitoring signals that consistently indicate false positives are refined — reducing alert fatigue
  • Posture assessment findings are tracked as remediation backlog items — with owners and timelines
  • Incident learnings are converted into specific control improvements — not general observations
  • SHIELD baselines are updated when new capabilities become available or when the threat landscape changes

The continuous improvement cycle is what differentiates a security programme that matures over time from one that remains static. Power Platform evolves constantly — the security model must evolve with it.


Maturity Levels

| Level | Description |
| --- | --- |
| Basic | Service Health notifications subscribed. Flow failure monitoring in place. A named security contact exists for Power Platform incidents. Basic incident response awareness. |
| Intermediate | Admin Center monitoring active. Dataverse auditing enabled with defined retention. Incident severity levels defined. Incident response runbooks documented for the most likely scenarios. Post-incident review process established. Quarterly security posture review conducted. |
| Advanced | Microsoft Sentinel integration active — Power Platform logs centralised and correlated. Defender for Cloud Apps configured. Automated anomaly detection with alert rules. Full incident response capability — runbooks tested and staff trained. Forensic investigation capability built in. Post-incident reviews feeding into architecture improvements. Annual comprehensive posture assessment. Continuous improvement cycle active. |

Safe Zone

Solutions with low criticality and limited data sensitivity can operate with Basic monitoring and informal incident response.

Any deployment that meets one or more of the following requires Intermediate or Advanced Defend maturity:

  • Processes Confidential or Regulated data
  • Is subject to data breach notification requirements
  • Is mission-critical — security incidents have material business impact
  • Has external users or customer-facing components
  • Is in a regulated industry with security monitoring obligations
  • Has undergone, or is expected to undergo, a security audit


Common Mistakes

  • No monitoring until after the first incident — the platform runs unmonitored until a security event surfaces through user complaints or external notification. Forensic evidence that would have supported investigation was never collected.
  • Audit logging not enabled — Dataverse auditing not configured before production go-live. When an incident occurs, there is no record of what data was accessed.
  • 28-day flow run history treated as sufficient — incidents that are not detected within 28 days have no flow-level forensic evidence. Custom logging is not in place.
  • No incident response runbooks — security events are responded to ad hoc. Response times are slow. Actions are inconsistent. Evidence is not preserved. Post-incident reviews are not conducted.
  • Alert fatigue from uncalibrated monitoring — too many alerts of insufficient specificity result in alert fatigue. High-priority signals are missed because they are buried in noise.
  • Post-incident reviews not conducted — incidents resolved, normal operations resumed, learnings not captured. The same vulnerability is exploited again months later.
  • Security operations siloed from platform operations — the security team and the platform operations team operate independently. Security events are detected by the security team but require platform knowledge to respond to effectively. Joint runbooks and joint training address this.
  • No regulatory notification process — an incident involving personal data occurs; the organisation does not know whether GDPR notification is required, who makes that decision, or what the 72-hour timeline means in practice.
  • Evidence preservation not the first step — containment actions overwrite or delete log data before it is preserved. The forensic trail is incomplete.

Readiness Checklist

Monitoring

- [ ] Service Health notifications subscribed for Power Platform services
- [ ] Admin Center activity monitoring reviewed regularly — frequency defined
- [ ] Dataverse auditing enabled in all production environments — retention period defined
- [ ] Flow run history retention strategy defined — custom logging implemented where needed
- [ ] Microsoft Sentinel integration assessed — implemented where security maturity requires it
- [ ] Defender for Cloud Apps configured for Power Platform (where applicable)
- [ ] Application Insights security telemetry access granted to the security team

Alert Configuration

- [ ] Alert rules defined for high-priority security signals
- [ ] Alert severity tiers defined — P1 through P4
- [ ] Alert routing configured — alerts reach the right people immediately
- [ ] Alert fatigue review — alerts calibrated to reduce noise without missing signals

Incident Response

- [ ] Incident severity levels defined for Power Platform security events
- [ ] Incident response runbooks documented for the most likely scenarios:
  - [ ] Compromised user account
  - [ ] Suspected data exfiltration
  - [ ] Unauthorised admin action
  - [ ] DLP policy breach
  - [ ] Ransomware or destructive action
- [ ] Incident owner role defined — who coordinates the response
- [ ] Escalation paths documented — internal security leadership, Microsoft Support, regulators
- [ ] Regulatory notification decision tree documented — GDPR and applicable regulations
- [ ] Runbooks reviewed and staff trained — not discovered during an incident

Forensic Capability

- [ ] Log sources identified and access documented before an incident
- [ ] Evidence preservation step in every incident runbook — export before remediation
- [ ] Log centralisation assessed — Sentinel or Log Analytics where maturity supports it
- [ ] Forensic investigation procedure documented — where to look for what evidence

Post-Incident Process

- [ ] Post-incident review process defined — timing, participants, output format
- [ ] Review outcomes tracked as backlog items — specific, assigned, time-bound
- [ ] Improvement cycle active — learnings fed back into the SHIELD architecture

Security Posture

- [ ] Quarterly security posture review scheduled and conducted
- [ ] Annual comprehensive SHIELD assessment scheduled
- [ ] Vulnerability scanning process defined — configuration assessment scope documented
- [ ] Continuous improvement backlog maintained — findings tracked to resolution


Part of the SHIELD Framework — powerplatform.wiki
Last updated: March 2026 · Last reviewed: March 2026