On March 16, 2026, Microsoft published something quiet but significant to its Purview blog: an open-source project called the DLM Diagnostics Model Context Protocol (MCP) Server. It does not change how retention policies are configured, how archive mailboxes work, or how inactive mailboxes are governed. What it changes is how administrators investigate when any of those things stop working — and it does so by letting an AI assistant run the diagnostic work for them.
For anyone who has spent time inside Microsoft Purview Data Lifecycle Management, the announcement lands with immediate relevance. Compliance teams have long relied on Purview DLM to enforce retention rules across Exchange Online, SharePoint, OneDrive, Teams, and other Microsoft 365 workloads. When something breaks — a retention policy that shows "Success" in the portal but quietly fails to delete content, an archive mailbox that refuses to auto-expand, an inactive mailbox that lingers after an employee departs — diagnosing the root cause has historically meant deep familiarity with PowerShell, knowledge of backend service internals, and a willingness to run a chain of 5 to 15 cmdlets in the right sequence before any coherent picture emerges. That problem is now the target of a tool built by Microsoft's own Purview Data Lifecycle Management team.
What Is the DLM Diagnostics MCP Server?
The DLM Diagnostics MCP Server is an open-source diagnostic tool released by Microsoft on March 16, 2026, authored by Rishabh Kumar, Victor Legat, and the Purview Data Lifecycle Management engineering team. The project is published on GitHub at github.com/microsoft/purview-dlm-mcp and installable as an npm package (@microsoft/purview-dlm-mcp). Its function is straightforward: it exposes Purview DLM diagnostic capabilities as tools that any compatible AI assistant can call, guided by natural-language input from an administrator.
Rather than requiring an administrator to know which cmdlets to run against Exchange Online or the Security and Compliance PowerShell session, the server accepts a plain description of a symptom — "archiving is not working for this user" or "retention policy shows Success but content isn't being deleted" — and guides the AI through a structured diagnostic investigation. The AI executes read-only PowerShell commands, evaluates outputs against known troubleshooting patterns, identifies likely root causes, and presents remediation recommendations as text for the administrator to review and act on. The key phrase is "presents as text." The server never executes a fix on its own.
A Model Context Protocol (MCP) server is a standardized backend that exposes tools, data, or capabilities to AI assistants. Think of it as a structured plugin: the AI asks the server to run a task, the server validates the request, executes it safely, and returns the result. The AI then interprets that result and continues the investigation. MCP was developed by Anthropic and open-sourced in November 2024 before being donated to the Linux Foundation's Agentic AI Foundation in December 2025.
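Concretely, MCP traffic is JSON-RPC 2.0. A tool invocation from the assistant to a server looks roughly like the following; the tool name `run_diagnostic_command` here is hypothetical, chosen only to illustrate the shape of the exchange, since the article does not publish the DLM server's actual tool identifiers:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "run_diagnostic_command",
    "arguments": { "command": "Get-Mailbox -Identity user@contoso.com" }
  }
}
```

The server validates and executes the request, then returns a result the AI can interpret and build on:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [{ "type": "text", "text": "DisplayName : Test User ..." }]
  }
}
```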
Why Purview DLM Troubleshooting Has Always Been Hard
Microsoft Purview Data Lifecycle Management is the successor to what was previously called Microsoft Information Governance. Its core job is to ensure that organizations retain content they are legally or operationally required to keep, and delete content they are not permitted or not obligated to hold. According to Microsoft Learn documentation, retention policies are the cornerstone of DLM, and they apply across Exchange, SharePoint, OneDrive, Teams, and Viva Engage. Organizations can configure policies to retain content indefinitely, for a specific period tied to when users edit or delete it, or to automatically delete content after a defined period has elapsed.
In large enterprise environments, that sounds simple. In practice, it is not. Retention policy behavior depends on the interaction between multiple layers: the policy itself, any active holds or eDiscovery cases that may override it, mailbox archive configuration, workload-specific behavior differences, adaptive scope definitions, and the distribution status of the policy across the tenant. When an admin sees a retention policy marked as successfully deployed in the portal but items are not being deleted on schedule, any one of dozens of underlying conditions could be the cause. Diagnosing which one requires connecting to two separate PowerShell sessions — Exchange Online and the Security and Compliance Center — running a specific sequence of cmdlets like Get-RetentionCompliancePolicy, Get-AdaptiveScope, and Export-MailboxDiagnosticLogs, and then cross-referencing those outputs against known failure patterns documented in internal troubleshooting guides. That workflow, as the Microsoft team put it in their announcement, "requires deep familiarity with DLM internals and is largely manual."
User reviews on platforms such as Gartner Peer Insights echo this reality. Practitioners have noted that configuring and troubleshooting Purview can involve a steep learning curve, that content searches and scans can be slow in large environments, and that "incorrectly configured retention rules can lead to unintended data preservation or early deletion." The cost of getting it wrong is real: over-retained data increases legal discovery scope and storage costs, while under-retained data creates regulatory exposure.
“By combining structured troubleshooting guides with read-only PowerShell diagnostics and MCP, it significantly reduces the time and expertise required to diagnose complex DLM issues.” — Rishabh Kumar & Victor Legat, Microsoft Purview Data Lifecycle Management Team (March 16, 2026)
How the Server Works: Four Tools, One Safety Model
The DLM Diagnostics MCP Server exposes exactly four tools to the AI assistant. Each has a defined scope and purpose. Together they form a closed, auditable diagnostic loop.
Tool 1: Run Read-Only PowerShell Diagnostics
This is the core engine of the server. When an AI assistant needs to investigate a symptom, it calls this tool with the relevant PowerShell command. Before any command executes, the server validates it against a strict allowlist. Only read-only verb prefixes are permitted: Get-*, Test-*, and Export-*. Verbs that could alter state — Set-*, New-*, Remove-*, Enable-*, Invoke-*, and others — are blocked entirely. This is not advisory blocking; the command will not execute if it contains a disallowed verb. As Microsoft's announcement states explicitly, "every command is validated before execution."
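The validation step described above can be sketched in a few lines. This is an illustrative reconstruction, not the server's actual code: it checks the verb of every `Verb-Noun` token in a command against the allowed prefixes, so a blocked verb cannot hide behind an allowed one in a pipeline.

```typescript
// Illustrative sketch of verb-allowlist validation (not the server's real code).
// PowerShell verbs are case-insensitive, so comparison is done in lowercase.
const ALLOWED_VERBS = new Set(["get", "test", "export"]);

function isCommandAllowed(command: string): boolean {
  // Find every Verb-Noun cmdlet token in the command string. Parameters like
  // "-Identity" are not matched because they have no letters before the hyphen.
  const cmdlets = command.match(/[A-Za-z]+-[A-Za-z]+/g) ?? [];
  if (cmdlets.length === 0) return false;
  // Every cmdlet in the command (including piped ones) must use an allowed verb.
  return cmdlets.every((c) => ALLOWED_VERBS.has(c.split("-")[0].toLowerCase()));
}
```

Under this scheme, `Get-Mailbox | Remove-Mailbox` is rejected outright, because validation applies to every cmdlet in the pipeline, not just the first.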
The server connects to two PowerShell sessions: Exchange Online via Connect-ExchangeOnline, and the Security and Compliance Center via Connect-IPPSSession. Exchange cmdlets like Get-Mailbox, Get-MailboxStatistics, and Get-OrganizationConfig pull mailbox-level and tenant-level configuration data. Compliance cmdlets like Get-RetentionCompliancePolicy, Get-RetentionComplianceRule, Get-AdaptiveScope, and Get-ComplianceTag surface policy structure and distribution status. Together, these two sessions give the AI a complete picture of why a DLM configuration might be misbehaving.
Tool 2: Retrieve the Execution Log
Every command the AI runs through the server is logged with its timestamp, execution duration, status, and full output. Admins can call this tool at any point during or after a diagnostic session to retrieve a Markdown-formatted audit trail of the complete investigation. Microsoft's team specifically notes this is designed to be "easy to attach to incident records or compliance documentation." For organizations subject to regulatory oversight, the ability to produce a documented record of how a compliance problem was investigated — not just the conclusion — carries genuine operational value.
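The announcement specifies what each log entry captures (timestamp, duration, status, full output) but not the exact record format, so the shape below is an assumption. A minimal sketch of rendering such a log as the Markdown audit trail the tool returns might look like this:

```typescript
// Hypothetical shape of one execution-log entry; field names are illustrative.
interface LogEntry {
  timestamp: string;
  command: string;
  durationMs: number;
  status: "success" | "failed";
  output: string;
}

// Render the session log as a Markdown audit trail suitable for attaching
// to an incident record.
function toMarkdown(entries: LogEntry[]): string {
  const lines: string[] = ["# DLM Diagnostic Session Audit Trail", ""];
  for (const e of entries) {
    lines.push(`## ${e.timestamp} ${e.command} (${e.status}, ${e.durationMs} ms)`);
    lines.push("");
    // Indent output four spaces so it renders as a Markdown code block.
    lines.push(e.output.split("\n").map((l) => "    " + l).join("\n"));
    lines.push("");
  }
  return lines.join("\n");
}
```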
Tool 3: Microsoft Learn Documentation Lookup
Not every question an administrator asks maps to a diagnostic scenario. Some are conceptual or procedural: how do I create a retention policy, what is the difference between a retention label and a retention policy, how does adaptive scope work. For these, the server falls back to curated Microsoft Learn documentation. The lookup covers eleven Purview areas, including retention policies and labels, archive and inactive mailboxes, eDiscovery, audit, communication compliance, records management, and adaptive scopes. This keeps the AI grounded in official Microsoft documentation rather than generating speculative answers.
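The routing behind this fallback can be imagined as a simple keyword-to-area lookup. The area names below echo those listed in the article, but the keyword lists and matching logic are invented for illustration; the server's real lookup mechanism is not described in the announcement.

```typescript
// Hypothetical keyword router for the documentation-lookup tool.
// Area names follow the article; keyword lists are illustrative only.
const DOC_AREAS: Record<string, string[]> = {
  "retention-policies-and-labels": ["retention policy", "retention label"],
  "archive-mailboxes": ["archive", "auto-expand"],
  "adaptive-scopes": ["adaptive scope"],
};

// Return the first documentation area whose keywords appear in the question,
// or null if the question does not map to a known area.
function routeQuestion(question: string): string | null {
  const q = question.toLowerCase();
  for (const [area, keywords] of Object.entries(DOC_AREAS)) {
    if (keywords.some((k) => q.includes(k))) return area;
  }
  return null;
}
```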
Tool 4: Create a GitHub Issue
If a diagnostic session encounters a scenario the server cannot handle — a cmdlet not on the allowlist that would be useful, a troubleshooting pattern not yet covered by the guides — the AI can use this tool to open a feature request directly in the project's GitHub repository. The issue is automatically populated with the session context: which commands ran, which ones failed, and what the administrator was attempting to diagnose. This closes the feedback loop between field use and product development in a structured way.
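Assembling that pre-populated issue from session context might look like the following sketch. The field names and section headings are assumptions; the announcement says only that the issue carries the commands run, the failures, and the diagnostic goal.

```typescript
// Hypothetical session context captured during a diagnostic run.
interface SessionContext {
  symptom: string;
  commandsRun: string[];
  blockedCommands: string[];
}

// Build a Markdown issue body from the session context, ready to file
// as a feature request in the project repository.
function buildIssueBody(ctx: SessionContext): string {
  return [
    "### What was being diagnosed",
    ctx.symptom,
    "",
    "### Commands executed",
    ...ctx.commandsRun.map((c) => `- ${c}`),
    "",
    "### Commands blocked or unsupported",
    ...ctx.blockedCommands.map((c) => `- ${c}`),
  ].join("\n");
}
```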
MCP: The Protocol That Makes This Possible
Understanding why this server works the way it does requires understanding the Model Context Protocol itself. MCP was announced by Anthropic in November 2024 as an open standard for connecting AI assistants to external data sources and tools. Anthropic's original announcement described the problem it solved: "Every new data source requires its own custom implementation, making truly connected systems difficult to scale." Before MCP, if a developer wanted to connect an AI assistant to five different enterprise systems, they needed five custom integrations, each with its own authentication logic, error handling, and data formatting. MCP defines a standard that reduces this to a one-time implementation on each side — the AI client implements MCP once, the server implements MCP once, and they connect.
The protocol took off quickly. OpenAI adopted MCP in March 2025. Google DeepMind confirmed support in April 2025. Microsoft joined MCP's steering committee at Build 2025. In December 2025, Anthropic donated MCP to the Agentic AI Foundation, a directed fund under the Linux Foundation co-founded by Anthropic, Block, and OpenAI, with support from Google, Microsoft, AWS, and Cloudflare. By that point the protocol had grown to over 5,800 server implementations and 97 million monthly SDK downloads according to industry tracking. The Boston Consulting Group described it as "a deceptively simple idea with outsized implications," noting that without MCP, integration complexity rises quadratically as AI agents multiply across an enterprise; with it, the growth is linear.
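The arithmetic behind that quadratic-versus-linear observation is simple enough to state directly. With N agents and M systems, point-to-point integration needs one custom connector per pair; with a shared protocol, each side implements it once:

```typescript
// Without a shared protocol: one custom integration per agent-system pair.
function customIntegrations(agents: number, systems: number): number {
  return agents * systems;
}

// With MCP: each agent and each system implements the protocol once.
function mcpImplementations(agents: number, systems: number): number {
  return agents + systems;
}
```

For an enterprise with 10 AI agents and 20 systems, that is 200 custom integrations versus 30 protocol implementations, and the gap widens as either number grows.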
Microsoft's description of MCP in the DLM announcement frames it as a "USB port for AI": any MCP-compatible client connects to the server, the server exposes well-defined tools, and the AI uses those tools safely and deterministically. The DLM Diagnostics MCP Server is compatible with GitHub Copilot in Visual Studio Code, Claude Desktop, and any other MCP-compatible client. Configuration requires adding a JSON block to either the VS Code MCP config file (.vscode/mcp.json) or the Claude Desktop config file (claude_desktop_config.json), pointing at the @microsoft/purview-dlm-mcp package with environment variables for the admin UPN and tenant domain.
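A minimal `.vscode/mcp.json` entry might look like the following. The environment-variable names shown are assumptions based on the article's description ("admin UPN and tenant domain"), so check the repository README for the authoritative keys:

```json
{
  "servers": {
    "purview-dlm": {
      "command": "npx",
      "args": ["-y", "@microsoft/purview-dlm-mcp"],
      "env": {
        "ADMIN_UPN": "admin@contoso.onmicrosoft.com",
        "TENANT_DOMAIN": "contoso.onmicrosoft.com"
      }
    }
  }
}
```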
In a compliance environment, the configuration the AI is investigating is also the system of record for legal holds, regulatory retention requirements, and eDiscovery obligations. Giving an AI assistant write access to that environment introduces the risk of accidental policy modification, which could have real legal consequences. By restricting all execution to read-only cmdlets and presenting any remediation steps as administrator-reviewed text — never automated actions — the server preserves human control over the compliance estate while still doing the investigative heavy lifting.
Twelve Diagnostic Scenarios Covered Out of the Box
At launch the server ships with twelve troubleshooting reference guides. Each guide maps a symptom to a sequence of diagnostic checks and a set of possible remediation paths. The scenarios covered are:
- Retention policy shows Success in the portal, but content is not being retained or deleted as configured
- Policy distribution status shows Error or PolicySyncTimeout
- Items are not moving to an archive mailbox despite archiving being enabled
- Auto-expanding archive is not triggering when a mailbox exceeds 100 GB
- Inactive mailbox creation failures after employee departure
- SubstrateHolds accumulating in the Recoverable Items folder and not clearing as expected
- Teams messages not deleting under a retention policy configured to do so
- Conflicts between legacy Messaging Records Management (MRM) policies and Purview retention policies
- Adaptive scope misconfiguration causing policies to apply to unintended users or sites
- Auto-apply label policies failing to classify and label content automatically
- SharePoint site deletion being blocked by an active retention policy
- Unified Audit Configuration validation — ensuring audit logging is correctly enabled and collecting the expected events
The MRM-versus-Purview conflict scenario is worth highlighting specifically. Organizations that have been on Microsoft 365 for years often have legacy MRM retention tags and policies in Exchange that were configured long before Purview DLM existed as a product. When a new Purview retention policy is applied alongside old MRM tags, behavior can become unpredictable. The principles of retention in Purview favor retention over deletion when policies conflict, but the interaction between old MRM policies and new Purview labels is a frequent source of unexpected behavior. Having a guided diagnostic path for exactly this scenario addresses a real operational pain point for long-tenured Microsoft 365 organizations.
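The conflict-resolution behavior referenced above follows Microsoft's documented "principles of retention," which can be sketched for the basic retain-versus-delete case. This is a simplification: the full precedence order also weighs explicit versus implicit scope, and the MRM-to-Purview interaction the paragraph describes is precisely where real behavior diverges from this clean model.

```typescript
// A simplified model of Purview's documented retention-conflict principles:
// retention wins over deletion, the longest retention period wins, and among
// deletion-only settings the shortest period wins.
interface RetentionSetting {
  action: "retain" | "delete";
  days: number;
}

function resolveConflict(settings: RetentionSetting[]): RetentionSetting {
  const retains = settings.filter((s) => s.action === "retain");
  if (retains.length > 0) {
    // Retention beats deletion; longest retention period prevails.
    return retains.reduce((a, b) => (b.days > a.days ? b : a));
  }
  // Only deletion settings present: shortest deletion period prevails.
  return settings.reduce((a, b) => (b.days < a.days ? b : a));
}
```

So a 730-day retain setting overrides both a 30-day delete and a 365-day retain applied to the same content, which is why stale MRM deletion tags often appear to "stop working" after a broad Purview retention policy lands on the same mailboxes.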
Setup, Permissions, and Security Architecture
Installation requires Node.js 18 or later, PowerShell 7, the ExchangeOnlineManagement module at version 3.4 or above, and Exchange Online administrator permissions. Authentication uses MSAL interactive sign-in — no credentials are stored by the server, and each server instance runs in its own isolated PowerShell process. Microsoft recommends a least-privilege configuration using Global Reader plus Compliance Administrator roles, which covers read access to both Exchange Online and the Security and Compliance PowerShell sessions. A single Organization Management role works as an alternative but is broader than necessary. Full Global Administrator access is noted as functional but explicitly described as "overly broad" and not recommended.
| Permission Option | Roles Required | Notes |
|---|---|---|
| Least-privilege (recommended) | Global Reader + Compliance Administrator | Covers both Exchange Online and Security & Compliance read access |
| Single role group | Organization Management | Covers both workloads but broader than necessary |
| Full admin | Global Administrator | Functional but not recommended — overly broad |
The security model layers these controls together: a read-only allowlist enforced at the command level, no stored credentials, session isolation per server instance, a full audit trail logged per session, and a hard prohibition on automated remediation. Microsoft frames this as ensuring "diagnostics are safe to run even in sensitive compliance environments." This is a deliberate design choice, not an incidental limitation. Purview DLM configurations underpin legal holds and regulatory records — environments where even read-only diagnostic activity should leave a traceable record.
MCP has broad and growing enterprise adoption, but security researchers flagged multiple concerns about the protocol in April 2025, including prompt injection risks, tool permission combinations that could facilitate data exfiltration, and lookalike tools that could silently replace trusted ones. The DLM Diagnostics MCP Server addresses some of these concerns through its strict read-only allowlist and command validation layer, but administrators should still apply standard MCP security hygiene: use least-privilege permissions, review audit logs regularly, and only connect to trusted MCP servers from verified sources.
What This Means for Compliance Teams
The broader significance of this release sits at the intersection of two trends that have been building in parallel. The first is the growing complexity of the Microsoft 365 compliance estate. As Microsoft has expanded Purview with new capabilities — priority cleanup for sensitive content deletion, retention by last-accessed date, adaptive protection integrations with Insider Risk Management, AI observability for Copilot and agent interactions — the surface area that compliance administrators are expected to understand and troubleshoot has grown substantially. The second trend is the arrival of MCP as a broadly adopted standard for connecting AI assistants to real enterprise systems, which creates the infrastructure needed to deliver AI-assisted tooling at the operational layer rather than just the documentation layer.
Prior to this tool, AI assistance for Purview DLM issues was largely limited to static documentation lookup and generative suggestions that the administrator then had to translate into actual PowerShell commands and interpret against their specific tenant configuration. The DLM Diagnostics MCP Server closes that gap by connecting the AI to live diagnostic data from the actual environment. An administrator no longer needs to know the specific sequence of cmdlets required to triage an archive mailbox failure. They describe the problem, and the AI runs the investigation, explains what it found, and hands back a recommended next step for human review and execution.
This model also has implications for how organizations think about compliance staffing and expertise requirements. Purview DLM troubleshooting has historically required either dedicated expertise or the willingness to learn a complex system under pressure during an active compliance incident. An AI-assisted diagnostic workflow does not eliminate the need for human judgment — the administrator still decides whether to implement a recommended remediation — but it does lower the floor of expertise needed to conduct a rigorous investigation. For smaller IT and compliance teams that manage Microsoft 365 environments without a dedicated Purview specialist, that is a meaningful operational change.
Looking at the DLM Diagnostics MCP Server in the context of Microsoft's broader Purview roadmap, the open-source and community-contribution model is notable. The GitHub issue-filing tool built into the server suggests Microsoft intends this as a living project rather than a static release — field use will generate feedback that feeds directly back into the diagnostic guide library, and the community can contribute new troubleshooting scenarios as they emerge. As Purview continues expanding to cover AI agent interactions, Microsoft Agent 365 sessions, and increasingly complex multi-policy scenarios, the diagnostic complexity will only grow. An extensible, community-maintained diagnostic framework built on MCP is a scalable approach to keeping pace with that growth.
Sources and References
- Victor Legat, Rishabh Kumar & Purview DLM Team (Microsoft) — “AI-Powered Troubleshooting for Microsoft Purview Data Lifecycle Management.” March 16, 2026: techcommunity.microsoft.com
- Microsoft — DLM Diagnostics MCP Server GitHub Repository: github.com/microsoft/purview-dlm-mcp
- Microsoft Learn — “Learn about Microsoft Purview Data Lifecycle Management.” Current documentation: learn.microsoft.com
- Microsoft Learn — “Learn about retention policies and retention labels.” Current documentation: learn.microsoft.com
- Anthropic — “Introducing the Model Context Protocol.” November 2024: anthropic.com
- Wikipedia — “Model Context Protocol.” Updated March 2026: en.wikipedia.org
- Pento AI — “A Year of MCP: From Internal Experiment to Industry Standard.” 2025: pento.ai
- Gartner Peer Insights — Microsoft Purview Data Lifecycle Management Reviews. 2026: gartner.com
- GBHackers — “Microsoft Launches AI-Driven Troubleshooting for Purview Data Lifecycle Tools.” March 17, 2026: gbhackers.com
- Nikki Chapple — “Microsoft Purview Records Management: A Practical Guide for AI-Ready Governance.” September 2025: nikkichapple.com
