Spyglass MTG Blog

Securing Agents Created with Copilot and Copilot Studio

Written by Kevin Dillaway | Oct 22, 2025 6:57:09 PM

With the rise of being able to create high-value AI-powered agents using Microsoft Copilot and Copilot Studio, there is a corresponding rise in the need to make sure what we are doing in AI is secured and governed.  The Focus on the use of Agents is often all on how it is transforming organizations, automating workflows, enhancing productivity, and delivering intelligent services with little emphasis on the that they are also introducing new security challenges.  

As organizations embrace agent-driven automation, a robust security posture is non-negotiable. This blog post explores the critical security concerns associated with agent creation, including identity management, data quality and grounding, logging and auditing, Power Platform configurations, and mitigation of emerging AI threats. We also highlight the Microsoft technologies that help address these risks along with providing some actionable guidance for addressing them. 

Identity Management 

Assigning and managing identities for agents is foundational to providing a secure method controlling and monitoring each unique agent.  Every agent—whether a chatbot, automation workflow, or decision-making assistant—must have a distinct, traceable identity to prevent unauthorized access, monitor activities, and look for privilege escalation. Lack of Agent Identity security can result in agents performing unintended actions, leaking sensitive data, being over-permissioned, and/or being compromised. 

For any Agent that is created, the following best practices should be implemented: 

  • Unique identities assigned to each agent (Entra Agent IDs)
  • Avoid the use of shared credentials or service accounts when possible. 
  • If needed, use Managed Identities where the secrets are managed by something like Key Vault and those secrets are rotated regularly. 
  • Always follow the Principle of Least Privilege Access including for API grants and consents. 
  • Implementing strong authentication and authorization mechanisms for agent operations and interactions that are context based (Conditional Access for example). 
  • Make sure all agents are registered and provisioned in Entra. 

Data Grounding and Quality 

AI agents rely heavily on data sources that they are grounded on.  They use these for their context, decision-making, and user interactions. When that data is either insecure or has low-quality data (Data may be old, stale, irrelevant, and/or extraneous) the grounding can lead to erroneous outputs (Self-inflicted data poisoning), data leakage, or exposure of sensitive information. Agents must access only appropriate, high-quality, and compliant data.  Some of the key challenges revolve around: 

  • Validating the trustworthiness and accuracy of data sources. 
  • Preventing agents from accessing unauthorized or ungoverned datasets. 
  • Monitoring data lineage to track how data is sourced, transformed, and used by agents.

To help with this, Microsoft has several products that can help to add a layer of visibility and control to put the proper guardrails around grounding including: 

  • Microsoft Purview provides comprehensive 
  • DLP 
  • Retention 
  • Classification 
  • AI Interaction Monitoring and Control
  • Communication Compliance 
  • Compliance Assessment (including against AI policies) 
  • Auditing 
  • SharePoint Advanced Management 
  • Data Governance Reporting 
  • Site Search Restrictions 
  • Restricted Access 
  • Granular Permissioning 

Agent-to-Agent Connections 

As agent ecosystems evolve, agents may reference or communicate with other agents, sharing data or delegating tasks with each other. While this can enable complex workflows and potentially allow for the development of reusable agents for multiple use cases, it also introduces risks: 

  • Data leakage between agents with different access levels. 
  • Unintended privilege escalation if agents chain actions without proper validation. 
  • Difficulty in tracing actions when responsibilities are distributed across agents.

To reduce some of these risks, organizations must establish clear boundaries for agent interactions, enforce strict authentication and authorization for inter-agent communication, and maintain detailed logs of agent-to-agent activities.  Some of this will be implemented as part of the development of the agent itself (like logging) to make sure that the right details are being captured. 

Logging and Auditing 

Comprehensive logging and auditing are essential for detecting suspicious behavior, ensuring compliance, and supporting incident response. Without the proper logs, organizations may be blind to malicious actions, misconfigurations, the interaction or sharing of sensitive data, or policy violations by agents. 

As with the Data Grounding section, both Microsoft Purview and SharePoint Advanced Management offer integrated logging and audit trails for data access and modification.  Power Platform provides additional logging capabilities for flows, connectors, and agent interactions. It is recommended that all necessary logs be centralized or that custom alerting is configured to provide the proper level of information to the responsible teams. 

Power Platform Configurations 

Misconfigured Power Platform environments can expose organizations to significant risks, so Power Platform Administrators and Security teams must ensure that: 

  • Data Loss Prevention (DLP) policies are in place to restrict connector usage and prevent unauthorized data movement. 
  • Environment-level controls are enforced, segmenting development, testing, and production agents. 
  • Role-based access controls are configured to limit who can create, modify, or deploy agents. 

Additionally, Agents themselves should be scoped to just the users who should be interacting with them. 

Emerging AI Threats 

The use of AI agents have opened organizations up to new types of attacks including: 

  • Prompt Injection: This is where attackers manipulate agent inputs to change intended behavior or extract sensitive information. 
  • To mitigate, organizations should validate and sanitize all user inputs, implement input whitelisting, and restrict agent permissions for sensitive operations. 
  • Data Poisoning: Malicious actors corrupt training or grounding data, causing agents to produce harmful or inaccurate outputs. Protect against this by maintaining strict data governance policies (including removal of old or stale data) and monitoring data integrity. 
  • Model Abuse and Chaining Attacks: Agents referencing or invoking other agents can be abused if not properly controlled. Enforce authentication and authorization for all agent communications and monitor for anomalous agent behavior. 

Regularly update agent templates, review permissions, and conduct security assessments to stay ahead of evolving threats. 

Summary 

Securing agents created with Copilot, Copilot Studio, and Power Platform requires a multi-layered approach that combines identity management, data governance, secure configurations, and proactive threat mitigation. By leveraging Microsoft solutions such as Entra Agent IDs, Purview, SharePoint Advanced Management, DSPM for AI, and Power Platform DLP, organizations can address core security concerns and reduce risk exposure. IT professionals, security teams, and platform admins should: 

  • Assign unique, managed identities to all agents and enforce least privilege access. 
  • Govern data sources with Purview and SharePoint Advanced Management to ensure high-quality, compliant data grounding. 
  • Implement strict agent-to-agent referencing controls and monitor interactions. 
  • Centralize and review logging across all platforms for robust auditing and compliance. 
  • Configure Power Platform environments with DLP and RBAC to restrict agent operations and data flows. 
  • Stay vigilant against emerging AI threats by validating inputs, monitoring data, and updating security controls regularly. 

To learn more about how Spyglass can help you with your Copilot needs, contact us at info@spyglassmtg.com