Traditional IAM falls short because pre-defined identity governance controls like RBAC are too coarse for autonomous AI agents, whose behavior can be manipulated in real time.
In the 2025 CoPhish attack, threat actors created fake AI chatbot agents on Microsoft’s trusted Copilot Studio platform and then sent phishing links to admin users via channels like LinkedIn.
- The fake links pointed to a real Microsoft domain such as copilotstudio.microsoft.com and promoted a Copilot demo or “new” productivity assistant.
- Because the links looked legit, victims clicked to complete a Microsoft OAuth consent flow.
- By doing so, they approved permissions for broad, long-lasting access to internal data.
- In the next step, they received a numeric code to “verify” their identity in the Copilot Studio agent.
- Once the victim entered that code, the attackers redeemed it for both access and refresh tokens from the Microsoft Entra ID token service.
- The tokens gave the attackers ongoing access to all connected Microsoft 365 apps, which also meant the AI agent could be steered into abusing this access.
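The token-redemption step above is a standard OAuth 2.0 authorization-code exchange against Microsoft's Entra ID v2.0 token endpoint. A minimal sketch of the request an attacker would build is below; `CLIENT_ID`, `AUTH_CODE`, and the scopes and redirect URI are placeholders, not values from the actual attack.

```python
# Sketch of the authorization-code redemption that follows a successful
# consent phish. Only the request body is built here; nothing is sent.
# The endpoint shown is Microsoft's standard Entra ID v2.0 token endpoint.

TENANT = "common"  # or the victim's tenant ID
TOKEN_ENDPOINT = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token"

def build_token_request(client_id: str, client_secret: str,
                        auth_code: str, redirect_uri: str) -> dict:
    """Body of the POST that trades the consent code for tokens.

    Requesting `offline_access` is what yields a long-lived refresh token
    alongside the short-lived access token, giving ongoing access.
    Scopes here are illustrative placeholders.
    """
    return {
        "grant_type": "authorization_code",
        "client_id": client_id,
        "client_secret": client_secret,
        "code": auth_code,
        "redirect_uri": redirect_uri,
        "scope": "offline_access Mail.Read Files.Read.All",
    }

payload = build_token_request("CLIENT_ID", "CLIENT_SECRET",
                              "AUTH_CODE", "https://attacker.example/cb")
# A POST of this payload to TOKEN_ENDPOINT returns JSON containing both
# an `access_token` and a `refresh_token`.
```

Note that once the refresh token is issued, the attacker no longer needs the victim's involvement: the refresh token alone can mint new access tokens until it is revoked.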
If you’re doing business today, your employees and admins live in SaaS – Teams, Outlook, SharePoint, HubSpot, Salesforce, QuickBooks – all accessed via browser tabs.
And while regular users typically can’t grant consent for broad, tenant-wide OAuth scopes, admins can. This is why attackers target them, as seen in the CoPhish attack.
Just one compromised admin login can enable access to all connected SaaS apps.
If you’re worried about your admins being targeted this way, TechRadar recommends blocking shared Copilot Studio agents from outside your organization.
More importantly, you can enforce conditional access and FIDO2 MFA for admin accounts.
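One way to enforce this is a Microsoft Graph conditional access policy that requires phishing-resistant authentication (FIDO2) for admin roles. The sketch below builds the policy body for `POST /identity/conditionalAccess/policies`; the role template and authentication-strength IDs are Microsoft's documented built-in values, but verify them against your own tenant before deploying.

```python
# Sketch of a conditional access policy body requiring phishing-resistant
# MFA for Global Administrators. IDs below are Microsoft's built-in values;
# confirm them in your tenant before use.

GLOBAL_ADMIN_ROLE = "62e90394-69f5-4237-9190-012177145e10"   # built-in role template
PHISHING_RESISTANT = "00000000-0000-0000-0000-000000000004"  # built-in auth strength

def admin_fido2_policy() -> dict:
    """Policy body for POST /identity/conditionalAccess/policies."""
    return {
        "displayName": "Require phishing-resistant MFA for admins",
        # Start in report-only mode to observe impact before enforcing.
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {"includeRoles": [GLOBAL_ADMIN_ROLE]},
            "applications": {"includeApplications": ["All"]},
        },
        "grantControls": {
            "operator": "OR",
            "authenticationStrength": {"id": PHISHING_RESISTANT},
        },
    }

policy = admin_fido2_policy()
```

Because FIDO2 credentials are origin-bound, they cannot be replayed through a phishing page, which is exactly the gap CoPhish-style consent lures exploit.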