The Missing Link: Securing Agentic AI Interactions with Authentication & Authorization Standards

Large language models (LLMs) are highly effective at processing information independently, but they face challenges when tasks demand knowledge beyond their current training datasets.

For AI agents to be truly useful to users, they need timely access to relevant context, such as file stores, knowledge bases, real-time data feeds, or enterprise datastores, as well as the ability to take actions like updating documents or writing emails. Today, integrating AI with these sources is a complex process. It often requires developers to write custom code or rely on specialized plugins for each data source or API, resulting in fragile, hard-to-scale systems. The current lack of adaptability and versatility in agents makes it difficult to streamline the integration of external functionalities, which prevents agents from performing complex tasks across a wide range of data sources. For example, if you wanted an AI agent to access both a file server and a database, you would need to integrate both the file server’s API and a database driver. Each of these integrations has its own authentication, data format, and potential issues, and each is closely tied to custom code, connectors, or plugins for a specific data source.

In short, the current landscape is fragmented: many frameworks exist but standards are lacking, which creates multiple challenges, in particular for the security and compliance of handling authentication, access control, and real-time monitoring at scale.

Agentic AI Considerations:

  • Trust and Privacy Concerns: How can we establish trust in AI agents and ensure that they operate on behalf of the authorizing human user? How can we safeguard the privacy of enterprise users and prevent the exposure of sensitive information, for example by denying an AI’s inquiries into other employees’ salary details?
  • Authorization Granularity Concern: Can we move beyond coarse-grained permissions, such as OAuth scopes, to provide the necessary fine-grained controls for agents (what, when, where, and how)?
  • Accountability Concern: How can we effectively audit agent actions and attribute them to the authorizing human user and agent, especially when issues arise?
  • Consent Control Concern: Could we establish mechanisms that allow users to dynamically provide and revoke consent, adapting permissions as agents operate?
  • Frictionless Experience Concern: How can we create a frictionless authentication and authorization process, especially for consumers, while still balancing safety against user experience?

This white paper delves into a framework for secure delegation to AI agents, addressing the critical challenges of authorization, accountability, and access control in the rapidly expanding field of Agentic AI. By extending established internet authentication protocols (OAuth 2.0 and OpenID Connect) with AI-specific integrations, we can ensure compatibility with existing systems while enabling granular control over AI agent capabilities.

Recently, Anthropic introduced the Model Context Protocol (MCP), an open standard designed to connect AI agents with various data and tool sources. At its core, MCP has the potential to empower AI agents with greater autonomy, structured modularity, and reusable cognitive modules, enhancing both efficiency and trustworthiness. This enables AI to operate in a more human-like manner while also scaling seamlessly for multi-agent collaboration. This potential of MCP to transform the development of agentic AI systems has been recognized by major AI players.

Impersonation or Delegation?

Impersonation is an approach where an AI agent fully assumes the user’s identity, making it indistinguishable from the actual user when performing actions. The AI agent logs in as the user, either using an impersonation token or the user’s credentials. The agent has all the permissions that the user does and is treated exactly as the user would be.
With delegation, the AI agent is granted specific, limited permissions to act on behalf of the user without assuming the full user identity. The user explicitly grants the agent permission to perform certain specific actions within predefined boundaries, typically implemented using OAuth 2.0 delegated authorization, access tokens, or fine-grained authorization.
While impersonation may be easier to implement for common scenarios, delegated access tokens offer superior control and security. With impersonation, there are significant security concerns around controlling access levels. Delegated access allows users to define the AI’s permissions precisely, aligning better with existing standards and offering greater future-proofing.

Which one to use?

The challenge with impersonation is that it’s difficult to control the level of access granted. While a claim can be included to indicate that an AI agent is using the token, and the token itself can be issued specifically for the AI agent, impersonation risks bypassing that safeguard: downstream systems treat the token exactly as if it were the user’s. This could lead to the AI accessing areas it shouldn’t.

With a delegation workflow, a user can define the AI’s access, limiting it to specific tasks. The AI can then work behind the scenes on those tasks only. Impersonation lacks this control, potentially requiring additional logic and safeguards, essentially reinventing what already exists in other workflows. The sketch below contrasts the two token shapes.
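
To make the distinction concrete, here is a minimal sketch of what the decoded claim sets of the two token styles might look like. The act claim follows OAuth 2.0 Token Exchange (RFC 8693) conventions; the on_behalf_of claim, identifiers, and endpoints are illustrative assumptions rather than a fixed standard.

```python
# Impersonation: the token's subject IS the user. Downstream services treat
# the agent as if it were the user; the optional "act" claim records that an
# agent is acting, but enforcement depends on every service checking it.
impersonation_claims = {
    "iss": "https://auth.example.com",   # hypothetical authorization server
    "sub": "user_12345",                 # the human user's identity
    "act": {"sub": "agent_mcp_client"},  # RFC 8693-style actor claim
    "scope": "files:read files:write",   # the user's full scopes
    "aud": "https://api.example.com",
}

# Delegation: the token's subject is the AGENT, with an explicit claim tying
# it back to the authorizing user. Permissions can be scoped to what the
# agent, not the user, is allowed to do.
delegation_claims = {
    "iss": "https://auth.example.com",
    "sub": "agent_mcp_client",           # the agent's own identity
    "on_behalf_of": "user_12345",        # illustrative delegation claim
    "scope": "files:read",               # narrower than the user's own scopes
    "aud": "https://api.example.com",
}
```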

Delegation Token Example Using the MCP Pattern

So how would a simple example work when trying to incorporate both delegated tokens and an MCP pattern?

The MCP architecture consists of an MCP client, which can be seen as a native app registered with an authentication server; it interacts with the LLM provider and the MCP server. A flow will look like this (a sketch of steps 1-4 follows the list):

  1. The user authenticates and receives a user token.
  2. The user’s query is sent to the MCP client.
  3. A token exchange flow is then used to obtain an actor token for the MCP client (native app).
  4. The MCP client utilizes the LLM provider and interacts with the MCP server.
  5. The MCP server then calls the API of the application server.
  6. The API or web app (registered with the authentication server) asks the authentication server to validate the user ID token and the actor token.
  7. The web application API retrieves information based on the user’s authorization scope (assume only scopes are used for now, though more advanced AuthZ can be applied), then returns the information to the requesting MCP client, which calls the LLM provider to respond to the user’s query.
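
The sketch below illustrates steps 1-4 in Python. The authorization server and MCP server endpoints, client credentials, and scopes are hypothetical placeholders; the token-exchange parameters follow RFC 8693.

```python
import requests

AUTH_SERVER = "https://auth.example.com"  # hypothetical authorization server
MCP_SERVER = "https://mcp.example.com"    # hypothetical MCP server

def get_actor_token(user_token: str, client_id: str, client_secret: str) -> str:
    """Step 3: exchange the user's token for an actor token for the MCP client
    (the 'native app'), using the RFC 8693 token-exchange grant."""
    resp = requests.post(
        f"{AUTH_SERVER}/oauth2/token",
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": user_token,
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "audience": MCP_SERVER,
            "scope": "files:read",  # request only what this task needs
        },
        auth=(client_id, client_secret),  # the MCP client authenticates itself
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def query_mcp_server(actor_token: str, query: str) -> dict:
    """Step 4: call the MCP server with the actor token; the server validates
    the token with the authentication server (step 6) before responding."""
    resp = requests.post(
        f"{MCP_SERVER}/context",  # illustrative endpoint, not part of the MCP spec
        headers={"Authorization": f"Bearer {actor_token}"},
        json={"query": query},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```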

Artificial intelligence, particularly in the form of sophisticated AI Agents, is rapidly transforming how we interact with technology. These agents promise unprecedented efficiency by accessing data and performing actions on our behalf. However, realizing this potential hinges on solving a critical challenge: securely granting AI agents the access they need without compromising user data, privacy, or enterprise security.

As highlighted, integrating these agents with the diverse systems they need to interact with (databases, APIs, file stores) is currently a fragmented, complex process lacking standardization.

Let us now focus on the authentication and authorization (AuthN/AuthZ) piece of this puzzle: how we verify the agent’s right to act and control what it can do, especially when acting for a user. We’ll explore the specific security and control problems this creates, examine current solutions using existing standards, and look towards emerging protocols designed to build a more secure and trustworthy AI ecosystem.

The Problem Deep Dive: Fragmentation, Control, and Trust

Integrating AI securely faces significant hurdles stemming from the lack of standardized approaches, directly impacting the key concerns outlined earlier:

  1. API Design & Integration Complexity: Most existing APIs weren’t built for autonomous AI agents needing delegated authority. Integrating requires bespoke solutions for each data source, leading to fragile systems that are hard to scale and secure consistently, directly impacting the goal of a Frictionless Experience.
  2. Maintaining User Control & Consent: Users must remain in control. How do we build systems where consent is clear, granular, dynamic, and easily revocable (Consent Control Concern)? Ad-hoc methods erode trust.
  3. Trust & Privacy Risks: Without clear standards for delegation, how do we ensure an agent acts only as intended by the user (Trust Concern)? In enterprise settings, preventing inadvertent access to sensitive data like salaries or confidential documents is paramount (Privacy Concern).
  4. Insufficient Authorization Granularity: Traditional permissions, like broad OAuth scopes (e.g., read_files), often lack the nuance needed for AI agents. We need finer control: perhaps allowing reading specific types of files but not others, or only performing actions within certain contexts (Authorization Granularity Concern).
  5. Accountability Gaps: If an AI agent performs an incorrect or malicious action, how do we trace it back? We need clear audit trails differentiating actions taken directly by the user versus those taken by an agent on their behalf, attributing responsibility correctly (Accountability Concern).
  6. Past Efforts & Stalled Standards: While protocols like OAuth 2.0 Token Exchange (RFC 8693) provide a foundation for exchanging tokens, potentially for delegation, they haven’t been universally adopted or extended with specific profiles for the complex needs of AI agents.
  7. Divergent Needs – Enterprise vs. Consumer:
    • Enterprise: Focuses heavily on strict data governance, addressing Trust, Privacy, Granularity, and Accountability.
    • Consumer: Prioritizes a Frictionless Experience while still needing robust underlying security and Consent Control.

Current Solutions: Extending OAuth for Secure Delegation

In the absence of definitive AI-specific standards, several patterns have emerged, primarily leveraging and extending OAuth 2.0:

Table 1: Comparison of Token-Based Authentication Approaches for AI Agents

| Approach | Description | Advantages | Disadvantages |
|---|---|---|---|
| Trust the AI | No specific authentication mechanism; credentials are shared. | Simple to implement (initially). | High security risk, no accountability, potential for misuse. |
| Share User Access Token | Providing the AI agent with the user’s existing access token. | Allows the agent to act with the user’s full permissions. | High security risk if the agent is compromised; violates least privilege. |
| Impersonation Token (User) | AI acts on a user’s token with an identifier for the agent (sub=user, act=agent). | Provides auditability of agent actions; operates within the user’s permissions. | Still relies on the user’s token being securely managed. |
| Delegation Token (Agent) | AI has its own token and acts on behalf of the user (“on_behalf_of” claim). | Better separation of concerns, potentially more granular control over agent permissions, enhanced auditability. | More complex implementation than simply sharing user tokens. |

  • Worst Solution: Blind Trust & Brute Force: Relying on screen scraping or simulating user input is insecure, unreliable, and provides zero accountability. Avoid.
  • Bad Solution: Sharing the User’s Access Token: Giving the AI the user’s primary token grants excessive permissions and makes auditing impossible (violates Accountability and Granularity principles).
  • Workable Solution: Impersonation Token (AI as User): Using OAuth 2.0 Token Exchange (RFC 8693), the AI exchanges appropriate credentials for a new token that identifies the user (sub claim) but includes an indicator that an agent is acting (e.g., act claim). This offers some Accountability by showing delegation occurred. Several IAM vendors support variations of this pattern; a minimal sketch of the exchange follows.
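
As one illustration, the RFC 8693 exchange below presents the user’s token as the subject_token and the agent’s own credential as the actor_token; the authorization server returns a new token with the user as sub and the agent recorded in the act claim. The endpoint and credentials are placeholders, and real IAM products differ in the details.

```python
import requests

def get_impersonation_token(user_token: str, agent_token: str) -> str:
    """RFC 8693 token exchange: the resulting token keeps the user as `sub`
    while the `act` claim records the agent as the acting party."""
    resp = requests.post(
        "https://auth.example.com/oauth2/token",  # hypothetical AS endpoint
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": user_token,  # the user on whose behalf we act
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "actor_token": agent_token,   # the agent's own credential
            "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "audience": "https://api.example.com",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```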

Better Solution: Token Delegation (AI on behalf of User): A more robust pattern where the AI obtains a token identifying itself (sub = AI_Agent_ID) while clearly indicating it acts on behalf of the user (e.g., on_behalf_of=user_ID or structured act claim). This provides superior Accountability (clear agent identity) and enables better Authorization Granularity (permissions can be tailored to the agent acting for that user). This approach best reflects a secure delegation model.
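
A practical payoff of this claim structure is straightforward audit attribution. The sketch below, reusing the illustrative claim names from earlier, logs both identities for every action so agent activity is never mistaken for direct user activity:

```python
import logging

logger = logging.getLogger("agent_audit")

def audit_agent_action(claims: dict, action: str, resource: str) -> None:
    """Attribute each action to BOTH the agent (the token subject) and the
    delegating user, giving audit trails clear, dual attribution."""
    agent_id = claims["sub"]              # the agent's own identity
    user_id = claims.get("on_behalf_of")  # the authorizing user (assumed claim)
    logger.info(
        "action=%s resource=%s agent=%s on_behalf_of=%s",
        action, resource, agent_id, user_id,
    )
```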

Did someone mention Authorization?

The intersection of JWT-based service authorization and agentic AI creates a unique category of complex problems.

The “easy” way to handle authorization is through scopes directly in the token. That does make it quite simple to validate the token and check whether the caller is allowed to do what it claims (a minimal sketch follows the list):

  • From an API standpoint: This Token is issued by an Authorization Server (AS) I trust, the signature is good, the Audience is me, and I even recognize the impersonation claims!
  • Do the Scopes match the URI path policy? Yes? Let’s go!
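
The sketch uses the PyJWT library; the signing key, issuer, audience, and path-to-scope policy are all assumptions for illustration.

```python
import jwt  # PyJWT

# Illustrative mapping from URI paths to the scope each one requires.
PATH_SCOPE_POLICY = {
    "/files": "files:read",
    "/employees": "employees:read",
}

def authorize_request(token: str, public_key: str, path: str) -> bool:
    """Validate issuer, signature, and audience, then check that the token's
    scopes satisfy the policy for the requested path."""
    claims = jwt.decode(
        token,
        public_key,
        algorithms=["RS256"],
        audience="https://api.example.com",  # "the Audience is me"
        issuer="https://auth.example.com",   # an AS I trust
    )  # raises an exception if the signature, audience, or issuer is invalid
    granted = set(claims.get("scope", "").split())
    required = PATH_SCOPE_POLICY.get(path)
    return required is not None and required in granted
```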

But oftentimes it’s not so simple. Without devolving into a discussion about authorization policies and scope creep, authorization policies gain a lot of flexibility and power when attribute- and relationship-based aspects of the user are taken into account.

Let’s say you were trying to make a fine-grained policy decision for an AI working on behalf of a human manager (the subject) at a company (see the sketch after this list):

  • The AI should only be aware of employees, and perform actions only on employees for whom the subject is listed as the manager.
  • This AI might have further restrictions that a human might not have:
    • Authorization to modify employee status, but NOT to delete.
    • Exporting of data that would have to be cleansed of PII.
    • Visibility into only a subset of user information.
    • etc…
  • All actions might furthermore be subject to additional approvals after the fact, especially depending on the context of what is being done (though this is more of an Identity Governance problem than an authorization problem, one leads to the other!).
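
Such a policy reduces to plain attribute and relationship checks, as the sketch below shows. The data model and rules are illustrative assumptions; a real deployment would express them in a policy engine (e.g., OPA, Cedar, or a ReBAC system) rather than inline code.

```python
from dataclasses import dataclass

@dataclass
class Employee:
    id: str
    manager_id: str

# Actions the AI agent may take: deliberately narrower than what the
# human manager could do directly (note: no "delete").
AGENT_ALLOWED_ACTIONS = {"view", "modify_status", "export_cleansed"}

def agent_may_act(subject_id: str, employee: Employee, action: str) -> bool:
    """Relationship check: the AI may act only on employees whose listed
    manager is the delegating subject, and only via permitted actions."""
    if employee.manager_id != subject_id:
        return False  # employee is not in the subject's reporting line
    return action in AGENT_ALLOWED_ACTIONS

# Example: the agent, acting for manager "mgr_42", attempts two actions.
alice = Employee(id="emp_7", manager_id="mgr_42")
assert agent_may_act("mgr_42", alice, "modify_status")  # permitted
assert not agent_may_act("mgr_42", alice, "delete")     # the human might; the AI may not
```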

What is important to realize is that modern, advanced authorization tools are already powerful enough to deal with agentic AI problems. However, they do need custom policies crafted to take into account the specifics of AI-enabled workloads and to enforce the perimeter of what the AI can and cannot do within a given API or service.

Emerging Protocols & Future Standards

While extending OAuth provides workable solutions now, the industry is developing next-generation standards better suited for AI:

  1. Model Context Protocol (MCP): Introduced by Anthropic in November 2024 (https://modelcontextprotocol.io/), MCP is an open standard specifically designed to structure how AI models securely connect to and receive context from external data sources and tools. It aims to standardize the input to the AI, including authentication/authorization information potentially derived from tokens obtained via the patterns above. Its goal is to address the integration fragmentation and provide a common language, potentially improving Trust and enabling easier implementation of Granularity and Consent. (See Roadmap).
  2. GNAP (Grant Negotiation and Authorization Protocol): Defined in RFC 9635, GNAP aims to modernize and simplify authorization, potentially offering more flexible ways to handle the complex delegation scenarios needed for AI agents and improving upon OAuth 2.0’s limitations.
  3. UCAN (User-Controlled Authorization Network): This specification (https://github.com/ucan-wg/spec) focuses on decentralized, user-controlled permissions using cryptographic capabilities (“capabilities-based security”). This directly addresses Consent Control and Granularity by allowing users to delegate very specific, verifiable permissions.

Conclusion & Recommendations

Securely integrating AI Agents requires moving beyond ad-hoc methods towards standardized, robust authentication and authorization frameworks. Addressing concerns around trust, privacy, granularity, accountability, consent, and user experience is paramount.

Our Recommendations

  1. Prioritize Secure Delegation Patterns: Adopt the “Better Solution” (Token Delegation via OAuth) where possible, or at minimum the “Workable Solution” (Token Impersonation), leveraging RFC 8693 and capabilities within your IAM platform. Avoid insecure shortcuts.
  2. Focus on Granularity and Auditability: Design systems that allow for fine-grained permissions specific to AI agent tasks and ensure all actions are logged with clear attribution to both the user and the agent.
  3. Implement Clear Consent Mechanisms: Ensure users have transparent and manageable ways to grant and revoke agent permissions.
  4. Explore Emerging Standards: Stay informed about MCP, GNAP, and UCAN. Participate in discussions and consider how these standards can solve deeper integration and authorization challenges. Practical exploration, such as building proofs-of-concept integrating MCP with robust AuthN/AuthZ patterns, will be key to understanding their real-world application.

Building a future where AI agents are powerful, helpful, and trustworthy requires a concerted effort focused on secure foundations. By extending proven standards today and embracing promising new protocols tomorrow, we can bridge the missing link in AI security.

Authors: Aditya Sharma, Boo Leong Khoo, Dominik Ludera, Grace Zhang, Wyatt Bourdeau, Nicolas Seigneur and Paul Figura