MCP authorization: Securing Model Context Protocol servers with fine-grained access control

Published by Alex Olivier on July 16, 2025

AI agents and large language models (LLMs) are rapidly evolving beyond simple question-answering. With the advent of the Model Context Protocol (MCP), these agents can now directly interact with external tools, databases, and APIs - essentially taking actions rather than just returning text.

This new capability unlocks powerful use cases, but it also introduces a critical challenge: how do we control “who can do what” on these MCP-connected tools? In other words, how do we enforce fine-grained authorization for AI agents operating via MCP? Ensuring proper MCP authorization is crucial to prevent unintended or malicious actions by AI agents and to protect sensitive data.

In this article, we’ll explain what MCP is and why securing MCP servers is so important. We’ll then explore the role of fine-grained authorization in mitigating MCP security risks (with real examples of what can go wrong), and how a solution like Cerbos can help implement dynamic, scalable access control for MCP servers.

What is the Model Context Protocol (MCP)?

Model Context Protocol (MCP) is an open standard, introduced by Anthropic in late 2024, that defines how AI agents, like chatbots or autonomous assistants, can connect to external data sources, tools, and services in a standardized way. Think of MCP as a specialized API for AI: instead of a typical API where one software system talks to another, MCP allows an AI agent or LLM to “talk” to databases, applications, and other resources on your behalf.

This means an AI agent can query your company database, invoke a cloud service, create a document, or perform other actions via an MCP server, all through a unified protocol.
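Under the hood, MCP messages are JSON-RPC 2.0. As a rough sketch (the tool name and arguments here are illustrative, not from a real server), a tool invocation from an agent to an MCP server looks something like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "lookup_employee",
    "arguments": { "employeeId": "e-123" }
  }
}
```

Every such request is a potential action in your environment, which is why each one needs an authorization decision behind it.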

MCP has gained rapid adoption in the AI industry. In fact, it was so useful that even OpenAI adopted it as a standard interface for tools, and within months there were already thousands of MCP servers available from various vendors to enable AI assistants to connect with different services. For example, companies like Asana, Linear, GitHub, Notion, and Atlassian released MCP servers to let AI agents interface with their products.

MCP effectively simplifies what used to be done with custom integrations or Retrieval-Augmented Generation (RAG) workflows - instead of converting data into prompts or vectors and injecting it into the LLM, an MCP server lets an agent directly fetch or manipulate data through a tool call. This standardized “USB-like” approach for AI integrations is incredibly powerful and is quickly becoming mainstream.

Gartner analysts predict that by 2026, more than 80% of independent software vendors will have embedded GenAI capabilities in their enterprise applications, up from less than 5% today - underscoring that MCP is on track to become a ubiquitous AI integration standard.

[Image: What is the Model Context Protocol (MCP)]

It’s easy to imagine the power here. An AI employee assistant could, for example, have an MCP connection to your HR system to look up employee info, your finance system to log expenses, and your DevOps tools to create tickets or deploy code. However, with great power comes great responsibility - and risk. Handing an AI agent the keys to various internal systems can be dangerous if not managed properly.

This is where MCP security and fine-grained authorization become critical.

Why MCP servers need strong security and authorization

Exposing internal tools or sensitive operations through an MCP server is not without risk. By design, MCP makes it easier for an AI agent to perform actions in your environment, some of which could be high impact, like modifying a database, initiating financial transactions, or controlling system settings. If an AI agent or an unauthorized user can invoke the wrong tool without checks, the consequences could be severe.

In practice, we’ve already seen early security incidents with MCP integrations.

[Image: MCP server incidents - Asana, Atlassian, Supabase]

For instance, when Asana launched an MCP server for its Work Graph data, within a month security researchers discovered a bug that allowed users to access other users’ data - essentially a data leakage vulnerability. Around the same time, Atlassian’s MCP server was found to have a flaw that let attackers submit malicious inputs, such as forged support tickets, and gain privileged access they shouldn’t have. Most recently, a Supabase MCP-related incident surfaced as well.

The MCP-related risks were taken so seriously that OWASP even launched an “MCP Top 10” security project to track common MCP vulnerabilities. In short, a poorly secured MCP server can become a new attack vector in your infrastructure.

Beyond specific bugs, the general attack surface expands with MCP. Each tool an MCP server exposes is a potential avenue for misuse if not tightly controlled. Normal API security measures, such as authentication and rate limiting, still apply, but there’s a new twist: AI agents might be acting on behalf of human users with varying permissions. This means we need to ensure the agent can only do what that user is allowed to do - or, more securely, grant it an attenuated set of permissions for the specific use case, derived from the original user who invoked the agent.

Imagine an AI assistant that has access to a finance database: a regular employee’s AI assistant should perhaps read records but not create payments, whereas a manager’s assistant might initiate a purchase order but only up to a certain amount. If the MCP server doesn’t enforce these distinctions, the AI agent could overstep its authority. Even if the agent isn’t malicious, it might accidentally perform an action simply because it lacks context or constraints. Effective authorization is the safety net that prevents accidents or abuses.
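One way to reason about that attenuation: the agent’s effective permissions are the intersection of what the human user may do and what the current task actually needs. A minimal sketch, with illustrative permission names (a real system would derive these from tokens or policies):

```typescript
// Effective agent permissions = intersection of the user's permissions
// and the scope the task requests. Names are illustrative.
function attenuate(userPermissions: Set<string>, taskScope: Set<string>): Set<string> {
  return new Set([...taskScope].filter((action) => userPermissions.has(action)));
}

const managerPerms = new Set(["read_records", "create_purchase_order", "approve_expense"]);
const taskScope = new Set(["read_records", "create_purchase_order", "delete_records"]);

// The agent never gains "delete_records", even though the task requested it.
const agentPerms = attenuate(managerPerms, taskScope);
```

The point of the design is that the agent can only lose permissions relative to the user, never gain them.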

Another point to consider is auditability and compliance. When AI agents are making tool calls, you need a clear log of “who (or which agent) did what, and was it allowed.” This is not just for security forensics but also for compliance with regulations - imagine audit requirements for financial transactions or data access. A robust authorization system will log every decision. In fact, the Cerbos platform automatically keeps detailed decision logs for each authorization check, recording who tried to access what resource, which policy applied, and whether the action was allowed or denied. Such logs are vital for proving that your AI agent usage is under control and for investigating any anomalies.

To sum it up, MCP servers need strong, fine-grained security controls because they bridge powerful capabilities into the hands of AI agents. Without proper guardrails, you risk data breaches, unauthorized transactions, and a loss of control over your systems. Given how quickly MCP is being adopted and its paradigm shift in how applications integrate with AI, establishing robust authorization from day one is essential. The question becomes: how do we implement fine-grained, dynamic authorization for MCP servers in a maintainable way?

Securing MCP servers with fine-grained access control: the challenge of “who can do what”

The core of authorization is determining, for each action, should this agent (or user) be allowed to do this? In traditional applications, this often boils down to checking a user’s role or permissions before executing an operation. With MCP and AI agents, we have a similar need, but it can get more complex.

The AI agent might be carrying a user’s identity or might be a non-human service account. We might have multiple roles, hierarchical permissions, or context-based rules that determine access. Implementing this logic directly in the MCP server code, with a bunch of if/else checks for roles, is error-prone and inflexible. Hardcoding such rules leads to brittle code - any policy change means modifying the code and redeploying the server, which is neither agile nor scalable.

Let’s illustrate the complexity with a simple scenario. Suppose we have an AI agent that manages expense reports via an MCP server, similar to the example above. We might have requirements like: a normal employee can submit a new expense, but cannot approve expenses; a manager can approve expenses, but only those submitted by users on their team; and only an admin role can delete an expense entry.

This is a classic multi-role, multi-action policy. If we tried to enforce this in code, we’d be sprinkling role checks around each tool’s implementation. It’s doable for one or two rules, but as soon as the requirements evolve - say we introduce a new role, or add a condition like “managers can only approve up to $1000” - we have to revisit the code. It’s easy to make mistakes or overlook a check. Indeed, permissions are often complex and context-dependent - a truth that only grows with more users and more varied actions.

For example, in our expense scenario: the “add expense” tool should be available to regular users, but the “approve expense” tool should be available only to managers and maybe also to admins. The “delete expense” tool might be so sensitive that it’s reserved for admins alone. On top of that, even managers who can approve might have an additional constraint - e.g. they can approve expenses for their own department or below a certain amount threshold. Implementing such fine-grained permissions requires a flexible approach. A static role-based scheme (RBAC) might not capture the nuance like the monetary limit, whereas an attribute-based approach (ABAC) can. This is why modern authorization models often use a combination of roles and attributes (contextual info) to make decisions.
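For illustration, the hardcoded approach this section warns against might look like the following sketch (types and names are hypothetical, not from any real codebase):

```typescript
// Brittle inline authorization: role and attribute checks woven into tool code.
// Every policy change (a new role, a new threshold) means editing and
// redeploying this function.
interface Principal {
  id: string;
  roles: string[];
  department: string;
}

interface Expense {
  amount: number;
  department: string;
}

function canApproveExpense(p: Principal, e: Expense): boolean {
  if (p.roles.includes("admin")) return true; // admins: everything
  if (p.roles.includes("manager")) {
    // Managers: only their own department, and only under $1000.
    return p.department === e.department && e.amount < 1000;
  }
  return false; // regular users cannot approve
}
```

Multiply this by every tool and every rule, and the case for moving the logic out of the code becomes clear.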

The challenge for MCP servers is clear: we need a way to externalize and manage these “who can do what” rules without burying them deep in application code. We want to be able to express rules like “managers can approve expenses under $1000” or “AI agents with the ‘finance-reader’ role can access the finance database read-only tool, but not the write tool” in a clear, declarative way. And ideally, we want to change those rules or add new ones without editing the core MCP server logic every time. This calls for a policy-based authorization system that is decoupled from the main application.

Implementing dynamic authorization for MCP servers

Cerbos is a purpose-built solution to the kind of authorization challenges we’ve outlined. It externalizes your access control logic into human-readable policy files. In simpler terms, Cerbos lets you define your authorization rules in one place as YAML policies, and then ask Cerbos at runtime whether a given action should be allowed. Your application - in this case, the MCP server - delegates the decision to Cerbos. This architecture brings a few major advantages, outlined below.

Centralized, externalized authorization logic

Instead of scattering permission checks throughout your code, you write them as declarative policies.

For example, you might have a policy that says resource = “mcp::expenses” with rules: managers can “approve_expense” if amount < $1000, admins can “delete_expense”, etc. This policy lives outside your application code.

The MCP server simply queries Cerbos whenever an agent tries to use a tool. Cerbos evaluates the relevant policy and returns “allow” or “deny.”

Because of this decoupling, you can update the policy - say, add a new role or tighten a condition - without changing the MCP server code at all. This makes your system much more adaptable to changing requirements.

Fine-grained decisions in milliseconds

Cerbos is designed to be fast. A policy decision usually takes a few milliseconds, meaning it won’t introduce noticeable latency in your MCP agent’s interactions. You can comfortably check authorization on each tool invocation or request.

Cerbos supports role-based access control, attribute-based access control, and policy-based access control out of the box. This means your policies can consider not just the role of the agent or user, but also attributes of the resource or environment.

For instance, you can include the department of a user, the value of a transaction, or the time of day as part of the decision logic.

In our earlier example, implementing “managers can only approve expenses under a certain amount” is straightforward: you might pass the expense amount as an attribute in the authorization request, and write a policy rule that allows the action only if amount < 1000 (for managers).

Such conditions can be as simple or complex as needed, giving you true fine-grained control beyond simple yes/no role checks.

Dynamic enablement of tools

When using Cerbos with MCP, a common pattern is to enable or disable tools dynamically based on the authorization check.

The MCP server can be configured to list all possible tools it can provide, but after a user (or agent) identity is associated, you run an authorization query for that identity against all the tools.

The tools that come back with “allow” are enabled; the rest are disabled for that session. This way, you don’t have to maintain separate static lists of tools per role - it’s all driven by the policy.

One user might see a richer set of available actions than another, and this is determined on the fly by Cerbos. For example, if an AI assistant is acting for an admin user, Cerbos might allow all actions (including dangerous ones like delete_expense), whereas for a regular user, Cerbos might only allow list_expenses and add_expense and explicitly deny the rest.

The MCP server simply reflects those decisions by not exposing or executing the unauthorized tools. This principle of least privilege is enforced by policy, not by hardcoding logic in each tool handler.

Audit and transparency

Cerbos comes with comprehensive audit logging for every decision made. Each time the MCP server asks “Can agent X do Y on Z?”, Cerbos will log the details of that query and its outcome.

These decision logs record who the principal was (and their roles/attributes), what action and resource were involved, and what the result was.

Having a centralized log of all authorization decisions across your system is incredibly useful. It means that if there’s ever a question like “How did this AI agent get access to this data?” you can trace it in the logs.

You can also use the logs proactively to review access patterns or feed into compliance reports. In an enterprise setting, this level of transparency is a big win for security audits - something that can otherwise be very hard to achieve when authorization logic is buried in code across many services.

Ecosystem and integrations

Cerbos can be run anywhere (on-prem, cloud, at the edge) and its behavior can be inspected. It also has SDKs for multiple languages and integrates well with modern cloud-native stacks. This makes adding Cerbos to your MCP setup relatively straightforward.

There’s no heavy proprietary lock-in; you define policies and you have a lightweight Cerbos container or service to evaluate them.

Many organizations use Cerbos as a drop-in authorization layer for microservices and APIs, and the same concept extends to MCP servers.

Now, let’s look at how to secure an MCP server with Cerbos in practice. The process involves a few clear steps:

[Image: How to secure an MCP server with authorization]

1. Define your authorization policies

Start by writing down the rules of who can do what in a Cerbos policy file.

For MCP servers, you’ll typically have a resource kind representing the MCP domain - for example, mcp::expenses for an expense-tracking MCP server. Then list out the actions/tools and which roles are allowed for each: admins can approve_expense, reject_expense, and delete_expense (basically everything); managers can approve_expense (perhaps with a condition) but not delete; users can add_expense and maybe list_expenses. You capture all of that in the policy.

If you have attribute-based conditions, like the amount threshold or team matching, you include those conditions in the policy as well. This step is done once, and you can iterate on the policy as needed.

The result is a set of policy files that fully describe your MCP server’s authorization model in a human-readable form.
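As a rough sketch of what such a policy file could look like for the expense example (the values here are illustrative; consult the Cerbos policy reference for the exact schema):

```yaml
# Illustrative Cerbos resource policy for the mcp::expenses example.
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "mcp::expenses"
  rules:
    # All roles may submit and list expenses.
    - actions: ["add_expense", "list_expenses"]
      effect: EFFECT_ALLOW
      roles: ["user", "manager", "admin"]

    # Managers may approve/reject, but only below the $1000 threshold.
    - actions: ["approve_expense", "reject_expense"]
      effect: EFFECT_ALLOW
      roles: ["manager"]
      condition:
        match:
          expr: request.resource.attr.amount < 1000

    # Admins may do everything, including deletion.
    - actions: ["approve_expense", "reject_expense", "delete_expense"]
      effect: EFFECT_ALLOW
      roles: ["admin"]
```

Note how the $1000 condition lives alongside the role rules - changing the threshold later is a one-line policy edit, not a code change.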

2. Deploy the Cerbos Policy Decision Point (PDP)

Next, run the Cerbos service with your policies loaded.

This could be as simple as running the official Cerbos Docker image and mounting the directory where your policy files reside.

Cerbos will start up and serve an API, typically gRPC or HTTP, that your application can query for decisions. The MCP server will connect to this PDP.

Because Cerbos is stateless (it doesn’t require its own database), it’s easy to run and scale. You can run it locally for development and then run it as a microservice in production. Many choose to containerize it or even embed it if using the WebAssembly version of the PDP.

The key is that at runtime, Cerbos is ready to answer authorization questions based on the policies you wrote.
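For a local setup, running the PDP can be as simple as the following sketch (the image tag, mount path, and flags are illustrative; check the Cerbos documentation for the current invocation):

```shell
# Run the Cerbos PDP with policies mounted from ./policies.
# Ports 3592 (HTTP) and 3593 (gRPC) are the Cerbos defaults.
docker run --rm -it \
  -v "$(pwd)/policies:/policies" \
  -p 3592:3592 -p 3593:3593 \
  ghcr.io/cerbos/cerbos:latest \
  server --set=storage.disk.directory=/policies
```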

3. Integrate authorization checks into your MCP server

With Cerbos up and your policies in place, modify your MCP server code to call Cerbos at the points where tool access needs to be decided.

Concretely, when an MCP client connects - perhaps on a new session or request - you take the attached identity (be it a human or an agent working on behalf of a user) and ask Cerbos which actions are allowed. Cerbos has an API method (e.g. checkResource) where you pass the principal (identity with roles/attributes), the resource (like “mcp::expenses” plus any resource-specific data like amount), and the list of actions you want to check. You can batch multiple actions in one call.

Cerbos will return which actions are permitted. Your code then simply enables or disables the corresponding tools accordingly.

In a Node.js example (using the Model Context Protocol SDK and the Cerbos gRPC client), this might look like enabling each server.tool(...) if allowed, or disabling it if not, then informing the client about the updated tool list. Each tool’s handler can then use Cerbos again for the next level of granularity of authorization checks - giving the system defense in depth.

This integration step is usually just a few lines of code around where you define your tools or handle an agent session. The heavy lifting (the decision logic) is all on the Cerbos side.
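The enable/disable pattern can be sketched as follows. This is a simplified, self-contained sketch: the `Decision` shape mimics a Cerbos check result (in a real integration it would come from something like the SDK’s checkResource call), and the tool names come from the running expense example:

```typescript
// Sketch: drive the MCP tool list from an authorization decision.
// `Decision` mimics the shape of a Cerbos check result; a real integration
// would obtain it from cerbos.checkResource({ principal, resource, actions }).
interface Decision {
  isAllowed(action: string): boolean;
}

const ALL_TOOLS = ["list_expenses", "add_expense", "approve_expense", "delete_expense"];

// Enable only the tools the policy allows for this principal's session.
function enabledTools(decision: Decision, allTools: string[]): string[] {
  return allTools.filter((tool) => decision.isAllowed(tool));
}

// Simulated decision for a "user" principal: may list and add, nothing else.
const userDecision: Decision = {
  isAllowed: (action) => ["list_expenses", "add_expense"].includes(action),
};

const sessionTools = enabledTools(userDecision, ALL_TOOLS);
```

Because the tool list is computed from the decision, there is no per-role tool list to maintain in the server itself.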

4. Test and iterate

Once integrated, you should test different scenarios to ensure the policies behave as expected.

For example, try your MCP server with a user in the “user” role versus “manager” role versus “admin” role and verify that the available tools match your policy - e.g., the manager can approve but not delete, etc. If something is off, you can adjust the policy and reload it into Cerbos, as Cerbos supports live reload of policies.

This iterative approach is much easier than changing code because you’re adjusting declarative rules. Teams can even use the Cerbos testing features or the online Cerbos Playground to simulate authorization queries and see which policy rules apply.

Done - MCP server is secured

After these steps, you’ll have an MCP server that is secured by Cerbos. The AI agents connecting to it will only be able to invoke the tools that they are allowed to, according to your centralized policy.

If someone tries to push the boundaries - say, an agent with a low-level role attempts an admin-only action - Cerbos will deny the request and your MCP server will simply not perform that action. The denied decision will be logged for your records.

From a maintenance perspective, whenever you need to update permissions, maybe introduce a new “super-manager” role or deprecate a certain action, you can do so in the policy layer without touching the MCP server code, reducing the chance of bugs and speeding up iteration.

If you’d like more details on the process of using Cerbos to secure MCP servers, please read this guide.

Conclusion

Enabling AI agents to interact with your systems via MCP opens up exciting possibilities, from automation of routine tasks to entirely new AI-driven features. But with that comes the responsibility to ensure those AI agents operate within the bounds you set. MCP authorization is all about imposing the right limits: it’s the fine-grained control over which agents (or users behind them) can access which tools and data, under which conditions. We’ve seen that without these controls, organizations can quickly run into security incidents and data breaches. The good news is that we don’t have to reinvent the wheel to secure MCP servers. By adopting an externalized authorization approach using tools like Cerbos, we get a robust solution that is both developer-friendly and enterprise-grade.

If you’re keen to learn more or to see a concrete example, have a look at our demo on dynamic authorization for AI agents in MCP servers. If you are interested in implementing authorization for MCP servers - try out Cerbos Hub or book a call with a Cerbos engineer to see how our solution can help you safely expose tools to agents without compromising control, reliability, or auditability.

