Fine-grained authorization for AI agents with Cerbos and Aperture by Tailscale

Published by Alex Olivier on February 17, 2026

We're excited to announce our integration with Aperture by Tailscale, bringing fine-grained authorization to AI agents.

Over the past few months, we've heard the same question from our enterprise customers with increasing frequency: how do I build a security model around AI agents?

Teams across their organizations are deploying coding agents (Claude Code, Codex, Gemini CLI), chat interfaces with tool access (Claude Desktop, Goose), and custom agents built on platforms like Azure AI Agent Service, Amazon Bedrock Agents, and Vertex AI Agent Builder. All of them are making tool calls into CRMs, databases, HR systems, and internal APIs. The agents are fast, capable, and operating with the same broad permissions as the humans they act on behalf of. As these agents begin to interact with other services and systems autonomously, the blast radius of a single unchecked action grows.

Static roles granted at the identity layer create blind spots at decision time, and those blind spots are exactly where risk accumulates.

We've been working closely with the Tailscale team to address this. Aperture gives organizations an AI gateway with full visibility into agent activity across their tailnet. What was missing was the authorization layer: the ability to make fine-grained, policy-driven decisions about what each agent is actually allowed to do at the moment a tool call is made. That is what Cerbos provides.

The two work together: Aperture intercepts tool calls at the infrastructure layer, and Cerbos evaluates each one against authorization policies. The result is access control where risk actually lives: at the moment a tool call is made, with full context about who is making it and what they're trying to do.

 

Why authorization, not just observability

Visibility into what agents are doing is a necessary first step. But monitoring alone does not prevent an agent from querying customer records in a production database, pulling compensation data from an HR system, or updating deal stages in a CRM. These are actions that carry real regulatory and business consequences when performed without proper authorization.

Dashboards, alerts, and risk scores are reactive. An alert fires after a tool call has already been made. A risk score quantifies likelihood, not permission. Neither produces a binding decision that prevents an unauthorized action from executing.

Authorization is different. The identity industry is converging on a model where privilege is granted per action, not per session: continuous, contextual, and scoped to exactly what is being attempted at that moment. That is what Cerbos does for every tool call. A policy evaluates the full context of the request (identity, resource attributes, runtime signals) and returns an explicit allow or deny. The decision is auditable, traceable to a specific policy version, and reproducible.

This distinction matters when the requirement is not "know what agents are doing" but "ensure agents only do what they are permitted to do."

 

How the integration works

The architecture places Cerbos in the request path between the agent and the LLM:

User → AI Agent → Tailscale Aperture → LLM
                       ↓           ↑
                  Cerbos PDP   Allow/Deny

When an agent makes a tool call, Aperture intercepts the request before it reaches the LLM. It extracts the user identity, tool name, and tool parameters and sends them to Cerbos. Cerbos enriches the request with real-time context, evaluates it against authorization policies, and returns an allow or deny decision. Aperture enforces that decision: permitted tool calls proceed to the LLM, denied tool calls are blocked before execution.

  • Aperture acts as the AI gateway and enforcement point. It sits between the agent and the LLM, forwarding tool call metadata to Cerbos and enforcing the result.
  • Cerbos PDP evaluates the tool call against authorization policies. Before making a decision, it enriches the request with context from external systems (identity providers, HR platforms, on-call systems) to build a complete picture of who is making the call and under what conditions.
  • Cerbos Hub manages the policy lifecycle: authoring, version control, testing, distribution, and audit logging. Policies are written declaratively, version-controlled, and distributed to PDPs automatically when updated. Every decision is logged back to Hub as audit evidence.
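To make the flow concrete, here is a minimal sketch of the kind of per-call check an enforcement point makes against a Cerbos PDP, written with the Cerbos Python SDK. In this integration Aperture performs this step for you; the "tool_call" resource kind, tool name, and attributes below are illustrative assumptions, not the exact schema Aperture sends.

# Minimal sketch of a per-tool-call authorization check against a Cerbos PDP.
# Aperture performs this step in the integration; the "tool_call" resource
# kind, tool name, and attributes here are illustrative.
from cerbos.sdk.client import CerbosClient
from cerbos.sdk.model import Principal, Resource

def authorize_tool_call(user: dict, tool_name: str, params: dict) -> bool:
    """Ask the PDP whether this principal may invoke this tool with these parameters."""
    principal = Principal(
        user["id"],                                   # identity forwarded by the gateway
        roles=set(user.get("roles", [])),
        attr={"department": user.get("department")},  # enrichment attributes
    )
    resource = Resource(
        tool_name,                                    # the tool being invoked
        "tool_call",                                  # hypothetical resource kind
        attr={"params": params},                      # tool parameters as resource attributes
    )
    with CerbosClient("http://localhost:3592") as client:  # local PDP endpoint
        return client.is_allowed("invoke", principal, resource)

caller = {"id": "alice", "roles": ["sales"], "department": "sales"}
if not authorize_tool_call(caller, "salesforce.update_deal", {"stage": "closed_won"}):
    raise PermissionError("Tool call denied by policy")

Because the decision comes back as a simple allow or deny at the call site, the enforcement point stays thin: all of the logic lives in policy.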

Cerbos is available as a managed service through Cerbos Hub, with an on-premise deployment option also available. Because Tailscale already knows who your users are and what devices they're on, Cerbos gets rich identity context out of the box. No separate identity provider integration is required.

 

What gets evaluated

Every tool call produces an authorization request containing:

  • Principal attributes: The identity of whoever or whatever initiated the tool call, sourced from Tailscale. This can be enriched at decision time with real-time context from external systems like Okta, Workday, ServiceNow, or PagerDuty: what department they belong to, what cost center they sit in, whether they're currently on call, what projects they're assigned to.
  • Resource attributes: The tool name and its parameters (the database being queried, the CRM object being accessed, the API endpoint being called), along with metadata about the model and provider.
  • Request context: Runtime signals such as time of day, device posture, and Tailscale app capabilities.

The policy engine evaluates all of these inputs together. A decision is not based on the tool name alone. It reflects who or what is making the call, what they're trying to do, and what the organization knows about both at that moment.
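As an illustration, the inputs above map onto a single request to the Cerbos check API. The shape below follows the PDP's /api/check/resources endpoint, but the attribute names and values are hypothetical stand-ins for what Aperture actually sends.

# Illustrative authorization request for one tool call, posted to a Cerbos PDP.
# The principal/resources/actions structure follows the Cerbos check API;
# the attribute names and values are hypothetical.
import requests

check_request = {
    "requestId": "tool-call-example",
    "principal": {
        "id": "alice@example.com",               # identity sourced from Tailscale
        "roles": ["engineer"],
        "attr": {                                # enriched at decision time
            "department": "platform",            # e.g. from Okta or Workday
            "on_call": True,                     # e.g. from PagerDuty
            "device_managed": True,              # device posture signal
        },
    },
    "resources": [
        {
            "resource": {
                "kind": "tool_call",                 # hypothetical resource kind
                "id": "postgres.run_query",          # the tool being invoked
                "attr": {
                    "connection": "prod-customers",  # tool parameters
                    "query_type": "read",
                    "model": "example-model",        # model and provider metadata
                    "provider": "example-provider",
                },
            },
            "actions": ["invoke"],
        }
    ],
}

response = requests.post("http://localhost:3592/api/check/resources", json=check_request)
print(response.json())  # per-action EFFECT_ALLOW / EFFECT_DENY for the resource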

 

Policy-as-code

Cerbos policies are declarative, version-controlled, and testable. They are authored, reviewed, and deployed through the same workflows that govern application code, not configured through a UI. Every policy change has a version history, goes through review, and is validated automatically before deployment.

A policy defines rules. Each rule targets a set of actions (tool names), applies to specific roles, and specifies conditions under which the rule takes effect. Conditions can reference any attribute of the principal, the resource, or the request context.

Examples of what a single policy can express:

  • Read-only tools are always permitted. File reads, searches, and documentation lookups carry low risk and can be allowed unconditionally for all authenticated users.
  • CRM access is scoped by team and operation. An agent can read account metadata from Salesforce via an MCP tool, but only for users in the sales org. Updating deal stages or exporting contact lists requires a more privileged role. The policy evaluates the tool parameters to distinguish between read and write operations on the same integration.
  • Production database access is restricted. An agent calling a database tool against a production connection string is denied unless the user is currently on call, verified in real time against PagerDuty or a similar system. The same query against a staging environment is permitted for any engineer.
  • HR and compensation data require explicit authorization. Tools that access BambooHR, Workday, or any system containing employee PII are denied by default. Access is granted only to users in HR or People Ops, and only from managed devices, combining Tailscale app capabilities with organizational role.
  • Different rules apply to different clients. A tool call from a sanctioned coding agent (Claude Code) is permitted, while the same call from an unsanctioned client is denied. The user-agent is an input to the policy.
  • Organizational context narrows access. Policies can incorporate department, cost center, project assignment, or security clearance, retrieved at decision time from identity providers or HR systems, to scope what actions are permitted and by whom.

These rules compose. A single tool call can be evaluated against multiple conditions simultaneously, and the policy engine resolves the outcome deterministically.
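As a rough sketch of how that composition looks from the caller's side, here is the same production-database tool call checked for two different principals, assuming a policy (not shown) that permits production queries only for engineers who are currently on call. Names and attributes are illustrative.

# Exercising a hypothetical "production database access is restricted" rule.
# The policy itself lives in Cerbos Hub; only the resulting decisions are
# checked here. Principal and resource attributes are illustrative.
from cerbos.sdk.client import CerbosClient
from cerbos.sdk.model import Principal, Resource

prod_query = Resource(
    "postgres.run_query",
    "tool_call",
    attr={"connection": "prod-customers", "query_type": "read"},
)

on_call_engineer = Principal("alice", roles={"engineer"}, attr={"on_call": True})
off_duty_engineer = Principal("bob", roles={"engineer"}, attr={"on_call": False})

with CerbosClient("http://localhost:3592") as client:
    assert client.is_allowed("invoke", on_call_engineer, prod_query)        # permitted
    assert not client.is_allowed("invoke", off_duty_engineer, prod_query)   # denied

Swap the principal or resource attributes and the decision changes accordingly; the calling code never changes, only the policy does.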

 

Policy updates without redeployment

Authorization requirements change as organizations learn what their agents are doing and refine what they should be allowed to do. The policy update path matters.

When a policy is updated and pushed, Cerbos Hub compiles, validates, and tests the new version, then distributes it to all connected PDPs. The updated policy takes effect across the infrastructure within seconds, without restarting services, redeploying agents, or modifying application configuration.

In practice: an organization observes that agents are querying production customer data outside business hours via an MCP database tool. A policy author adds a time-of-day condition and a role constraint to the relevant rule and pushes the change. The constraint takes effect across every agent on the tailnet. No tickets, no deploys, no coordination with application teams.

This is the operational model. The security posture of the system changes by pushing a policy. The policy is the control surface.

 

Decision-level audit evidence

A common problem with existing access control systems is that audits show policy, not reality. IAM tools document what permissions were configured, but not what decisions were actually made at runtime.

Cerbos produces decision-level audit evidence. Every authorization decision creates a log entry in Cerbos Hub recording:

  • The principal: who or what triggered the tool call
  • The resource: which tool, with what parameters
  • The decision: allow or deny
  • The policy: which version, which rule matched, when that policy was last modified and by whom
  • The context: all attributes that were evaluated

This is a per-decision record that traces from a specific tool call to a specific policy evaluation. Compliance teams get evidence of what actually happened, not just what was intended. Security teams can reconstruct the exact sequence of authorization decisions for any agent session. Auditors can confirm that a denied action was denied by a specific rule in a specific policy version.
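For a sense of what that evidence looks like, here is a simplified, illustrative decision record. It captures the same categories of information listed above, but the field names are not the exact Cerbos Hub log schema.

# Simplified, illustrative decision record. Field names and values are
# examples of the information captured per decision, not the exact
# Cerbos Hub audit log schema.
audit_entry = {
    "call_id": "example-call-id",
    "timestamp": "2026-02-17T22:14:03Z",
    "principal": {"id": "alice@example.com", "roles": ["engineer"]},
    "resource": {
        "kind": "tool_call",
        "id": "postgres.run_query",
        "attr": {"connection": "prod-customers"},
    },
    "action": "invoke",
    "decision": "EFFECT_DENY",
    "policy": {
        "name": "tool_call",                 # which policy was evaluated
        "version": "default",
        "matched_rule": "restrict-prod-db",  # which rule produced the outcome
    },
}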

One source of authorization truth, with clear ownership and change control, across every agent in the organization.

 

Getting started

The Cerbos integration is available directly from the Aperture dashboard. The onboarding flow provisions a Cerbos Hub workspace, generates credentials for a policy decision point, and seeds a default policy that logs every tool call decision. From there, you deploy a Cerbos PDP instance within your tailnet and connect it to Hub.

The recommended path:

  1. Observe. Start with the default allow-all policy. All tool calls are evaluated and logged, but nothing is blocked. Use the audit data to build an evidence base of what agents are actually doing.
  2. Author policies. Write rules that reflect the organization's requirements. Start narrow: restrict a single tool or constrain a specific parameter, and expand coverage as confidence grows.
  3. Enforce. Push policies through Cerbos Hub. They take effect immediately across all connected PDPs. No agent modifications required.

Policies start permissive and tighten based on observed behavior. The audit log provides the evidence for informed policy decisions. This is a deliberate progression: understand first, then control.

As agent architectures grow more complex, with agents delegating to other agents and orchestrating multi-step workflows, the authorization model scales with them. Every action, regardless of what initiated it, is evaluated by the same policies.

 


To learn more about Cerbos authorization policies, visit cerbos.dev. For Aperture, see the Aperture documentation.

FAQ

What is per-call authorization for LLM tool calls?

Rather than granting an agent broad permissions for an entire session, every individual tool call is evaluated against policy at the moment it is made. Aperture intercepts the call, Cerbos evaluates the principal, the tool and its parameters, and the request context, and an explicit allow or deny is returned before anything executes.

Can authorization policies be updated without redeploying agents?

Yes. Policies are managed in Cerbos Hub, which compiles, validates, and tests every change and distributes it to all connected PDPs within seconds. No agent modifications, service restarts, or application redeployments are required.

What risks arise when AI agents have broad permissions?

Agents typically operate with the same permissions as the humans they act on behalf of, so a single unchecked action can query production customer records, pull compensation data from an HR system, or modify CRM records. As agents act autonomously and delegate to other agents, the blast radius of those broad, session-level grants grows.

Book a free Policy Workshop to discuss your requirements and get your first policy written by the Cerbos team

What is Cerbos?

Cerbos is end-to-end enterprise authorization software for Zero Trust environments and AI-powered systems. It enforces fine-grained, contextual, and continuous authorization across apps, APIs, AI agents, MCP servers, services, and workloads.

Cerbos consists of an open-source Policy Decision Point, Enforcement Point integrations, and a centrally managed Policy Administration Plane (Cerbos Hub) that coordinates unified policy-based authorization across your architecture. Enforce least privilege & maintain full visibility into access decisions with Cerbos authorization.