Authorization policies: How to write, test, and validate them (faster with AI)
Writing authorization policies is one of those tasks that looks small on paper and turns into a week of work in practice. You know what access control you want. Editors can update posts in their own department. Admins can do most things but can't delete customer records. Viewers can read everything except salary data. It's obvious in a meeting. It's painful in a text editor at 4pm on a Friday.
The hard part isn't the policy language. It's the translation. You're moving from messy business requirements, often delivered verbally, into a precise specification of who can do what, under which conditions, on which resources. Get it wrong and you either ship a security hole or you ship an overly strict policy that makes support tickets explode.
This guide walks through how enterprise teams actually write authorization policies today. What good looks like, the mistakes that cost teams weeks of rework, and a newer workflow where an AI coding agent handles the bulk of the drafting and you handle the judgment calls.
Why writing authorization policies gets harder at enterprise scale
At five users and three resource types you can hold the whole model in your head. At five hundred users, dozens of resource types, a handful of tenants, and a compliance team breathing down your neck, you cannot. The failure mode is usually the same. Someone hardcodes authorization logic inside a service. A year later, three more services have copied that pattern. Every permission change now requires a code deploy across multiple repos, and nobody can answer "who can access this?" without reading source code.
That's the problem we keep seeing. In our analysis of authorization failure patterns, the recurring theme is that authorization decisions drift away from a single source of truth. Facebook, Okta, and Microsoft have all suffered incidents where permission logic silently diverged from intent. The fix in each case wasn't a bigger role system. It was externalizing the logic so you could see it, test it, and change it in one place.
The 2024 IBM breach report puts the global average cost of a breach at $4.88 million, the highest on record. A surprising share of those incidents trace back to misconfigured access, not zero-day exploits. That's why the shift toward externalized authorization has accelerated. Policies live in one place, they're version controlled, they're testable. The question then becomes how you actually write them.
What a good authorization policy looks like
Strip away the tooling specifics and every authorization policy answers the same question. Given a principal (who is asking), an action (what they want to do), and a resource (what they're doing it to), should this be allowed?
A healthy policy has four things going for it.
It uses attributes, not just roles. Pure role-based access control breaks down fast at scale because you end up creating roles like `us_east_support_tier_2_read_only_weekend`. Real systems need attribute-based conditions layered on top of roles, where rules evaluate things like department, region, resource owner, or time of day.
It's deny-by-default. Every rule is an explicit allow under clear conditions. Anything not matched is denied. This is how you keep least-privilege honest. OWASP's guidance on broken access control makes the same point. If your system's default is permissive, one missing rule creates a data breach.
It's scoped tightly. Derived roles are narrow, not kitchen-sink. Conditions reference specific attributes, not wildcards. Actions are enumerated, not lumped into a single *. You can always loosen a tight policy. You cannot safely tighten a loose one once it's live.
It's tested. Every allow path and every deny path has a test case. Policies are code, and code without tests rots.
Our policies combine YAML for structure and CEL for conditions. YAML keeps it readable for non-engineers. CEL keeps it expressive enough for real enterprise rules. A condition that says "editors can update a post if it's in their department and the post isn't archived" is a few lines, not a few hundred.
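Here's what that "editors can update" rule might look like as a Cerbos resource policy. This is a minimal sketch; the resource kind and attribute names are illustrative, not prescriptive:

```yaml
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "post"
  rules:
    # Editors can update a post only in their own department,
    # and only while the post isn't archived.
    - actions: ["update"]
      effect: EFFECT_ALLOW
      roles: ["editor"]
      condition:
        match:
          all:
            of:
              - expr: R.attr.department == P.attr.department
              - expr: R.attr.archived == false
```

Anything not matched by an explicit allow is denied, so the deny-by-default property comes for free.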
The five steps most teams follow when writing policies
There's no magic to it. Teams that write good policies at scale follow roughly the same sequence.
Step one, gather the requirements properly. The single biggest mistake is starting to write YAML before you've mapped the authorization matrix. List every resource type. List every action on each. List every role. Then fill in the cells. This authorization matrix approach forces you to ask the questions nobody wants to ask. Can support delete an expense or only void it? Can a viewer see salary fields? Vague answers here become security holes later.
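The matrix itself doesn't need tooling. A spreadsheet or a plain table is enough, as in this hypothetical slice for an expense resource:

```text
expense        admin   approver          employee (owner)
view           yes     yes               yes
approve        no      yes, if < $10k    no
void           yes     no                yes, if DRAFT
delete         no      no                no
```

Every "yes, if" cell becomes a condition in the policy; every "no" cell stays an implicit deny.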
Step two, pick your model. Most teams start with RBAC and layer ABAC on top. Some add policy-based access control for multi-tenant or regulated use cases. A few need relationship-based access control for document or graph-heavy systems. Pick the smallest model that fits. You can always extend.
Step three, structure your files. Separate policies by resource type. One file for post, one for expense, one for report. Shared logic goes into derived roles or exported variables. This keeps each file reviewable on its own and keeps diffs clean when business requirements shift.
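Under those conventions, a small policy repo might look like this (file names are illustrative):

```text
policies/
  derived_roles/
    common_roles.yaml    # shared derived roles (owner, department_member, ...)
  post.yaml              # resource policy for post
  expense.yaml           # resource policy for expense
  report.yaml            # resource policy for report
  post_test.yaml         # test suite for the post policy
  expense_test.yaml      # test suite for the expense policy
```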
Step four, write the conditions. This is where CEL earns its keep. Instead of proliferating roles, you write expressions like `R.attr.ownerId == P.id` or `R.attr.status == "DRAFT" && P.attr.department == R.attr.department`. A handful of well-written conditions replaces dozens of narrow roles.
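Shared conditions like ownership are a natural fit for a derived roles file, so each resource policy can reference `owner` instead of repeating the expression. A sketch with illustrative names:

```yaml
apiVersion: api.cerbos.dev/v1
derivedRoles:
  name: common_roles
  definitions:
    # A plain "user" becomes an "owner" when they own the resource in question.
    - name: owner
      parentRoles: ["user"]
      condition:
        match:
          expr: R.attr.ownerId == P.id
    # A "user" in the same department as the resource.
    - name: department_member
      parentRoles: ["user"]
      condition:
        match:
          expr: R.attr.department == P.attr.department
```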
Step five, write the tests. Not at the end. Alongside the policies. A test case for each allowed path and each denied path. If a rule has three conditions, you want three tests where each condition fails independently.
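In Cerbos, those tests are YAML fixtures and assertions in a `*_test.yaml` file next to the policies. A minimal sketch, assuming a rule that lets editors update posts in their own department:

```yaml
name: PostTestSuite
description: Allow and deny paths for the post policy
principals:
  sales_editor:
    id: alice
    roles: ["editor"]
    attr:
      department: sales
  finance_editor:
    id: bob
    roles: ["editor"]
    attr:
      department: finance
resources:
  sales_post:
    kind: post
    id: post-001
    attr:
      department: sales
      archived: false
tests:
  - name: Only same-department editors can update
    input:
      principals: ["sales_editor", "finance_editor"]
      resources: ["sales_post"]
      actions: ["update"]
    expected:
      # Allow path: matching department.
      - principal: sales_editor
        resource: sales_post
        actions:
          update: EFFECT_ALLOW
      # Deny path: wrong department.
      - principal: finance_editor
        resource: sales_post
        actions:
          update: EFFECT_DENY
```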
Done by hand, this whole cycle takes anywhere from a few days for a small system to a few weeks for something like a fintech platform with multiple tenants and regulatory carve-outs. Most of the time isn't typing, it's the back and forth with product and security on exactly what the rules should be.
Common mistakes to avoid
After five years of helping teams implement authorization, we see the same mistakes over and over.
- Over-privileging by default. "Admins can do everything" is almost never true. Real admins usually shouldn't touch payment data or production configuration. When you write "admin can *", you're not simplifying the policy, you're shipping a vulnerability.
- Role explosion. If you find yourself creating a new role for every edge case, stop. You need attributes, not more roles. Our analysis of common authorization errors calls this out as one of the top failure patterns, and the fix is almost always to replace three narrow roles with one role plus a condition on an attribute.
- Writing policies without tests. Untested policies are worse than no policies, because they create the illusion of safety. Every policy change should go through CI with a test suite attached.
- Not versioning policies. Policies are code. They belong in git, with PRs, reviewers, and history. If your policy changes happen in a UI with no audit trail, you've reinvented the hardcoded problem in a different shape.
How to write authorization policies with an AI agent
The workflow above is the right workflow. It's also slow, especially when you're doing it for the tenth time this quarter and the business has handed you yet another resource type with yet another half-defined set of rules.
This is exactly the kind of job where an AI coding agent earns its keep. The agent doesn't know your business requirements, but it knows the policy language, the common patterns, the testing conventions, and the compilation rules. You bring the judgment. It handles the mechanical work.
We packaged this into the Cerbos policy skill, a drop-in skill for Claude Code, Cursor, Codex, OpenCode, and any other agent that supports the skills protocol. You describe what you need in plain language. The skill asks the clarifying questions, generates the bundle, and validates the output against our real compiler. It works in five phases, in order; a detailed walkthrough is available in the Cerbos documentation.
Spec intake. The skill asks questions in business language, not YAML. "Who can delete a project?" "Do admins genuinely need delete access, or is archive enough?" It pushes back on vague requirements because vague requirements are where security holes hide. Before writing a line of policy, it produces a short spec and asks you to confirm it.
Write. It generates everything in one pass. Principal and resource schemas go first. Then derived roles and shared variables. Then the actual resource policies. Then test fixtures and test suites. Every generated policy follows the patterns we use internally and recommend to customers. No wildcard actions. No overly permissive defaults. Conditions on every rule that needs them.
Validate. It runs `docker run --rm -v "$(pwd):/policies" ghcr.io/cerbos/cerbos:latest compile /policies` and checks for errors. It's the same compile step you'd run in CI.
Fix. If validation fails, the skill works through errors in priority order. YAML syntax first, then schema validation, then compile errors, then test failures. One fix per iteration. It will never delete a test to make things pass. After three attempts at the same error, it hands back to you with context.
Finalize. It reports what was created and flags any assumptions it made along the way. You review, push back, or ship.
The install is a single command:
`npx skills add cerbos/skills --skill cerbos-policy`
If you're using Claude Code directly, you can add the plugin marketplace instead:
`claude plugin marketplace add cerbos/skills`
We've been using this internally for a few months. The time savings on demo and customer POC work have been real, but the more interesting benefit is that the first draft is consistent. The skill doesn't get tired and skip test cases at 6pm. It doesn't copy-paste a wildcard because the meeting ran over. If you're new to Cerbos, that consistency also means you pick up the right patterns from day one instead of learning them the hard way.
Note: The Cerbos policy skill is a tool to help you get started. It is not a replacement for human review. Every policy the skill generates still needs eyes on it before it ships, because authorization is security and you should never trust AI with security. Use the skill to accelerate the draft, then review like you would any other PR.
Validating policies before they hit production
Whether you write policies by hand or with the skill, the validation loop is non-negotiable. At minimum, you want three layers.
Compile. The compiler catches schema errors, broken references, and malformed CEL conditions. Run it on every push. Our compile documentation walks through the step in detail.
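Wiring the compiler into CI is only a few lines. A sketch assuming GitHub Actions — adapt the trigger and paths to your setup:

```yaml
# .github/workflows/policy-checks.yaml (hypothetical)
name: policy-checks
on: [push]
jobs:
  compile:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Compile the policies; test suites found alongside them are run too.
      - run: |
          docker run --rm -v "$PWD/policies:/policies" \
            ghcr.io/cerbos/cerbos:latest compile /policies
```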
Test. Unit tests for each policy covering allow paths, deny paths, and edge cases. The policy testing framework lets you define fixtures and assertions in YAML alongside the policies themselves.
Audit. Once policies are live, you need audit logs for every decision. This is the backstop. If someone writes a policy that drifts from intent, audit logs are how you catch it before the security team does.
Teams at Utility Warehouse have described the framework as something their engineers picked up in a few days. The policy syntax is one part of that. The test-first workflow is the other. If you can't test it, you can't trust it, no matter how clean the YAML looks.
Getting started
Authorization policies don't have to be a weekly tax. A clear authorization matrix, a policy engine that runs outside your application code, a real test suite, and an AI skill that handles the drafting get you 80% of the way there on day one. The remaining 20% is judgment calls about what your business actually needs, which is where you'd rather be spending your time anyway.
If you want to see what this looks like with your own requirements, install the skill and point it at a scratch repo. It's free, it runs locally, and you'll know within ten minutes whether the output matches how you want your policies to look.
Try Cerbos Hub to deploy and manage the policies you generate, or book a workshop with a Cerbos engineer to talk through your authorization requirements.
Go deeper: Check out the Externalized authorization blueprint (eBook) for a full implementation playbook.