In this post, we’ll break down insights from a recent Software Engineering Radio episode with Cerbos co-founders Emre Baran and Alex Olivier, where they spoke with Priyanka Raghavan, host of the podcast, on building stateless, externalized (decoupled) authorization frameworks.
We’ll explore authentication vs authorization, compare access control models, discuss why granular access is crucial (with real-world failures when it’s missing), examine the benefits and challenges of externalized authorization, and delve into how to implement and evolve a policy-based system. We’ll also touch on when to build vs buy authorization, and even how these concepts apply to AI-driven systems – including why authorization decisions must remain deterministic.
You don’t need to have listened to the podcast to follow along – but if you’re curious, you can check out the recording below. Let’s dive in.
Before jumping into authorization frameworks, it’s critical to distinguish authentication from authorization. Authentication is about verifying who you are – confirming your identity (for example, via a password, passport, or biometric) and establishing what attributes or roles are tied to you. Authorization, on the other hand, is about determining what you’re allowed to do now that your identity is known. In other words: just because you’ve logged in (authN) doesn’t mean you can access everything – authorization is the guard that decides which actions you can perform or which resources you can access.
For further details, feel free to review our blog on this exact topic.
Why stress this difference? Because it’s common to implement authentication (login, identity management) and assume authorization is “handled” by simple role checks. But broken authorization is consistently the number one web app security risk (OWASP’s Top 10). Robust authorization needs its own focus – which leads us to thinking about models and frameworks purpose-built for the task.
So how do we decide who can do what? Over the years, several authorization models have emerged:

- RBAC (role-based access control): users are assigned roles, and roles carry permissions. Simple and widely understood, but roles tend to multiply as requirements get more granular.
- ABAC (attribute-based access control): decisions take into account attributes of the user, the resource, and the context (department, region, ownership, time of day), enabling much finer-grained rules.
- ReBAC (relationship-based access control): access derives from relationships between users and resources, such as “owner of this document” or “member of this team.”
- PBAC (policy-based access control): whatever mix of roles, attributes, and relationships you use, the rules are expressed as centralized, declarative policies rather than scattered through code.
There’s no one-size-fits-all model – often real-world systems blend them. It’s common to start with RBAC and sprinkle in a few attribute checks, making it “RBAC with ABAC elements”, or to use relationships for specific features. PBAC often emerges when teams want centralized policy control, better auditability, and faster iteration on permission logic. Alex notes that the right approach “really goes back to your requirements”. The key is to design a model that is granular enough to enforce least privilege, but not so complex that it’s unmanageable. For more insights, check out our blog on mapping business requirements to authorization policy.
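As a sketch of what “RBAC with ABAC elements” means in practice (all names invented for illustration): a role grants an action in principle, and attribute conditions narrow it. In an externalized setup, this same logic would live in a policy rather than application code.

```python
# "RBAC with ABAC elements": a role grants the action in principle,
# and attribute conditions narrow it. Names are illustrative.

def can_approve_expense(user: dict, expense: dict) -> bool:
    is_manager = "manager" in user["roles"]              # RBAC: role check
    same_region = user["region"] == expense["region"]    # ABAC: attribute check
    under_limit = expense["amount"] <= user["approval_limit"]
    return is_manager and same_region and under_limit

manager = {"roles": ["manager"], "region": "EMEA", "approval_limit": 5000}
assert can_approve_expense(manager, {"region": "EMEA", "amount": 1200})       # role + attrs OK
assert not can_approve_expense(manager, {"region": "APAC", "amount": 1200})   # wrong region
assert not can_approve_expense(manager, {"region": "EMEA", "amount": 9000})   # over limit
```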
Having fine-grained authorization isn’t just an academic concern – it can make or break the security of your product. Emre emphasizes that many companies learn this the hard way: coarse or ad-hoc access controls often lead to embarrassing and dangerous failures.
For example, imagine a neobank that lets you open a business account. If the app doesn’t support roles or permissions on that account, every user you share the login with could have full access to all features. One employee could initiate unlimited fund transfers or view all financial data simply because the system lacks more granular controls. This “all or nothing” access might be acceptable in a tiny startup, but as usage grows it becomes a massive risk – least privilege goes out the window.
In a more dire case, consider a large ride-sharing company in its early days. They built internal tools for customer support and ops teams – but without proper authorization partitioning. The result? Employees had “God mode” access to everything, including sensitive customer travel records. In one notorious incident, employees were able to pull up private data about celebrity riders’ trips, with no real justification. Obviously, that should never have been allowed – only specific support staff under specific circumstances (say, handling a complaint) should see a user’s trip details. The absence of granular role-based rules or contextual checks was a privacy disaster waiting to happen.
These examples underline a key point: authorization is not optional. Broken access control is consistently the top web security issue for a reason. If you don’t design proper permissions, users will do things they shouldn’t, or internal actors will abuse overly broad access. It can lead to data breaches, compliance violations, and loss of user trust. As a developer or architect, you need to bake in the right authorization model early – otherwise you’ll be scrambling to retrofit it later, likely after an incident forces your hand.
Many teams start with simple in-code checks (if/then/else logic). That works for a while. But as your application grows – especially if you adopt microservices or multiple modules – those scattered checks become technical debt. Every new permission requirement means hunting down and updating logic in many places, potentially in different languages or repos. It’s error-prone and hard to keep consistent.
The remedy is to centralize and externalize authorization logic into its own component or service – essentially taking all those hardcoded rules out of your application code and moving them into a central policy decision point. Alex explains that if you continue embedding checks in each service, you’ll eventually have “spaghetti code” as requirements evolve. Instead, by externalizing (decoupling), you aim for a single source of truth for permissions.
What does an externalized authorization architecture look like? In practice, you introduce an authorization service (it could be a standalone server, a library, or a sidecar – more on deployment in a bit) that knows how to evaluate policies. Your application code, instead of doing local `if role == "admin"` logic, will call this service and ask “Is user X allowed to do action Y on resource Z?” The service will evaluate the request against your centralized policies and respond “allow” or “deny.” This design is often drawn from the standard XACML model: a Policy Decision Point (PDP) answering allow/deny, which your app consults before executing sensitive actions.
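The contract can be sketched in a few lines. This hypothetical in-process class stands in for a real PDP service (which you would normally call over HTTP or gRPC via an SDK); what matters is the shape of the question and the answer.

```python
# The PDP pattern sketched in-process. A real deployment would call a
# PDP service over HTTP/gRPC; this toy class only shows the contract:
# the app asks "can principal P do action A on resource R?" and gets allow/deny.

class PolicyDecisionPoint:
    """Evaluates requests against centralized policies (here, a toy rule set)."""
    POLICIES = {
        "invoice": {"view": {"admin", "finance", "viewer"}, "pay": {"finance"}},
    }

    def is_allowed(self, principal: dict, action: str, resource: dict) -> bool:
        permitted_roles = self.POLICIES.get(resource["kind"], {}).get(action, set())
        return bool(permitted_roles & set(principal["roles"]))

pdp = PolicyDecisionPoint()

# Application code (the enforcement point) asks before acting:
principal = {"id": "alice", "roles": ["finance"]}
invoice = {"kind": "invoice", "id": "inv-42"}
assert pdp.is_allowed(principal, "pay", invoice)          # allow
assert not pdp.is_allowed(principal, "delete", invoice)   # deny: no policy grants it
```

Notice that the application never inspects roles itself; it only asks the question and honors the answer, so the rules can change without touching app code.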
By doing this, you gain several big advantages:

- A single source of truth: permission logic lives in one place instead of being duplicated (and drifting) across services.
- Faster, safer changes: updating a rule means changing a policy, not hunting through application code in multiple repos and languages.
- Consistency: every service, whatever its language, enforces exactly the same rules.
- Auditability and testability: you can review, test, and reason about access rules as a unit, and answer “who can do what?” in one place.
- Independent scaling: the decision point can be deployed and scaled separately from the services that call it.
In summary, externalizing authorization means treating access control as a first-class component of your architecture rather than an afterthought. It’s about centralizing the “who can do what” logic so that it’s easier to change, reason about, test, and scale.
To learn more about the pros and cons of externalizing authorization, feel free to check out these two blogs: Benefits, trade-offs.
Externalizing is step one – but how you implement that authorization service matters too. The modern approach discussed by Emre and Alex is to keep the authorization layer stateless and policy-driven.
As we touched on earlier, PBAC means the rules are expressed as declarative policies (think of them like firewall rules or configuration) rather than hardcoded in code. Cerbos, for instance, uses YAML files to define policies for each resource type (e.g., an “Invoice” policy file might declare who can `view`, `edit`, or `pay` an invoice). These policies can include role checks, attribute conditions, and even relational conditions – basically capturing RBAC/ABAC logic in a structured way. The key benefit is that policies are versionable and testable artifacts. They live in Git, you can do code reviews on them, write tests for them, and evolve them alongside your application. They also provide a layer of abstraction: your app code asks “can X do Y to Z?” and doesn’t need to know the fine details – it’s all in the policy.
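As a concrete illustration, here is what a resource policy for the invoice example can look like. The overall shape (`apiVersion`, `resourcePolicy`, rules with actions, roles, effects, and conditions) follows Cerbos’s documented policy format; the specific roles and the status condition are invented for illustration.

```yaml
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "invoice"
  rules:
    # RBAC: anyone with one of these roles may view.
    - actions: ["view"]
      effect: EFFECT_ALLOW
      roles: ["admin", "finance", "viewer"]

    # RBAC + ABAC: finance may pay, but only approved invoices.
    - actions: ["pay"]
      effect: EFFECT_ALLOW
      roles: ["finance"]
      condition:
        match:
          expr: request.resource.attr.status == "approved"
```

Because this is just a file, it can be code-reviewed, tested, and rolled back like any other change.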
Now let’s talk about “stateless”. A stateless authorization service does not maintain its own database of users, roles, or sessions. It doesn’t store who has access to what internally. Instead, all necessary context, such as user attributes, resource attributes, and relationships, must be supplied with each check request, or fetched from external sources. Why stateless? Because it massively simplifies scaling and deployment. If the PDP doesn’t need to replicate any state, you can run a PDP instance anywhere – even one per application instance – without worrying about data consistency between them. Cerbos is designed to be stateless in this manner: it loads the policy files into memory and evaluates requests purely based on the input context, much like a function. This means you can deploy lots of PDP instances (as sidecars, or as a library in-process) to eliminate network latency, since none of them need a shared session or cache. Every instance is identical, just holding the policy logic.
Stateless, however, doesn’t mean “no data”. It means the authorization service itself isn’t the source of truth for user or object data – you might still need to pass in data or references. In practice, adopting stateless authorization pushes you to provide rich context with each check. A common pattern is token enrichment: include the user’s roles, groups, or other claims in their authentication token so that the PDP doesn’t have to ask another service for them. If your IdP (Identity Provider) doesn’t put all needed info in the token, you might extend it or have a middleware fetch extra attributes. The benefit is that a well-enriched token allows the PDP to make a decision immediately and autonomously. The trade-off is deciding how much to pack into tokens – too much can bloat them and risk stale data. It’s a design consideration, but one that comes with embracing statelessness.
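A sketch of the token-enrichment pattern (names hypothetical, and skipping signature verification, which real code must do first): the decoded token’s claims become the principal context for the check, so the PDP needs no further lookups.

```python
# Token enrichment: roles/attributes travel inside the auth token, so a
# stateless PDP can decide from the request alone. The dict below stands in
# for a decoded JWT payload -- real code must verify the token's signature
# before trusting any of these claims.

claims = {  # hypothetical decoded token payload
    "sub": "alice",
    "roles": ["finance"],
    "department": "accounts-payable",
}

def build_check_request(claims: dict, action: str, resource: dict) -> dict:
    """Assemble the full context the PDP needs -- no extra lookups required."""
    return {
        "principal": {
            "id": claims["sub"],
            "roles": claims["roles"],
            "attr": {"department": claims["department"]},
        },
        "action": action,
        "resource": resource,
    }

req = build_check_request(claims, "pay", {"kind": "invoice", "id": "inv-42"})
assert req["principal"]["roles"] == ["finance"]  # roles came from the token itself
```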
In short, stateless PBAC systems like Cerbos treat authorization as pure functions: given input (who, what, action, context), output a decision. No hidden state. This purity yields consistency and trustworthiness – and as a bonus, it’s easier to test, since you don’t need to simulate some database of users; you just feed in context to your test harness. As Emre noted, building Cerbos specifically for the application layer, and not as a general-purpose policy engine, also allowed them to keep it lightweight and fast, with minimal CPU/memory overhead. In fact, he shared that running Cerbos as a sidecar adds virtually zero noticeable load to an app, but provides huge flexibility in return.
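Because the decision is a pure function of its inputs, testing reduces to table-driven cases: no user database to stand up, just inputs and expected outcomes. A minimal sketch, with a toy function standing in for the real engine:

```python
# Stateless authorization as a pure function: same inputs, same answer.
# Testing needs no database -- just feed in (principal, action, resource) tuples.

def decide(principal: dict, action: str, resource: dict) -> str:
    """Toy stand-in for a PDP evaluation; deterministic by construction."""
    if action == "view" and set(principal["roles"]) & {"admin", "viewer"}:
        return "allow"
    if action == "edit" and "admin" in principal["roles"]:
        return "allow"
    return "deny"

CASES = [
    ({"roles": ["viewer"]}, "view", {"kind": "report"}, "allow"),
    ({"roles": ["viewer"]}, "edit", {"kind": "report"}, "deny"),
    ({"roles": ["admin"]},  "edit", {"kind": "report"}, "allow"),
]

for principal, action, resource, expected in CASES:
    assert decide(principal, action, resource) == expected
    # Determinism: evaluating again never changes the outcome.
    assert decide(principal, action, resource) == decide(principal, action, resource)
```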
A question that often arises: “This sounds great, but do we really need a third-party solution? Can’t we just implement our own authorization service internally?” It’s a fair question, especially for organizations with unique requirements. Emre and Alex have firsthand experience here – before founding Cerbos, they collectively built custom authz systems “10 times” across various companies. They realized they were solving the same problem over and over without it adding business value: authz is critical, but for most apps it’s undifferentiated infrastructure.
For a deep-dive on the build vs. buy question, feel free to read through this blog. Otherwise, let’s continue with a very brief summary.
The bottom line is, externalized authorization is becoming a known best practice – and there’s a growing ecosystem of libraries and services, such as Cerbos PDP, to help you implement it. Unless you have a very compelling reason to create your own from scratch, you’re probably better off evaluating existing options. Even big firms are open-sourcing their authz systems (e.g. OPA and Amazon’s Cedar language) because the community recognizes the need not to duplicate this sensitive plumbing. Adopting and adapting an existing solution lets your team spend time on features that matter to your users.
An intriguing part of the discussion was how emerging AI/LLM-based features introduce new challenges for authorization. We’re now in an era where apps might have a chatbot interface or an AI agent that interacts with users and possibly takes actions on their behalf. How does our carefully crafted authorization model apply here? Priyanka (the SE Radio host) posed the scenario of companies building chatbots or using large language models (LLMs) connected to their data. Traditionally, we secured the backend (APIs, databases) and maybe the frontend. But with an AI in the mix, there’s effectively a third layer: the AI system that might bypass normal app logic.
Consider a company analytics chatbot. If a CEO asks “Show me the total payroll for the company,” the bot should fetch and answer with company-wide data, since the CEO is allowed to see everything. If a regional manager asks the same question, the bot should only reveal the payroll for that manager’s region. If our authorization is properly in place, the underlying API call for “get payroll” would enforce that. But what if the AI can access data directly or generate answers from a vector database? If the AI isn’t constrained, it could inadvertently leak information across boundaries. Emre describes this as the AI potentially bypassing your backend and frontend security if not controlled. We’ve seen real incidents: a certain car dealership’s chatbot was tricked (via prompt injection) into essentially giving away a car for $1 due to no safeguards (the “Chevy chatbot” story Emre alluded to). Airlines also saw people exploit chatbots to get unauthorized discounts or refunds. These are like new-age security vulnerabilities.
The solution? Apply authorization checks to AI outputs and actions too. Emre suggests that every AI agent or chatbot that can act on user data should have a PDP in front of it, governing its access. In practice, this might mean two things:

- Checking what the AI can read: every piece of data retrieved to answer a prompt (database rows, documents, vector-search results) passes through the same permission checks as a normal API call, evaluated in the context of the requesting user.
- Checking what the AI can do: before an agent executes an action on a user’s behalf (issuing a refund, applying a discount, initiating a transfer), that action goes through the same allow/deny decision a human-initiated request would.
The broader point is that AI does not remove the need for authorization – in fact, it makes it more important to enforce it. Large language models are great at generating outputs from data, but they have no inherent notion of a user’s permissions or privacy. It’s up to us to impose those constraints. By integrating your authz service with your AI system’s retrieval and execution steps, you mitigate the risk of LLM “hallucinations” exposing sensitive info or performing unauthorized actions. You essentially sandbox the AI to operate within the user’s allowed boundaries.
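The retrieval side of this can be sketched as follows: a permission-aware retrieval step filters every candidate document through an authorization check before it reaches the LLM’s context window. All names here are hypothetical stand-ins; a production system would call its PDP instead of the toy `is_allowed` below.

```python
# Permission-aware retrieval: before any document reaches the LLM's context
# window, it must pass the same authorization check as a normal API call.
# All names are illustrative; a real system would call its PDP here.

def is_allowed(principal: dict, action: str, doc: dict) -> bool:
    """Stand-in for a PDP check: regional managers see only their own region."""
    if "ceo" in principal["roles"]:
        return True
    return "manager" in principal["roles"] and principal.get("region") == doc["region"]

def retrieve_for_prompt(principal: dict, candidates: list[dict]) -> list[dict]:
    """Filter vector-search hits down to what this user may actually read."""
    return [doc for doc in candidates if is_allowed(principal, "read", doc)]

docs = [{"id": 1, "region": "EMEA"}, {"id": 2, "region": "APAC"}]
ceo = {"roles": ["ceo"]}
emea_mgr = {"roles": ["manager"], "region": "EMEA"}

assert len(retrieve_for_prompt(ceo, docs)) == 2                        # CEO: company-wide view
assert [d["id"] for d in retrieve_for_prompt(emea_mgr, docs)] == [1]   # manager: own region only
```

This is exactly the payroll-chatbot scenario above: the same question yields different answers depending on who is asking, because the filter runs before generation, not after.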
Alex and Emre also discussed where AI/ML can help in the authorization space, as opposed to being a risk. Two promising areas:

- Policy authoring: using AI to help translate plain-language requirements into draft policies, which humans then review, test, and commit like any other code.
- Analyzing outcomes: mining authorization decision logs to surface anomalies, unused permissions, or overly broad grants that should be tightened.
However, one area they were adamant not to involve AI: the core decision engine itself. Authorization decisions should be deterministic and based on explicit rules, not the probabilistic whims of an ML model. You wouldn’t want a neural network deciding whether to grant access, possibly giving different answers each time or being sensitive to odd inputs. Alex quipped about not wanting to worry about the “temperature” setting of a model when it comes to security. The risk of false positives/negatives and the lack of explainability make AI unsuitable to replace a rules engine. Instead, keep the enforcement layer clear-cut – every input either meets the policy or not, and the outcome is guaranteed to be the same given the same inputs. Determinism is crucial for trust here. We can leverage AI around this core, to write policies or analyze outcomes, but the enforcement should remain solid code – whether that’s a handcrafted engine like Cerbos or any other predictable system.
This perspective rings true: use AI to enhance the developer and admin experience of managing authorization, but not to actually determine permissions on the fly. The latter would lead to unpredictable security, which no one wants.
Modern application security demands a robust approach to authorization – one that is granular, scalable, and maintainable. As we’ve seen, the industry is moving away from sprinkling role checks in code and toward dedicated, stateless authorization services governed by clear policies. Adopting this model brings tangible benefits: you can evolve access rules quickly, avoid security slip-ups that come with ad-hoc implementations, and gain a holistic view of who is doing what in your system.
Modern authorization is about separating policy from code and treating it as a core part of your architecture. The approaches discussed aren’t just theoretical – they’re being used in production by companies large and small to secure everything from B2B SaaS platforms to fintech apps. As the complexity of systems and the stakes of breaches continue to rise, having a solid authorization foundation is as important as having a solid authentication system.
Keep authorization in mind from day one. Embrace granular controls, leverage decoupled frameworks, and you’ll be rewarded with a more secure application that can adapt to change with less fuss. And if you haven’t already, check out some of the open-source projects in this space, such as Cerbos PDP – a little investment now in setting up a PBAC system can save you countless hours and incidents down the road.
Finally, if you’re curious to dig deeper and implement what we’ve discussed in practice, check out the Cerbos documentation and join the Cerbos Slack community. Let’s build applications that are not just feature-rich, but also secure by design – no “God mode” surprises for our users.
Book a free Policy Workshop to discuss your requirements and get your first policy written by the Cerbos team.