IIW42 recap: Where agent authorization got real

Published by Alex Olivier on May 07, 2026

The Internet Identity Workshop (IIW) is hands down one of my favorite events of the year. No keynotes, no vendor booths, no slick decks. Just 3 days of unconference sessions at the Computer History Museum in Mountain View, where everyone working on the messy edges of identity and authorization shows up, scribbles a session topic on a sticky note, and argues it out for an hour. I've been to a lot of conferences. None of them get close to the signal-to-noise ratio of IIW.

This was IIW42, and it was the one where agent authorization stopped being theoretical. The hallway track was the same conversation as the session track: how do you actually authorize an agent? Authentication is mostly a solved problem. The harder question is what the policy decision point (PDP) gets handed and what it has to reason about. Multiple agent identity frameworks got pitched, and multiple research demos converged on the same architecture: the agent is a dumb LLM in a sandbox, the trust boundary is the tool invocation, and a deterministic policy engine is the source of truth. It's the same pattern we've been pushing at Cerbos.
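A minimal sketch of that converged architecture, with the trust boundary at the tool invocation. The `policy_allows` function and the allow-list are illustrative stand-ins for a real PDP call (e.g. an HTTP check against a policy engine):

```python
# Illustrative sketch: the sandboxed LLM proposes tool calls, but nothing
# executes until a deterministic policy check approves the invocation.
# `policy_allows` stands in for a real PDP; the allow-list is hypothetical.

ALLOWED = {
    ("search_hotels", "booking-api"),
    ("create_reservation", "booking-api"),
}


def policy_allows(action: str, resource: str) -> bool:
    """Deterministic decision: same inputs, same answer, every time."""
    return (action, resource) in ALLOWED


def invoke_tool(action: str, resource: str, args: dict) -> str:
    # The trust boundary: the agent never touches a tool directly;
    # every proposed call passes through this gate first.
    if not policy_allows(action, resource):
        return f"DENY {action} on {resource}"
    return f"ALLOW {action} on {resource} with {args}"


print(invoke_tool("search_hotels", "booking-api", {"city": "Knoxville"}))
print(invoke_tool("wire_funds", "payments-api", {"amount": 10_000}))
```

The point of the pattern is that the probabilistic part (the LLM) can propose anything it likes; the deterministic part decides what actually runs.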

The interesting fights happened a level up from identity primitives. They were about what changes once you accept that an authorization request is no longer one human plus one role. A full hour went into whether the subject should even be in the request, or whether you should just hand the PDP an array of tokens and let policy walk them. One camp argued "drop the principal, all you have is evidence". The other argued "you need a stable principal or your audit trail dies". Everyone agreed on one thing: the subject is now a vector. Workload plus human, sometimes plus sub-agent, sometimes plus device attestation. The "principal: id + roles" model isn't going to cut it.
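A sketch of what "the subject is a vector" looks like in a request shape. The field names and the chain-walking rule here are illustrative assumptions, not any framework's actual schema:

```python
# Illustrative sketch: the subject of an authorization request as a
# vector of evidence (human + workload + device ...) that policy walks,
# rather than a single principal with id + roles. Names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Evidence:
    kind: str      # "human", "workload", "sub-agent", "device"
    subject: str   # stable identifier, kept for the audit trail
    claims: dict = field(default_factory=dict)


@dataclass
class AuthzRequest:
    evidence: list[Evidence]   # the whole chain, not one principal
    action: str
    resource: str


def decide(req: AuthzRequest) -> bool:
    # Example policy walking the chain: a human must anchor it, and
    # every workload hop must carry a valid attestation claim.
    kinds = {e.kind for e in req.evidence}
    if "human" not in kinds:
        return False
    return all(e.claims.get("attested", False)
               for e in req.evidence if e.kind == "workload")


req = AuthzRequest(
    evidence=[
        Evidence("human", "alice", {"role": "traveller"}),
        Evidence("workload", "booking-agent", {"attested": True}),
    ],
    action="create_reservation",
    resource="booking-api",
)
print(decide(req))
```

Note that both camps' concerns coexist here: policy evaluates the evidence array, while the stable `subject` identifiers survive for accountability.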

Two ideas I keep coming back to. First, intent drift. An agent kicks off saying "book a hotel in Knoxville on April 23rd" and 50 tool calls later it's wiring money to a Cayman account. Each individual call might pass policy in isolation, but the gap between declared intent and runtime behaviour is where the actual risk lives. The action plane ("can this principal do this on this resource?") needs an intent plane next to it. Second, the cross-trust-domain problem. Inside one company, one PDP, one schema, sub-agent attenuation works fine. The minute the agent crosses into a system with its own vocabulary, your "edit photo" is their "modify metadata" and there's no clean translation. Nobody has a real answer yet. I think this is the meaty long-term problem in our space.
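One way to picture the intent plane: compile the declared intent into a scope up front, then check every subsequent tool call against that scope in addition to ordinary policy. The scope mapping below is an illustrative assumption:

```python
# Illustrative sketch of an "intent plane" next to the action plane.
# The declared intent is compiled into an allowed scope of actions,
# and every tool call is checked against it. Mapping is hypothetical.

INTENT_SCOPES = {
    "book_hotel": {
        "search_hotels", "check_availability",
        "create_reservation", "charge_card",
    },
}


def within_intent(declared_intent: str, action: str) -> bool:
    """Flag any call outside the scope the declared intent compiled to."""
    return action in INTENT_SCOPES.get(declared_intent, set())


calls = ["search_hotels", "check_availability",
         "create_reservation", "wire_funds"]

for action in calls:
    status = "ok" if within_intent("book_hotel", action) else "INTENT DRIFT"
    print(f"{action}: {status}")
```

Each call can still pass the action-plane check on its own; it's the comparison against the declared intent that catches the `wire_funds` outlier.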

The line of the week, paraphrased: identity is optional for authorization, but identity is critical for accountability. That reframes a lot of the agent identity debate. You can often make a sensible policy decision without knowing exactly who the principal is (sometimes you genuinely can't know). But when something goes wrong at 3 AM, knowing who or what to point at is what actually matters.

It's also a pretty clean argument for modelling authorization around capabilities (action × resource) rather than around identities. Traditional IGA inventories people. In an agentic world the agents are ephemeral, the identities are exploding, and most "access" is held by short-lived workloads you'll never review in time. The right primary key lives at the door, where the risk actually is.

I walked away from IIW42 with more questions than answers. Which is exactly why I keep going.

FAQ

What was the main theme of IIW42?

Agent authorization moving from theory to practice. The sessions and the hallway track converged on the same architecture: the agent is a sandboxed LLM, the trust boundary sits at the tool invocation, and a deterministic policy engine makes the final decision.

What is intent drift in AI agent authorization?

The gap between an agent's declared intent and its runtime behaviour. Each individual tool call can pass policy in isolation while the overall trajectory drifts away from what the agent set out to do, which is why an intent plane is needed alongside the action plane.

Why does a deterministic policy engine matter for AI agent authorization?

Because the agent itself is probabilistic. Keeping the decision in a deterministic engine at the tool-invocation boundary means the same inputs always produce the same answer, which makes decisions predictable, auditable, and enforceable outside the model.

Book a free Policy Workshop to discuss your requirements and get your first policy written by the Cerbos team