Key takeaways from Workload Identity Day 0 at KubeCon

Published by Alex Olivier on November 27, 2025

Workload Identity Day 0 at KubeCon crystallized something we've been watching build for the past year. Workload identity has moved from optional to foundational. Just as we're getting comfortable with it for traditional services, a new wave is coming that will test everything we've built.


SPIFFE won

The workload identity wars are over. SPIFFE has won. Across every presentation, every hallway conversation, every demo, SPIFFE was the assumed standard. Not debated. Not questioned.

Uber alone issues over a billion SPIFFE-based credentials per day. This isn't experimental infrastructure. It's battle-tested at hyperscale. The ecosystem has converged, and that convergence matters more than you might think.

When everyone rallies around a common standard, it unlocks innovation at higher layers. We're done arguing about how to establish workload identity. Now we can focus on what to do with it.


The non-human identity explosion nobody talks about

For every human identity in your organization, there are approximately 80 non-human identities. Service accounts, API keys, machine credentials, OAuth tokens, username-password combinations buried in automation scripts. These are all identities, methods of granting access to services, and they've proliferated like weeds.

We created this sprawl because we had to. Services need to talk to services. Automation needs to run. CI/CD pipelines need credentials.

But we've been managing this 80-to-1 sprawl of non-human identities with mechanisms designed for humans. Or worse, with no real management at all. That API key sitting in a config file is an identity. That service account credential copied across three environments "just temporarily" is an identity. That OAuth token that was supposed to expire but somehow still works three years later is also an identity.

We've built castles on foundations of sand. Through network segmentation, implicit trust, and honestly a fair bit of luck, it's been mostly working. Until now.


AI exposes what we've been ignoring

AI systems aren't creating this identity problem. They're brutally exposing fundamental flaws in how we've been architecting identity management for non-human actors. We've been treating identity as an afterthought, compensating for weak identity mechanisms with hope that nothing goes wrong.

AI agents will be yet another category of non-human identity. They'll operate at a scale and speed that exposes every shortcut we've taken. Unlike service accounts or API keys that mostly do the same thing repeatedly, AI agents will be dynamic, exploratory, and entirely unpredictable.

We need proper mechanisms and controls in place to manage non-human identities effectively. Not just for AI agents, but for everything. The systems we're building now need to treat identity with the same rigor we're finally applying to workload identity through standards like SPIFFE.


AI agents are just workloads that happen to be terrifying

The consensus that emerged from the day's discussions was clear. AI agents are workloads. They need identities and credentials just like any other service.

Authentication, proving "this is agent X," is actually the easy part. SPIFFE handles it beautifully: issue the agent a SPIFFE identity, give it credentials, done. But authentication is where the similarity ends.
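
To make "issue the agent a SPIFFE identity" concrete, here is a minimal sketch of what the workload side can look like with the go-spiffe v2 library. The socket path and the example SPIFFE ID are illustrative assumptions; in most deployments a SPIRE agent (or another Workload API implementation) sits behind that socket.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx := context.Background()

	// Connect to the local SPIFFE Workload API (e.g. a SPIRE agent socket) and
	// keep an attested, automatically rotated X.509 SVID in memory.
	// The socket path is illustrative; it is often set via SPIFFE_ENDPOINT_SOCKET.
	source, err := workloadapi.NewX509Source(ctx,
		workloadapi.WithClientOptions(workloadapi.WithAddr("unix:///run/spire/sockets/agent.sock")),
	)
	if err != nil {
		log.Fatalf("unable to create X509Source: %v", err)
	}
	defer source.Close()

	// The SVID is the agent's verifiable identity: the "this is agent X" part.
	svid, err := source.GetX509SVID()
	if err != nil {
		log.Fatalf("unable to fetch SVID: %v", err)
	}
	fmt.Println("workload identity:", svid.ID) // e.g. spiffe://example.org/ai-agent/support
}
```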

Once that AI agent has proven who it is, the hard question becomes what it should be allowed to do. This is where authorization becomes critical. This is where the unique behavioral characteristics of AI agents make everything complicated.

A traditional workload is predictable. Your payment service talks to the database, the message queue, maybe an external API. You can write policies for that. You can reason about that.

An AI agent might need to access the customer database, then decide it needs to check inventory, then realize it should send an email, then figure out it needs to update a ticket in your support system. All in pursuit of a single user request. The access patterns are dynamic, context-dependent, and fundamentally unpredictable. This is why fine-grained authorization controls become non-negotiable for AI workloads.
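
To make that concrete, here is a sketch of per-step authorization for an agent's plan: every action the agent improvises is checked against policy before it runs, rather than granting the agent blanket access up front. The Step type and the isAllowed and execute callbacks are hypothetical stand-ins for whatever policy decision point and tooling you actually use.

```go
package agentguard

import (
	"context"
	"fmt"
)

// Step is one action in the agent's dynamically generated plan.
type Step struct {
	Action   string // e.g. "read", "send_email", "update_ticket"
	Resource string // e.g. "customer_record:C-9031", "support_ticket:4521"
}

// RunPlan executes an agent's plan, but only after each individual step has
// been authorized against policy with the current context. Deny by default.
func RunPlan(
	ctx context.Context,
	agentID string,
	plan []Step,
	isAllowed func(ctx context.Context, agentID, action, resource string) (bool, error),
	execute func(ctx context.Context, step Step) error,
) error {
	for _, step := range plan {
		ok, err := isAllowed(ctx, agentID, step.Action, step.Resource)
		if err != nil {
			return fmt.Errorf("authorization check failed: %w", err)
		}
		if !ok {
			// Nothing runs unless policy explicitly permits it.
			return fmt.Errorf("agent %s denied %q on %q", agentID, step.Action, step.Resource)
		}
		if err := execute(ctx, step); err != nil {
			return fmt.Errorf("step %q on %q failed: %w", step.Action, step.Resource, err)
		}
	}
	return nil
}
```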


AI systems behave like employees without morals

During a panel discussion, we explored an uncomfortable metaphor. AI systems behave like employees without morals. Not because they're malicious, but because they fundamentally lack the human context that informs every decision we make.

A regular employee with database admin access won't actually drop the production database. Even if they technically have permission. They've been trained, they understand consequences, they've sat through cybersecurity awareness courses. They have that internal voice asking "should I really do this, what's the blast radius here?"

AI agents don't have that voice. Give an AI agent access to functionality, whether through delegation or worse through impersonation, and it will attempt every single action available to achieve its goal. No hesitation, no second-guessing, no concept of "maybe I shouldn't do that because it's not necessary for what I'm trying to accomplish."

It's like giving an intern admin access to every system and telling them to "figure it out." Except this intern will simultaneously try every button, flip every switch, and pull every lever at machine speed. Zero understanding of side effects, dependencies, or why some things shouldn't be done just because they can be. If you give an AI system access to the world, it will try to use the entire world, all of it, all at once, to reach its objective.


We've been lying to ourselves about security

AI isn't creating new security problems. It's exposing the ones we've been living with all along. For years, we've built systems where identities have far more permissions than they need. We've compensated for overly permissive access controls by relying on human judgment.

We've accepted "well, they could do that, but they won't" as an actual security posture. That worked, sort of, barely. Because every identity was attached to a human with context, training, institutional knowledge, and crucially, self-preservation instincts.

That security model completely falls apart when those identities are attached to AI agents operating at machine speed. There's no training, no context, no self-preservation. Just relentless optimization toward a goal, regardless of collateral damage.


Least privilege, but actually this time

The solution isn't revolutionary. It's the principle we've been talking about for decades but have been consistently terrible at implementing. An identity should only be able to do what it needs to do, and nothing more.

True least privilege, actually enforced, not aspirational. With AI agents on the horizon, this isn't just security best practice anymore. It's existential. If we don't get this right, we're handing autonomous systems the keys to everything and hoping they figure out restraint on their own. They won't.

What does getting it right actually look like? Issue proper identities to AI agents. These ephemeral, non-human actors need trusted identity from a verifiable source. SPIFFE gives us the mechanism to do this. We can't have AI agents running as "system" or, worse, impersonating user accounts.

Implement granular authorization. Not "can this agent access the database" but "can this agent read these specific records, for this specific purpose, right now, given this context." Policy-based authorization that evaluates every request against context.

Make authorization decisions based on the full picture. Who or what is requesting access, what are they trying to do, what resources are involved, what's the broader context, what's the risk profile of this request compared to normal behavior. Treat AI agents as untrusted by default, not because they're malicious, but because they lack the judgment layer that humans provide.
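
As a rough sketch of what such a contextual check could look like, here is a single request evaluated with the Cerbos Go SDK (github.com/cerbos/cerbos-sdk-go) against a locally running policy decision point. The principal, resource, attributes, and the policy that would evaluate them are illustrative assumptions, not a prescribed model.

```go
package main

import (
	"context"
	"log"

	"github.com/cerbos/cerbos-sdk-go/cerbos"
)

func main() {
	ctx := context.Background()

	// Connect to a Cerbos PDP (address and plaintext transport are illustrative).
	c, err := cerbos.New("localhost:3593", cerbos.WithPlaintext())
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}

	// Who is asking: the agent's workload identity plus the context it carries.
	principal := cerbos.NewPrincipal("spiffe://example.org/ai-agent/support", "ai_agent").
		WithAttr("purpose", "resolve_ticket_4521").
		WithAttr("on_behalf_of", "user_1138")

	// What is being touched: a specific record, not "the database".
	resource := cerbos.NewResource("customer_record", "C-9031").
		WithAttr("contains_pii", true).
		WithAttr("region", "eu")

	// Can this agent read this record, for this purpose, right now?
	allowed, err := c.IsAllowed(ctx, principal, resource, "read")
	if err != nil {
		log.Fatalf("authorization check failed: %v", err)
	}
	log.Printf("read allowed: %v", allowed)
}
```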


Security needs to shift down, not left

Something keeps coming up in conversations with security leaders, platform teams, and engineers. Security isn't a shift-left problem anymore; it's a shift-down problem. It should be standardized and built into the architecture, the frameworks, and the toolchain, so that products inherit it by default.

For years, shift left has meant handing more security responsibility to engineers. More SDKs to learn, more configuration to manage, more time spent reinventing the same controls across every microservice. It scales effort, not assurance. Every team builds their own authorization logic, their own identity handling, their own audit logging.

A shift-down approach flips that model. Security becomes part of the foundation, a shared service, a reusable component that every product automatically benefits from. Instead of asking each engineering team to become security experts, you build security into the platform layer where they work.

This matters especially for authorization with AI agents. You can't have 50 different teams implementing 50 different authorization patterns for their AI-powered features. The blast radius of getting it wrong is too high. The complexity of doing it right is too demanding. Authorization needs to be a platform capability that teams consume, not a problem every team solves independently.

When authorization lives in the platform, updating policies to handle new AI agent behaviors becomes a platform change. Not a cross-team coordination nightmare. Not months of engineering effort scattered across dozens of services. One change, propagated everywhere, enforced consistently.
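
Here is one sketch of what that platform capability could look like: a single HTTP middleware owned by the platform team that extracts the caller's workload identity from the mTLS connection and defers every decision to one shared policy decision point. The AuthzFunc signature and the identity extraction are simplified placeholders, not a prescribed design.

```go
package platform

import (
	"context"
	"net/http"
)

// AuthzFunc is the platform's single authorization decision point.
// Teams never implement this themselves; they inherit it from the chassis.
type AuthzFunc func(ctx context.Context, identity, action, resource string) (bool, error)

// RequireAuthz wraps any handler with a policy check before the handler runs.
func RequireAuthz(check AuthzFunc, action, resource string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		identity := identityFromRequest(r)
		ok, err := check(r.Context(), identity, action, resource)
		if err != nil || !ok {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}

// identityFromRequest pulls the caller's SPIFFE ID from the mTLS peer
// certificate's URI SAN, if one is present.
func identityFromRequest(r *http.Request) string {
	if r.TLS != nil && len(r.TLS.PeerCertificates) > 0 {
		if uris := r.TLS.PeerCertificates[0].URIs; len(uris) > 0 {
			return uris[0].String()
		}
	}
	return "anonymous"
}
```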


Provisioning identity is just the beginning

Provisioning identities is only half the battle. Integration is what drives actual adoption. This came up repeatedly throughout the day.

You can stand up the most elegant SPIFFE deployment in the world. Issue credentials to every workload, have perfect attestation and rotation. And still fail if your applications can't easily consume those credentials.

Many organizations hit a wall here. They've invested months building out their workload identity infrastructure. Then they discover that getting their hundreds of services to actually use it is going to take years of developer time. Each team needs to modify their applications, understand the SPIFFE APIs, handle credential rotation, deal with edge cases.

Unless consuming workload identity is easier than the alternative, adoption will stall. Easier than hardcoded secrets, easier than long-lived tokens, easier than "just put it in the env var." Developers will find workarounds. The beautiful identity infrastructure becomes shelfware.

Low-friction integration matters as much as the identity infrastructure itself. The path to broad adoption isn't through mandates and security policies. It's through making the secure path the easy path.
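
As an illustration of what the easy path can look like, here is a sketch using the go-spiffe v2 library: the application gets an mTLS client with automatically fetched and rotated credentials in a few lines, and never handles a raw secret. The SPIFFE ID and URL are illustrative, and the example assumes the Workload API socket is advertised via the SPIFFE_ENDPOINT_SOCKET environment variable.

```go
package main

import (
	"context"
	"log"
	"net/http"

	"github.com/spiffe/go-spiffe/v2/spiffeid"
	"github.com/spiffe/go-spiffe/v2/spiffetls/tlsconfig"
	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx := context.Background()

	// One source handles fetching and rotating credentials; the application
	// never sees a long-lived secret.
	source, err := workloadapi.NewX509Source(ctx)
	if err != nil {
		log.Fatalf("unable to create X509Source: %v", err)
	}
	defer source.Close()

	// Only talk to the service we expect (SPIFFE ID is illustrative).
	serverID := spiffeid.RequireFromString("spiffe://example.org/payments")

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: tlsconfig.MTLSClientConfig(source, source, tlsconfig.AuthorizeID(serverID)),
		},
	}

	resp, err := client.Get("https://payments.internal/health")
	if err != nil {
		log.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```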


Observability becomes a compliance and risk requirement

Here's what changes when you have proper workload identity everywhere. You can finally answer the questions that auditors and risk teams actually care about. Not "do you have security controls" but "can you prove what happened when something went wrong."

Most organizations can't answer basic questions about their systems under audit. Which service accessed customer PII last Tuesday? What credential was used? Was that access authorized by policy or did it slip through a gap? When that anomalous database query ran at 3am, what identity made the request?

Without workload identity, these questions require forensic archaeology across log files, correlating IP addresses with network maps, guessing which service account might have been shared across three teams. With SPIFFE-based identity, every action has a cryptographic identity attached. The audit trail exists by default.
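
To illustrate what "the audit trail exists by default" can mean in practice, here is a sketch that extracts the caller's SPIFFE ID from the mTLS peer certificate and emits a structured audit record for every authorization decision. The record fields and log shape are assumptions, not a standard.

```go
package audit

import (
	"crypto/x509"
	"errors"
	"log/slog"
	"time"

	"github.com/spiffe/go-spiffe/v2/spiffeid"
)

// Record is one structured audit entry: which identity did what, to which
// resource, whether policy allowed it, and when.
type Record struct {
	SPIFFEID string
	Action   string
	Resource string
	Allowed  bool
	Time     time.Time
}

// IDFromPeerCert extracts the caller's SPIFFE ID from an mTLS peer certificate.
// SPIFFE X.509 SVIDs carry the SPIFFE ID as a URI SAN.
func IDFromPeerCert(cert *x509.Certificate) (spiffeid.ID, error) {
	if len(cert.URIs) == 0 {
		return spiffeid.ID{}, errors.New("peer certificate has no URI SAN")
	}
	return spiffeid.FromURI(cert.URIs[0])
}

// Emit writes the record as structured output, so "agent Y with SPIFFE ID Z
// accessed resource A at timestamp T" is a log query, not an investigation.
func Emit(rec Record) {
	slog.Info("authz_decision",
		"spiffe_id", rec.SPIFFEID,
		"action", rec.Action,
		"resource", rec.Resource,
		"allowed", rec.Allowed,
		"time", rec.Time.UTC().Format(time.RFC3339),
	)
}
```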

This matters more as regulatory scrutiny intensifies. GDPR Article 30 requires records of processing activities. SOC 2 demands you can trace access to sensitive data. FedRAMP requires you prove least privilege is actually enforced, not just documented in a policy somewhere.

But beyond compliance checkbox exercises, proper observability changes your risk posture fundamentally. When an AI agent does something unexpected, you need forensic-grade visibility. Not guesswork about what might have happened. Concrete evidence of which agent, operating under what identity, made what sequence of decisions, evaluated against which policies.

The difference between "we think service X did this" and "agent Y with SPIFFE ID Z accessed resource A at timestamp T" is the difference between fumbling in the dark and actually understanding your systems. For AI agents operating at machine speed, that understanding isn't optional anymore.


The window is closing

We're moving toward a world where AI agents become standard infrastructure. Making decisions, taking actions, orchestrating workflows across systems. To operate safely in that world, we need to be able to reason about what these ephemeral agents are doing, anchored to a trusted identity source.

We have the primitives. SPIFFE gives us workload identity. Policy-based authorization gives us fine-grained control. Zero trust architecture gives us the framework. The challenge is that we actually have to implement them properly this time. Because the AI agents coming online won't give us the benefit of human judgment to paper over our architectural mistakes.

Workload Identity Day at KubeCon made one thing abundantly clear. The time to solve this isn't when AI agents are everywhere. It's now, while we still have the luxury of thinking through the implications before they're operating in production. Because once they are, it'll be too late to realize we built the foundation on sand.

Cerbos provides policy-based, context-aware authorization designed for a world where not every identity has a human attached. Learn how teams are implementing fine-grained authorization that's ready for autonomous agents with Cerbos.


Book a free Policy Workshop to discuss your requirements and get your first policy written by the Cerbos team