NHI security: How to manage non-human identities and AI agents

Published by Emre Baran on October 03, 2025

Once you have a high-level strategy in place, the next step is to define the principles that guide how you secure non-human identities and AI agents in practice. These principles form the foundation of day-to-day security work. Our previous article focused on strategy and is a good first read if you are wearing the CISO or software architect hat. This guide goes deeper into the technical side.

[Image: Practical principles for NHI and AI security]

This article is part of a larger guide on securing non-human identities. If you're interested in tackling NHI and AI agent risks and choosing the right toolkit, you can get the full ebook here. Now, let’s get back to NHI security 🤖

 

Map and manage all your non-human identities

The first step is to create and maintain a full inventory of every NHI and AI agent across your environment, along with every credential issued for each of those identities and how it is used. This reduces the chance of shadow identities or orphaned credentials being left unmanaged. An inventory is more than just a list: it has to be tied to how credentials are issued, rotated, and retired.

Practical steps:

  • Store and rotate credentials automatically using secure vaults. Short-lived, ephemeral credentials reduce exposure when keys are leaked.
  • Manage identity provisioning and permissions through infrastructure-as-code. This makes changes consistent, trackable, and less prone to drift.
  • Implement a joiner-mover-leaver (JML) workflow for NHIs (a minimal sketch follows this list).
    • Joiner: Provision the identity via IaC with an owner, purpose, environment, default TTL, and least-privilege scope. Issue short-lived credentials by default (e.g., SPIFFE/SPIRE SVIDs or cloud managed identities).
    • Mover: On deployment, ownership, or environment changes, automatically rotate credentials and reconcile permissions using policy/IaC pipelines. Review entitlements regularly.
    • Leaver: When a service is retired, revoke all tokens and certificates, delete or disable service accounts, and archive logs for audit. In Kubernetes, deleting and recreating the ServiceAccount invalidates short-lived tokens.
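
To make this concrete, here is a minimal sketch of what an inventory entry might look like once it is tied to credential lifecycle rather than just a name. The record shape, field names, and helper functions are illustrative, not any specific tool’s API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable, Optional

# Hypothetical inventory record: every NHI is tied to an owner, a purpose,
# an environment, and the lifecycle of its credentials -- not just a name.
@dataclass
class NHIRecord:
    identity_id: str               # e.g. spiffe://prod/billing-worker
    owner: str                     # accountable team or person
    purpose: str                   # why this identity exists
    environment: str               # dev / test / prod
    credential_ttl: timedelta = timedelta(hours=1)
    issued_at: Optional[datetime] = None

    def needs_rotation(self, now: datetime) -> bool:
        """Flag credentials that have outlived their TTL."""
        return self.issued_at is None or now - self.issued_at >= self.credential_ttl

# Leaver step: retiring a service means revoking everything it held
# and keeping the audit trail.
def retire(record: NHIRecord,
           revoke: Callable[[str], None],
           archive_logs: Callable[[str], None]) -> None:
    revoke(record.identity_id)        # invalidate tokens and certificates
    archive_logs(record.identity_id)  # archive logs for later audits
```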

 

Use unique, dedicated machine identities

Every service should have its own identity. Reused or shared identities across environments make it impossible to trace activity and open the door to lateral movement.

Practical steps:

  • Do not share identities across dev, test, and prod. Compromising a low-security environment should not give an attacker access to production.
  • Require every request to carry identity metadata so actions can always be attributed to a specific service (see the token-check sketch after this list).
  • Block human use of machine credentials. A developer logging in with a service token eliminates accountability and bypasses audit trails. Block the reverse scenario as well: giving a process a set of user credentials is even more dangerous.
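
For illustration, here is one way the metadata check might look, assuming tokens are JWTs and using PyJWT. The issuer URL and the claim names (sub, env) are assumptions for the sketch:

```python
import jwt  # PyJWT

MACHINE_ISSUER = "https://identity.internal"  # assumed internal token issuer

def attribute_request(token: str, public_key: str) -> str:
    """Reject requests whose tokens lack machine-identity metadata."""
    claims = jwt.decode(token, public_key, algorithms=["RS256"],
                        audience="orders-api")
    # Assumed claim layout: sub names the workload, env names the environment.
    if claims.get("iss") != MACHINE_ISSUER or "sub" not in claims:
        raise PermissionError("request cannot be attributed to a service")
    if claims.get("env") != "prod":
        raise PermissionError("cross-environment credential use blocked")
    return claims["sub"]  # the service every action gets attributed to
```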

In 2022, attackers used stolen Slack employee tokens to access the company’s private GitHub repositories. The problem was that those tokens were broad and not scoped to a single service, which meant one compromise unlocked multiple repos.

This shows why unique, dedicated machine identities are essential. If every service had its own tightly scoped identity, the attackers would not have been able to move so widely.

 

Enforce least privilege

According to Software Analyst Cyber Research, 71% of ransomware attacks leveraged credential access tactics, and over 80% of cyberattacks involved compromised credentials. That makes over-scoped machine identities one of the greatest vulnerabilities in your infrastructure. And yet overprivileged NHIs are still surprisingly common.

Though it may seem easier to give NHIs wide privileges, the principle of least privilege should apply to NHIs just as it does to human identities. That means only granting NHIs and AI agents the minimum permissions necessary, for just the window required, to perform their function. This limits the damage that can occur if an identity is compromised or misused.

Practical steps:

  • Start new identities with no access, then add permissions only as needed.
  • Continuously review and remove unused permissions, or use policy to make permissions dynamic based on time, action, and other environmental factors (a minimal sketch of both ideas follows this list).
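
Here is a minimal sketch of both ideas, assuming a hypothetical in-memory grant store: identities start with no grants, and every grant carries a validity window so unused access expires on its own:

```python
from datetime import datetime, timezone

# Hypothetical grant store: permissions start empty and are added explicitly,
# each with a validity window so access expires instead of lingering.
GRANTS = {
    "report-generator": [
        {"action": "read", "resource": "sales-db",
         "not_before": datetime(2025, 10, 1, tzinfo=timezone.utc),
         "expires":    datetime(2025, 10, 8, tzinfo=timezone.utc)},
    ],
}

def is_allowed(identity: str, action: str, resource: str) -> bool:
    now = datetime.now(timezone.utc)
    return any(
        g["action"] == action and g["resource"] == resource
        and g["not_before"] <= now < g["expires"]
        for g in GRANTS.get(identity, [])  # unknown identities get nothing
    )
```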

For companies like e-Global, one of Latin America’s largest electronic payment processors, tight access controls aren’t optional; they’re critical.

Quote for Daniel Guerra (1).png

 

Apply granular, context-aware access controls

Fine-grained authorization gives you the control you need to secure your machine and human identities. Evaluating many dynamic attributes like identity type, environment, data sensitivity, time of day, or location gives you the ability to define exactly who (or what) can access what, when, and under what conditions.

Practical steps:

  • Every NHI should have tightly scoped permissions specific to its job. Granular policies ensure that machine identities can only perform approved actions on approved resources within defined conditions. Model explicit delegation chains so services can act “on behalf of” others with narrow scope and short TTLs (a delegation sketch follows this list).
  • Define permissions with attribute-based access control.
  • Restrict access based on runtime context like environment, IP, or session metadata.
  • Default to policy-based access control (PBAC) for NHIs. Define what a service can do up front. Use just-in-time (JIT) access mainly for human admins, and only via a vetted workflow (approval, MFA, justification, short TTL, Conditional Access). Treat JIT as a controlled exception to policy, not a parallel path.
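
As an illustration of the delegation point, here is a sketch that mints a short-lived “on behalf of” token with PyJWT, following the spirit of the act claim from OAuth 2.0 token exchange (RFC 8693). The claim layout and TTL are illustrative:

```python
import time
import jwt  # PyJWT

def mint_delegated_token(signing_key: str, acting_service: str,
                         user_id: str, scope: str, ttl_seconds: int = 300) -> str:
    """Mint a short-lived token that records who is acting for whom."""
    now = int(time.time())
    return jwt.encode(
        {
            "sub": user_id,                 # the principal being acted for
            "act": {"sub": acting_service}, # the delegate, kept in the chain
            "scope": scope,                 # one narrow permission, not "*"
            "iat": now,
            "exp": now + ttl_seconds,       # short TTL limits replay damage
        },
        signing_key,
        algorithm="HS256",
    )
```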

 

Externalize authorization from application code

Authorization logic should never live inside the agent or service itself; it must be decoupled from the application code.

The best way to achieve this is to externalize all permission checks to a policy decision point (PDP) that evaluates every request in real time based on identity, action, and context. This separation ensures:

  • Policies are applied consistently across services, environments, and identity types
  • Agents aren't able to alter their own access logic

To ensure separation, follow these steps:

  • Use a distributed authorization service as the single source of truth (a PDP call is sketched after this list).
  • Block all actions that are not explicitly defined in policy.
  • Keep policies dynamic so they can be updated without code changes.
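
Here is what such a check can look like using the Cerbos Python SDK as the PDP, as one example of an externalized decision point. The calls follow the SDK’s README; the resource kind, attributes, and PDP address are assumptions for the sketch:

```python
from cerbos.sdk.client import CerbosClient
from cerbos.sdk.model import Principal, Resource

def fetch_invoice(service_id: str, invoice_id: str):
    principal = Principal(
        service_id,
        roles={"service"},        # a machine identity, not a user
        attr={"env": "prod"},     # runtime context for policy conditions
    )
    resource = Resource(invoice_id, "invoice", attr={"region": "eu"})

    with CerbosClient("http://localhost:3592") as cerbos:
        # The decision lives in policy, outside the application code;
        # anything not explicitly allowed comes back as a deny.
        if not cerbos.is_allowed("read", principal, resource):
            raise PermissionError(f"{service_id} may not read {invoice_id}")
    # ...actual data access happens only after the PDP allows it...
```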

[Image: Externalize authorization from application code]

 

Secure data access and prevent output leakage

AI agents and RAG systems must only retrieve and return data that the user is authorized to see. Without strict controls, sensitive information can be leaked.

Practical steps:

  • Push authorization into the data layer (partial evaluation). Define ABAC policies once, then use partial evaluation to compile them into DB-native predicates or RLS policies, so filtering happens before data leaves the store. For vector search, compile policy to metadata filters or tenant scopes the engine understands (a toy compilation example follows this list).
  • Treat vector databases and embedding stores as sensitive systems and enforce strong access controls.
    • Isolate tenants: use namespaces/tenants per customer/workspace.
    • Enforce pre-retrieval filters: always include metadata/where filters on vector queries.
    • If you use pgvector: enforce Postgres RLS on embedding tables.
    • Lock down ops roles: forbid service_role/super keys in agents; use read-only credentials and short TTLs.
    • Use built-in RBAC where available.
  • Filter all queries through authorization checks before data is retrieved.
  • Apply output sanitization to catch and block sensitive data before it leaves the system.
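
To illustrate partial evaluation, here is a toy sketch: once the caller’s attributes are substituted into an ABAC rule, what remains is a residual predicate the database can enforce itself. compile_policy_to_sql is hypothetical, not a real library call:

```python
# Toy compiler for the rule:
#   allow read where doc.tenant_id == principal.tenant_id
#                and doc.classification <= principal.clearance
# Substituting the caller's attributes leaves a residual SQL predicate,
# so the database filters rows before anything leaves the store.
def compile_policy_to_sql(principal: dict) -> str:
    # In real code, emit parameterized queries instead of string formatting.
    return (
        f"tenant_id = '{principal['tenant_id']}' "
        f"AND classification <= {int(principal['clearance'])}"
    )

predicate = compile_policy_to_sql({"tenant_id": "acme", "clearance": 2})
query = f"SELECT id, body FROM documents WHERE {predicate}"
# -> SELECT id, body FROM documents
#    WHERE tenant_id = 'acme' AND classification <= 2
```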

In 2025, Supabase’s MCP integration was hit by a prompt-injection attack that made an AI agent leak credentials. Hidden instructions in a support ticket tricked the Cursor agent into reading the integration_tokens table with a powerful service_role key and posting the data back into the same ticket, because there was no tenant isolation. With a scoped, read-only setup or stronger authorization at the data layer, this would not have happened. Here is a slide from our MCP webinar on exactly this breach:

[Image: slide on the Supabase MCP prompt-injection attack]

 

Validate and sanitize AI prompts

When looking at AI risks tied to NHIs, prompts should always be treated as untrusted input. Without validation, attackers can trick AI agents into exposing data or performing unsafe actions.

Practical steps:

  • Filter prompts for malicious patterns and out-of-scope instructions. Consider using a small language model (SLM) to categorize and then authorize prompts.
  • Define clear rules for what input is acceptable. Block patterns like “ignore previous instructions” or “reveal hidden instructions”, and maintain tests using OWASP LLM01 examples (a naive pattern filter is sketched after this list).
  • Lock down system prompts. Keep them server-side, versioned, and RBAC-controlled. Never interpolate user content into them. Add change approvals and unit tests to detect prompt leakage.
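
As a starting point, here is a naive pattern-based filter. Treat it as a cheap first layer only; real prompt-injection defense needs a classifier and ongoing tests, since attacks rarely match fixed strings:

```python
import re

# Naive first-layer filter; pair it with a classifier in production.
BLOCKED_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (the |your )?(hidden |system )?(instructions|prompt)",
    r"you are now .{0,40}(unrestricted|jailbroken)",
]

def validate_prompt(prompt: str, max_len: int = 4000) -> str:
    if len(prompt) > max_len:
        raise ValueError("prompt exceeds accepted length")
    lowered = prompt.lower()
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("prompt matches a blocked injection pattern")
    return prompt  # passed the cheap checks; still untrusted input
```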

In 2023, the “grandma exploit” showed how easily prompts could bypass controls. Attackers tricked ChatGPT into revealing Windows license keys by wrapping the request in a story. Even though the keys were generic, it demonstrated how prompt injection can override safeguards.

 

Define boundaries for agent actions

AI agents should not be able to act without limits. Every external action must pass through an authorization check and adhere to strict constraints.

Practical steps:

  • Assign scoped identities to AI agents so their actions are traceable and limited.
  • Require human approval for high-risk or sensitive operations (a minimal approval gate is sketched after this list).
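
A minimal sketch of such a gate, with hypothetical request_approval and run callables standing in for your ticketing and execution layers:

```python
from typing import Callable

# Hypothetical gate: every external action passes a check, and anything
# tagged high-risk is parked until a human approves it.
HIGH_RISK_ACTIONS = {"delete_records", "transfer_funds", "send_external_email"}

def execute_agent_action(agent_id: str, action: str, params: dict,
                         request_approval: Callable, run: Callable):
    if action in HIGH_RISK_ACTIONS:
        # request_approval blocks until a reviewer decides (or times out).
        ticket = request_approval(agent_id, action, params)
        if not ticket.approved:
            raise PermissionError(f"{action} by {agent_id} rejected by reviewer")
    return run(action, params)
```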

This is already happening. In 2025, “vibe-hacking” attacks showed how agents like Anthropic’s Claude could be turned into tools for cybercrime, planning and carrying out attacks with little human input. Without clear policies and human oversight, agent actions can quickly spiral into breaches that are hard to stop.

 

Establish human oversight for critical operations

No matter how many automated controls you have in place, human oversight is still essential to maintain secure AI and NHI workflows. That's why you need to integrate human monitoring into your AI systems.

Practical steps:

  • Review AI agent outputs regularly.
  • Keep logs transparent and human-readable.
  • Maintain a kill switch for disabling agents that misbehave (a sketch follows this list).
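
A minimal kill-switch sketch; an environment variable stands in for whatever centrally controlled flag store you actually use:

```python
import os

# Hypothetical kill switch: agents check a centrally controlled flag before
# every action, so operators can halt a misbehaving agent immediately.
def kill_switch_engaged(agent_id: str) -> bool:
    # An env var keeps the sketch self-contained; in practice read a flag
    # service or config store that operators can flip at runtime.
    disabled = os.environ.get("DISABLED_AGENTS", "").split(",")
    return agent_id in disabled

def agent_step(agent_id: str, act):
    if kill_switch_engaged(agent_id):
        raise RuntimeError(f"agent {agent_id} halted by kill switch")
    return act()
```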

Here is a story that shows why this is so important. In 2023, Samsung engineers pasted proprietary source code into ChatGPT to debug problems (well, we all use ChatGPT now, right?). Without oversight, sensitive IP was exposed outside the company. Stronger review and clearer guidance would almost certainly have prevented that.

 

Maintain audit trails and real-time monitoring

Visibility is non-negotiable. Every NHI action must be logged immutably and monitored in real time to detect misuse.

Practical steps:

  • Log all NHI and AI agent actions, including authentication, data access, and changes in permissions (a tamper-evident logging sketch follows this list).
  • Use anomaly detection to spot unusual patterns, like dormant accounts becoming active.
  • Monitor for identity drift as accounts accumulate privileges over time.
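
One way to make logs tamper-evident is hash chaining, sketched below: each entry embeds the hash of the previous one, so any deletion or edit breaks the chain. This illustrates the idea, not a full audit pipeline:

```python
import hashlib
import json
import time

def append_audit_event(log: list, actor: str, action: str, resource: str):
    """Append an entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {
        "ts": time.time(),
        "actor": actor,        # which NHI or agent acted
        "action": action,      # what it did
        "resource": resource,  # what it touched
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry
```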

Cloudflare’s 2023 compromise lasted longer than it should have because unrotated tokens with weak logging gave attackers room to operate. Stronger audit trails and anomaly detection would have caught it sooner. As Branden Wagner, Head of Information Security at Mercury, mentioned, compliance only sets the baseline - it is the auditability and controls that turn requirements into real defenses against incidents like these.

[Image: quote from Branden Wagner, Head of Information Security at Mercury]

 

Embed security in development and deployment

Security should be built into the development lifecycle, not bolted on later. Catch issues in CI/CD pipelines before they reach production.

Practical steps:

  • Train developers on secure identity and AI practices.
  • Integrate automated scans for secrets, misconfigured roles, and authorization policy tests into CI (example policy tests are sketched after this list).
  • Treat compliance checks as part of the release process.
  • Test AI and NHI security controls regularly with red-team exercises designed for these systems.
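
For example, authorization policy tests can run in CI like any other test suite. The sketch below uses pytest conventions and reuses the toy is_allowed check from the least-privilege sketch earlier in this article; the authz module is hypothetical:

```python
# pytest-style policy tests: assert deny-by-default before anything ships.
from authz import is_allowed  # hypothetical module holding the policy check

def test_unknown_identity_is_denied():
    # An identity with no explicit grants must get nothing.
    assert not is_allowed("unregistered-service", "read", "sales-db")

def test_no_unexpected_write_access():
    # The reporting job was granted read-only access; writes must fail.
    assert not is_allowed("report-generator", "write", "sales-db")
```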

 

Connecting NHI security strategy to execution principles

The six-step NHI strategy and the practical principles above are directly connected. Each supports the other in the implementation of NHI security, as the comparison below shows:

  • Scope: the strategy is a high-level plan to achieve secure, scalable management of NHIs across the organization; the principles are specific foundational tactics that guide how NHI security should be implemented.
  • Focus: the strategy is tied to business and technical goals (e.g., reduce NHI sprawl, enforce compliance, reduce blast radius); the principles are tied to execution (e.g., use fine-grained access, separate human and machine credentials, centralize policy logic).
  • Ownership: the strategy evolves with the risk landscape and company maturity and is owned by CISOs; the principles stay relatively stable, inform tool and process decisions, and are owned by the security and engineering teams.

If you want to dive deeper into NHI security practices:

  • Join our 1:1 workshop to talk through your NHI requirements, authorization models, and the ways Cerbos can support your needs. It’s an engineering session, led by our CPO.
  • Read our ebook on Securing AI agents and non-human identities in enterprises.
  • Or you can watch a recording of our recent webinar on NHI security. It was a great session where Alex, our CPO, explained how NHIs can quickly become a risk, and walked through practical ways to apply Zero Trust and fine-grained authorization.

I hope our two-part guide was helpful to you.

