Once you have a high-level strategy in place, the next step is to define the principles that guide how you secure non-human identities and AI agents in practice. These principles form the foundation of day-to-day security work. In our previous article, we focused on strategy, a good first read if you are wearing the CISO or software architecture hat. This guide goes deeper into the technical side.
This article is a part of a larger guide on securing Non-Human Identities. If you're interested in tackling NHIs & AI agent risks and choosing the right toolkit, you can get the full ebook here. Now, let’s get back to the NHI security 🤖
The first step is to create and maintain a full inventory of every NHI and AI agent across your environment, as well as to find all the credentials issued for each of those identities, and how they are used. This reduces the chance of shadow identities or orphaned credentials being left unmanaged. An inventory is more than just a list. It has to be tied to how credentials are issued, rotated, and retired.
Practical steps:
Every service should have its own identity. Reused or shared identities across environments make it impossible to trace activity and open the door to lateral movement.
Practical steps:
In 2022, attackers used stolen Slack employee tokens to access the company’s private GitHub repositories. The problem was that those tokens were broad and not scoped to a single service, which meant one compromise unlocked multiple repos.
This shows why unique, dedicated machine identities are essential! If every service had its own tightly scoped identities, the attackers would not have been able to move so widely.
According to Software Analyst Cyber Research, 71% of ransomware attacks leveraged credential access tactics, and over 80% of cyberattacks involved compromised credentials. That makes over-scoped machine identities one of the greatest vulnerabilities in your infra. And yet overprivileged NHIs are still surprisingly common.
Though it may seem easier to give NHIs wide privileges, the principle of least privilege should apply to NHIs just as it does to human identities. That means only granting NHIs / AI agents the minimum permissions necessary, for just the window required, to perform their function. This limits the damage that can occur if an identity is compromised or misused.
Practical steps:
For companies like e-Global, one of Latin America’s largest electronic payment processors, tight access controls aren’t optional; they’re critical.
Fine-grained authorization gives you the control you need to secure your machine and human identities. Evaluating many dynamic attributes like identity type, environment, data sensitivity, time of day, or location gives you the ability to define exactly who (or what) can access what, when, and under what conditions.
Practical steps:
Authorization logic should never live inside the agent or service. Authorization logic must be decoupled from the application code.
The best way to achieve this is to externalize all permission checks in an external policy decision point that evaluates every request in real time based on identity, action, and context. This separation ensures:
To ensure separation, follow these steps:
AI agents and RAG systems must only retrieve and return data that the user is authorized to see. Without strict controls, sensitive information can be leaked.
Practical steps:
service_role
/super keys in agents; use read-only credentials and short TTLs.In 2025, Supabase’s MCP integration was hit by a prompt-injection attack that made an AI agent leak credentials via a support ticket without tenant isolation. Hidden instructions in a support ticket tricked the Cursor agent into reading from the integration_tokens
table with a powerful service_role
key and posting the data back into the ticket. With a scoped, read-only setup or stronger authorization at the data layer, this would not have happened. Here is a very nice slide from our MCP webinar on exactly this breach:
When looking at AI risks tied to NHIs, prompts should always be treated as untrusted input. Without validation, attackers can trick AI agents into exposing data or performing unsafe actions.
Practical steps:
In 2023, the “grandma exploit” showed how easily prompts could bypass controls. Attackers tricked ChatGPT into revealing Windows license keys by wrapping the request in a story. Even though the keys were generic, it demonstrated how prompt injection can override safeguards.
AI agents should not be able to act without limits. Every external action must pass through an authorization check and adhere to strict constraints.
Practical steps:
This is already happening. In 2025, “vibe-hacking” attacks showed how agents like Anthropic’s Claude could be turned into tools for cybercrime, planning and carrying out attacks with little input. Without clear policies and human oversight, agent actions can quickly spiral into breaches that are hard to stop.
No matter how many automated controls you have in place, human oversight is still essential to maintain secure AI and NHI workflows. That's why you need to integrate human monitoring into your AI systems.
Practical steps:
Here is a story to showcase why this is so important. In 2023, Samsung engineers pasted proprietary source code into ChatGPT to debug problems (well, we all use ChatGPT now, right?). Without oversight, sensitive IP was exposed outside the company. Stronger review and guidance would have definitely prevented that.
Visibility is non-negotiable. Every NHI action must be logged immutably and monitored in real time to detect misuse.
Practical steps:
Cloudflare’s 2023 compromise lasted longer than it should have because unrotated tokens with weak logging gave attackers room to operate. Stronger audit trails and anomaly detection would have caught it sooner. As Branden Wagner, Head of Information Security at Mercury, mentioned, compliance only sets the baseline - it is the auditability and controls that turn requirements into real defenses against incidents like these.
Security should be built into the development lifecycle, not bolted on later. Catch issues in CI/CD pipelines before they reach production.
Practical steps:
The six-step NHI strategy, and practical principles above, are directly connected. Each supports the other in the implementation of NHI security, as you can see in the table below:
NHI strategy | NHI principles |
---|---|
High-level plan to achieve secure, scalable management of NHIs across the organization. | Specific foundational tactics that guide how NHI security should be implemented. |
Tied to business and technical goals (e.g., reduce NHI sprawl, enforce compliance, reduce blast radius). | Tied to execution (e.g., use fine-grained access, separate human and machine credentials, centralize policy logic). |
Evolves with risk landscape and company maturity; owned by CISOs. | Stays relatively stable and informs tool/process decisions; owned by the security & engineering teams. |
If you want to dive deeper into NHI security practices:
I hope our 2-series guide was helpful to you.
Book a free Policy Workshop to discuss your requirements and get your first policy written by the Cerbos team
Join thousands of developers | Features and updates | 1x per month | No spam, just goodies.