How Are You Handling Identities for AI Agents?
From what I’ve seen, many treat agents like microservices, giving them app-style identities, but that feels off to me. That model comes from Web2 application identity systems, and I’m not sure it fits the new context we’re entering.
As we move into the AI age, I suspect we’ll need new forms of identity and authorization specifically designed for agents, especially since existing frameworks like OIDC have some clear limitations.
Would love to hear your thoughts or see what others are experimenting with.
The author is seeking opinions on how to manage identities for AI agents, sparking a discussion on the limitations of current identity frameworks and potential new approaches.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion: 2.3 comments per period on average, with a peak of 5 comments in the 1-2h window.
Key moments
- Story posted: Nov 1, 2025 at 10:02 AM EDT
- First comment: Nov 1, 2025 at 11:06 AM EDT (1h after posting)
- Peak activity: 5 comments in the 1-2h window (hottest stretch of the conversation)
- Latest activity: Nov 1, 2025 at 5:29 PM EDT
The microservice identity model breaks down when you have chains of agents, each potentially operating with different levels of autonomy and trust. OIDC was designed for human-to-service flows, not for dynamic agent-to-agent delegation where the context, scope, and risk profile can shift rapidly. I've been thinking we might need something closer to capability-based security or macaroons—where delegation is explicit, scoped, and auditable at each step. The key difference: instead of "who is this agent?" we should be asking "what specific action is this agent authorized to perform right now, and who in the chain vouches for it?"
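To make that concrete, here is a minimal sketch of macaroon-style delegation using the pymacaroons library. The caveat strings, secret key, and the orchestrator/sub-agent roles are illustrative assumptions, not anything proposed in the thread; the point is only that each narrowing of authority is an explicit, verifiable step.

```python
# Sketch: capability-style delegation with macaroons (pymacaroons).
# An orchestrator mints a broad capability, then attenuates it with
# caveats before handing it to a downstream agent. The verifier checks
# the whole chain, so every narrowing step is explicit and auditable.
from pymacaroons import Macaroon, Verifier

SECRET = "root-signing-key-known-only-to-the-issuer"  # illustrative

# Orchestrator mints a capability scoped to one tool.
root = Macaroon(location="https://tools.example",
                identifier="agent-capability-v1",
                key=SECRET)
root.add_first_party_caveat("tool = calendar")

# Before delegating to a sub-agent, copy and attenuate further:
# read-only and time-boxed.
delegated = Macaroon.deserialize(root.serialize())
delegated.add_first_party_caveat("action = read")
delegated.add_first_party_caveat("expires = 2025-11-01T18:00:00Z")

# The resource server verifies the capability, not "who" the agent is.
v = Verifier()
v.satisfy_exact("tool = calendar")
v.satisfy_exact("action = read")
# A real verifier would parse and compare the expiry; this just accepts
# any well-formed expires caveat for brevity.
v.satisfy_general(lambda caveat: caveat.startswith("expires = "))

assert v.verify(delegated, SECRET)  # read capability accepted
# A caveat the verifier does not satisfy (e.g. "action = write") would fail.
```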
I have been experimenting with SPIFFE/SPIRE for agent identity and exploring the use of verifiable credentials for delegation chains.
Verifiable Credentials (VCs) solve a different problem. They’re decentralized, flexible, and can express explicit delegation chains like “A asserts B may perform X.” That’s capability-style reasoning, not identity issuance.
Trying to bolt VC-style delegation onto SPIFFE breaks both systems’ assumptions:
- SPIFFE’s hierarchical trust model doesn’t mesh with the web-of-trust VC model.
- Its short-lived SVIDs don’t persist long enough for meaningful delegation chains.
- SPIRE doesn’t understand VC proofs (JSON-LD, linked data signatures).
- You’d need a whole external policy and capability layer to make it work.
SPIFFE nails workload identity; VCs and capability systems handle delegation and contextual authority. Mixing them because “they both do identity” misses the point—they live at different layers of the trust stack.
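As a rough illustration of the "different layers" point, here are the two kinds of statements as plain data, independent of any particular library or spec; all field names are hypothetical.

```python
# Sketch of the two layers the comment distinguishes
# (field names are hypothetical, not from any spec).

# Layer 1: workload identity -- what SPIFFE answers. Just "which workload
# is this", attested and short-lived; it says nothing about permissions.
spiffe_id = "spiffe://example.org/agents/research-assistant"

# Layer 2: capability / delegation -- what VC- or ZCAP-style systems answer.
# "A asserts B may perform X", with scope and an explicit chain to audit.
delegation = {
    "issuer": "spiffe://example.org/agents/orchestrator",          # A
    "subject": "spiffe://example.org/agents/research-assistant",   # B
    "capability": {"action": "search", "resource": "docs/internal"},  # X
    "expires": "2025-11-01T18:00:00Z",
    "parent": None,  # link to the capability this one was attenuated from
    "proof": "<signature by the issuer's key>",
}
```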
A better model is to separate identity from capability:
- SPIFFE/SPIRE handles who the agent is (short-lived, attested identity).
- Capabilities / Macaroons / ZCAP-LD handle what that agent is allowed to do, and who delegated it.
- OPA or Cedar enforces policy at runtime.
- VCs come in only if you need cross-domain delegation (federated or multi-issuer trust).
So SPIFFE issues identities, and those identities mint or receive verifiable capabilities that describe explicit rights. You get composable, auditable delegation without breaking SPIFFE’s short-lived cert model or pretending it can do web-of-trust semantics.
Trying to bake delegation into SPIFFE itself is just reimplementing capability security badly.
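Putting the pieces together, here is a rough sketch of how that layering could compose at runtime. The function names, token shape, and HMAC signing are assumptions for illustration only; the `authorize` check stands in for whatever an OPA or Cedar policy would actually evaluate, and none of this is a real SPIRE/OPA/Cedar API.

```python
# Sketch of the layered model: identity (SPIFFE ID) -> capability (signed,
# scoped, short-lived) -> runtime policy check. Illustrative only.
import hashlib, hmac, json, time

ISSUER_KEY = b"orchestrator-signing-key"  # in practice tied to the issuer's SVID

def mint_capability(issuer_spiffe_id: str, subject_spiffe_id: str,
                    action: str, resource: str, ttl_s: int = 300) -> dict:
    """The issuing identity mints a scoped, short-lived capability for another agent."""
    body = {
        "iss": issuer_spiffe_id,
        "sub": subject_spiffe_id,
        "act": action,
        "res": resource,
        "exp": time.time() + ttl_s,
    }
    mac = hmac.new(ISSUER_KEY, json.dumps(body, sort_keys=True).encode(), hashlib.sha256)
    return {"body": body, "sig": mac.hexdigest()}

def authorize(capability: dict, caller_spiffe_id: str, action: str, resource: str) -> bool:
    """Runtime check: valid signature, unexpired, presented by the subject, scope matches."""
    body = capability["body"]
    mac = hmac.new(ISSUER_KEY, json.dumps(body, sort_keys=True).encode(), hashlib.sha256)
    return (
        hmac.compare_digest(mac.hexdigest(), capability["sig"])
        and time.time() < body["exp"]
        and body["sub"] == caller_spiffe_id
        and body["act"] == action
        and body["res"] == resource
    )

# Usage: the orchestrator (identified by its SPIFFE ID) delegates a narrow right.
cap = mint_capability(
    "spiffe://example.org/agents/orchestrator",
    "spiffe://example.org/agents/research-assistant",
    action="read", resource="calendar",
)
assert authorize(cap, "spiffe://example.org/agents/research-assistant", "read", "calendar")
assert not authorize(cap, "spiffe://example.org/agents/research-assistant", "write", "calendar")
```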