CSRF Protection Without Tokens or Hidden Form Fields
Key topics
The debate rages on about the best way to protect against CSRF attacks, with a recent blog post sparking discussion on using Fetch Metadata as a top-level alternative to traditional token-based protection. Commenters weigh in on the role of OWASP guidelines, with some arguing that the organization's recommendations are often misinterpreted as a checkbox for compliance rather than a list of vulnerabilities to address. As one commenter astutely points out, having a specific CSRF defense in place isn't as important as not having CSRF vulnerabilities in the first place. The conversation highlights the nuances of web security and the ongoing quest for effective, practical protection measures.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 3d after posting
- Peak period: 61 comments (72-84h window)
- Avg / period: 22.6
- Based on 113 loaded comments
Key moments
- Story posted: Dec 22, 2025 at 12:38 AM EST (12 days ago)
- First comment: Dec 24, 2025 at 5:47 PM EST (3d after posting)
- Peak activity: 61 comments in the 72-84h window (hottest window of the conversation)
- Latest activity: Dec 27, 2025 at 10:59 AM EST (6d ago)
Unfortunately OWASP rules the world. Not because it's the best way to protect your apps, but because the corporate overlords in infosec teams need to check the box with "Complies with OWASP Top 10".
Unfortunately, the customer purchasing your product doesn’t know this and (naturally) trusts their own internal experts over you. Especially given all their other suppliers are more than happy to state they’re certified!
It's possible for a server to treat them as case sensitive, but that seems like a bad idea.
> Since when are they case sensitive?
[...]
When I originally read it hours ago, I also read it as "...HTTP headers are case sensitive," (emphasis mine).
In HTTP/2, header field names must be transmitted in lowercase, and two headers that differ only by casing are not distinct. [1]
In HTTP/1.x, header field names are case-insensitive for purposes of comparison and transmission. [2]
[1]: https://datatracker.ietf.org/doc/html/rfc7540#section-8.1.2
[2]: https://datatracker.ietf.org/doc/html/rfc2616#section-4.2
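As a small illustration of the practical takeaway, here is a minimal Python sketch (the helper and its inputs are hypothetical, not from the thread) of comparing header names case-insensitively so the same check works against HTTP/1.x and HTTP/2 traffic:

```python
# Minimal sketch: compare header names case-insensitively so a check for
# "Sec-Fetch-Site" also matches the lowercase "sec-fetch-site" form used by
# HTTP/2. The raw_headers list of (name, value) pairs is a hypothetical input.
def get_header(raw_headers, name):
    """Return the first header value whose name matches case-insensitively."""
    wanted = name.lower()
    for key, value in raw_headers:
        if key.lower() == wanted:
            return value
    return None

# Both spellings resolve to the same value.
h1 = [("Sec-Fetch-Site", "cross-site")]   # HTTP/1.x-style casing
h2 = [("sec-fetch-site", "cross-site")]   # HTTP/2 lowercase form
assert get_header(h1, "Sec-Fetch-Site") == get_header(h2, "Sec-Fetch-Site")
```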
This was actually a mistake. If you look at the OWASP cheat sheet today you will see that Fetch Metadata is a top-level alternative to the traditional token-based protection.
I'm not sure I understand why, but the cheat sheet page was modified twice. Fetch Metadata first entered the page as a top-level mention. Then someone slipped in a revision that downgraded it to defense in depth without anyone noticing. It has now been reverted back to the original version.
Some details on what happened are in this other discussion from a couple of days ago: https://news.ycombinator.com/item?id=46347280.
https://scotthelme.co.uk/csrf-is-dead/
But I didn't know about the Sec-Fetch-Site header, good to know.
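For readers who, like this commenter, are new to the header: below is a rough Flask-style sketch of the Sec-Fetch-Site approach discussed in the article. The app structure, the endpoint, and the decision to let header-less (legacy or non-browser) requests through are assumptions for illustration, not the article's exact code.

```python
# Sketch of a Sec-Fetch-Site based CSRF guard, assuming a Flask app.
from flask import Flask, abort, request

app = Flask(__name__)

SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}
# "none" means a user-initiated navigation (typed URL, bookmark).
# Whether to also allow "same-site" (i.e. subdomains) is a trust decision
# discussed further down in the thread.
ALLOWED_FETCH_SITES = {"same-origin", "same-site", "none"}

@app.before_request
def reject_cross_site_writes():
    if request.method in SAFE_METHODS:
        return  # reads should not change state
    fetch_site = request.headers.get("Sec-Fetch-Site")
    if fetch_site is None:
        return  # header absent: legacy client, fall back to other defenses
    if fetch_site not in ALLOWED_FETCH_SITES:
        abort(403, description="Cross-site request blocked")

@app.post("/transfer")
def transfer():
    # State-changing endpoint, protected by the hook above.
    return {"ok": True}
```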
They give 2 reasons why SameSite cookies are only considered defense in depth:
----
> Lax enforcement provides reasonable defense in depth against CSRF attacks that rely on unsafe HTTP methods (like "POST"), but does not offer a robust defense against CSRF as a general category of attack:
> 1. Attackers can still pop up new windows or trigger top-level navigations in order to create a "same-site" request (as described in section 2.1), which is only a speedbump along the road to exploitation.
> 2. Features like "<link rel='prerender'>" [prerendering] can be exploited to create "same-site" requests without the risk of user detection.
> When possible, developers should use a session management mechanism such as that described in Section 8.8.2 to mitigate the risk of CSRF more completely.
----
But that doesn't make any sense to me. I think "the robust solution" should be to just make sure that you're only performing potentially sensitive actions on POST or other mutable-method requests, and always setting the SameSite attribute. If that is true, there is absolutely no vulnerability if the user is using a browser from the past seven years or so. The 2 points noted in the above section would only lead to a vulnerability if you're performing a sensitive state-changing action on a GET. So rather than tell developers to implement a complicated "session management mechanism", it seems like it would make a lot more sense to just say don't perform sensitive state changes on a GET.
Am I missing something here? Do I not understand the potential attack vectors laid out in the 2 bullet points?
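For context on what "always setting the SameSite attribute" looks like in practice, here is a hedged Flask-style sketch (the cookie name and value are illustrative):

```python
# Sketch: issue the session cookie with SameSite=Lax so browsers omit it on
# cross-site POSTs and subresource requests, while still sending it on
# top-level navigations (links into the site keep you logged in).
from flask import Flask, make_response

app = Flask(__name__)

@app.get("/login")
def login():
    resp = make_response("logged in")
    resp.set_cookie(
        "session",
        "opaque-session-id",  # illustrative value
        samesite="Lax",
        secure=True,
        httponly=True,
    )
    return resp
```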
> The URI in the List-Unsubscribe header MUST contain enough information to identify the mail recipient and the list from which the recipient is to be removed, so that the unsubscription process can complete automatically. Since there is no provision for extra POST arguments, any information about the message or recipient is encoded in the URI. In particular, one-click has no way to ask the user what address or from what list the user wishes to unsubscribe.
> The POST request MUST NOT include cookies, HTTP authorization, or any other context information. The unsubscribe operation is logically unrelated to any previous web activity, and context information could inappropriately link the unsubscribe to previous activity.
> The URI SHOULD include an opaque identifier or another hard-to-forge component in addition to, or instead of, the plaintext names of the list and the subscriber. The server handling the unsubscription SHOULD verify that the opaque or hard-to-forge component is valid. This will deter attacks in which a malicious party sends spam with List-Unsubscribe links for a victim list, with the intention of causing list unsubscriptions from the victim list as a side effect of users reporting the spam, or where the attacker does POSTs directly to the mail sender's unsubscription server.
> The mail sender needs to provide the infrastructure to handle POST requests to the specified URI in the List-Unsubscribe header, and to handle the unsubscribe requests that its mail will provoke.
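A minimal sketch of the requirements quoted above (from RFC 8058), assuming a hypothetical HMAC-signed component in the URI so the POST needs no cookies or other context; the secret, route shape, and token format are illustrative:

```python
# Sketch: one-click unsubscribe endpoint. The POST carries no cookies or
# authorization; recipient and list are identified by an HMAC-signed token
# that was embedded in the URI when the email was generated.
import hashlib
import hmac

from flask import Flask, abort

app = Flask(__name__)
SECRET_KEY = b"rotate-me-regularly"  # illustrative

def sign(recipient: str, list_id: str) -> str:
    msg = f"{recipient}|{list_id}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

@app.post("/unsubscribe/<recipient>/<list_id>/<token>")
def one_click_unsubscribe(recipient, list_id, token):
    # Verify the hard-to-forge component before acting.
    if not hmac.compare_digest(token.encode(), sign(recipient, list_id).encode()):
        abort(403)
    # remove_subscription(recipient, list_id)  # application-specific
    return "", 204
```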
But cross origin form posts are and have always been permitted, and are the main route by which CSRF vulnerabilities arise. Nothing on the client or server needs to be enabled to allow these form posts.
Furthermore, the approach detailed in the article simply has the server block requests if they are cross site/origin requests, so I'm not sure what the semantic difference is.
I feel like people are just parroting the OWASP "they're just defense in depth!" line without understanding what the actual underlying vulnerabilities are, namely:
1. If you're performing a sensitive operation on a GET, you're in trouble. But I think that is a bigger problem and you shouldn't do that.
2. If a user is on a particularly old browser they may be unprotected, but SameSite support has been available in all major browsers for nearly a decade, so I think that point is moot.
The problem I have with the "it's just defense in depth" line is people don't really understand how it protects against any underlying vulnerabilities. In that case, CSRF tokens add complexity without actually making you any safer.
I'd be happy to learn why my thinking is incorrect, i.e. where there's a vulnerability lurking that I'm not thinking of if you use SameSite Lax and only perform state changes on mutable methods.
SameSite or not is inconsequential to the check a backend does for a CSRF token in the POST.
I’m not being rude, what does it mean to unexpectedly carry cookies? That’s not what I understand the risk of CSRF is.
My understanding is that we want to ensure a POST came from our website and we do so with a double signed HMAC token that is present in the form AND the cookie, which is also tied to the session.
What on earth is unexpectedly carrying cookies?
The core idea behind the token-based defense is to prove that the requesting page had access to the value in the first place, such that it sent it deliberately rather than the browser adding it automatically.
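A rough sketch of the double-submit pattern this sub-thread describes, where the form field must match a value derived from the session; the cookie name, field name, and key are assumptions:

```python
# Sketch: the form must carry a token the page obtained deliberately (from
# the server or a readable cookie); the browser never adds it automatically
# the way it adds cookies, which is what the check relies on.
import hashlib
import hmac

from flask import Flask, abort, request

app = Flask(__name__)
CSRF_KEY = b"server-side-secret"  # illustrative

def csrf_token_for(session_id: str) -> str:
    return hmac.new(CSRF_KEY, session_id.encode(), hashlib.sha256).hexdigest()

@app.post("/settings")
def update_settings():
    session_id = request.cookies.get("session", "")
    submitted = request.form.get("csrf_token", "")
    if not hmac.compare_digest(submitted.encode(), csrf_token_for(session_id).encode()):
        abort(403)  # token missing or not tied to this session
    return {"ok": True}
```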
I tend to agree that the inclusion of cookies in cross-site requests is the wrong default. Using same-site fixes the problem at the root.
The general recommendation I saw is to have two cookies. One without SameSite for read operations, which lets you gracefully handle users navigating to your site, and a second SameSite cookie for state-changing operations.
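A hedged sketch of that two-cookie setup (cookie names and endpoints are made up): one cookie without an explicit SameSite restriction so arriving via an external link still looks logged in, plus a SameSite=Strict cookie that write endpoints require.

```python
# Sketch of the two-cookie recommendation: a read cookie without an explicit
# SameSite attribute (modern browsers default it to Lax), and a separate
# SameSite=Strict cookie that state-changing endpoints require.
from flask import Flask, abort, make_response, request

app = Flask(__name__)

@app.get("/login")
def login():
    resp = make_response("logged in")
    resp.set_cookie("session_read", "sid", secure=True, httponly=True)
    resp.set_cookie("session_write", "sid", samesite="Strict",
                    secure=True, httponly=True)
    return resp

@app.post("/account/delete")
def delete_account():
    # The strict cookie is absent on requests initiated from other sites.
    if not request.cookies.get("session_write"):
        abort(403)
    return {"ok": True}
```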
It just feels like all these HTTP specs are duct-taped together. I guess that is the only way to ensure mass adoption for new devs and now vibe coders.
Cookies can still only be sent to the site that originally wrote them, and they can only be read by the originating site, and this was always the case. The problem, though, is that a Bad Guy site could submit a form post to Vulnerable Site, and originally the browser would still send any cookies of Vulnerable Site with the request. Your comment about "if the domain name is in the cookie value" doesn't change this and the problem still exists. "Yes you can configure the dns to bypass that" also doesn't make any sense in this context. The issue is that if a user is logged into Vulnerable Site, and can be somehow convinced to visit Bad Guy site, then Bad Guy site can then take an action as the logged user of Vulnerable Site, without the user's consent.
How would that help? This doesn't seem like a solution to the CSRF problem
It's a pretty cool attack chain, if there's an XSS on marketing.example.com it can be used to execute a CSRF on app.example.com! It could also be used with dangling subdomain takeover or if there's open subdomain registration.
[0] https://developer.mozilla.org/en-US/docs/Glossary/Site
That is, if you are using SameSite Lax and not performing state changes on GETs, there is no real attack vector, but like you say it means you need to be able to trust the security of all of your subdomains equally, which is rarely if ever the case.
I'm surprised browser vendors haven't thought of this. Even SameSite=Strict will still send cookies when the request comes from a subdomain. Has there been any talk of adding something like a SameSite=SameOrigin value?
The web platform is intricate, legacy, and critical. Websites by and large can’t and don’t break with browser updates, which makes all of these things like operating on the engine in flight.
For example, click through some of the multiple iterations of the Schemeful Same Site proposal linked from my blog.
Thing is, SameSite’s primary goal was not CSRF prevention, it was privacy. CSRF is what Fetch metadata is for.
That doesn't make any sense to me, can you explain? Cookies were only ever readable or writable by the site that created them, even before SameSite existed. Even with a CSRF vulnerability, the attacker could never read the response from the forged request. So it seems to me that SameSite fundamentally is more about preventing CSRF vulnerabilities - it actually doesn't do much (beyond that) in terms of privacy, unless I'm missing something.
CSRF is when you don't have the authentication token, but can force a user to make a request of your choosing that includes it.
That would be a terrible idea IMO. The insecurity was fundamentally introduced by cookies, which were always a hack. Those should be omitted, and then authorization methods should be designed to learn the lessons from the 70s and 80s, as CSRF is just the latest incarnation of the Confused Deputy:
https://en.wikipedia.org/wiki/Confused_deputy_problem
https://caniuse.com/mdn-http_headers_set-cookie_samesite_str...
This checks Scheme, Port and Origin to decide whether the request should be allowed or not.
So if you follow a link (e.g. from a Google search) to a site that uses SameSite=Strict cookies you will be treated as logged out on the first page that you see! You won't see your logged in state until you refresh that page.
I guess maybe it's for sites that are so SPA-pilled that even the login state isn't displayed until a fetch() request has fired somewhere?
Discussions about this often wind up with a lot of people saying "GET requests aren't supposed to change state!!!", which is true, but just because they're not supposed to doesn't mean there aren't some floating around in large applications, or that there aren't clever ways to abuse seemingly innocuous side effects from otherwise-stateless GET requests (maybe just visiting /posts/1337/?shared_by_user=12345 exposes some tiny detail about your account to user #12345, who can then use that as part of a multi-step attack). Setting the strict flag just closes the door on all of those possibilities in one go.
CSRF is mostly about causing side effects, not about access to information. And presumably just displaying your landing page should not have side effects, even when doing authenticated server side rendering. At least no side effects other than creating logs.
CSRF is about arbitrary clicks in emails and such that automagic your logged-in-session cookies to the server. If you require an extra field and compare it, you’re fine
https://news.ycombinator.com/item?id=46321651
e.g. serve .svg only when "Sec-Fetch-Dest: image" header is present.
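A minimal sketch of that idea, assuming a Flask route (path and filenames are illustrative): only serve the SVG when the browser reports an image subresource fetch, so navigating to the file directly won't render it as a document.

```python
# Sketch: browsers that send Fetch Metadata set Sec-Fetch-Dest to "image"
# for <img> loads and "document" for top-level navigations. Note this also
# blocks older clients that omit the header entirely.
from flask import Flask, abort, request, send_file

app = Flask(__name__)

@app.get("/uploads/<name>.svg")
def serve_svg(name):
    if request.headers.get("Sec-Fetch-Dest") != "image":
        abort(403)
    return send_file(f"uploads/{name}.svg", mimetype="image/svg+xml")
```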
I imagine there’s a fair amount of complexity that would need to be worked out, mostly because the browser doesn’t know the suborigin at the time it makes a request. So Sec-Fetch-Site and all the usual CORS logic would not be able to respect suborigins unless there was a pre-flight check for the browser to learn the suborigin. But this doesn’t seem insurmountable: a server using suborigins would know that request headers are sent as if the request were aimed at the primary origin, and there could be some CORS extensions to handle the case where the originating document has a suborigin.
A key component here is that we are trusting the user's browser to not be tampered with, as it is the browser that sets the Sec-Fetch-Site header and guarantees it has not been tampered with.
I wonder if that's a new thing? Do we already rely on browsers being correct in their implementation for something equally fundamental?
If my client is not a browser surely I can set whatever headers I want? Including setting it to same-origin?
Non-browser clients can be either blocked or even just given a pass, since CSRF is about tricking someone into clicking a link that then sends their Auth cookie along with the request. Either the non-browser request includes a valid cookie in the request and is allowed to mutate state, or it doesn't and nothing happens.
- Strict-Transport-Security
- Content-Security-Policy
- X-Frame-Options
- X-Content-Type-Options
- Referrer-Policy
- Permissions-Policy
- Cross-Origin-Embedder-Policy
- Cross-Origin-Opener-Policy
- Cross-Origin-Resource-Policy
On the other hand, I tried doing a Google search with javascript disabled today, and I learned that Google doesn't even allow this. (I also thought "maybe that's just something they try to pawn off on mobile browsers", but no, it's not allowed on desktop either.)
So the state of things for "how should web browsers work?" seems to be getting worse, not better.
Also, a new header like “sec-policy: foo-url” may be a clean way to move those definitions away from the app+web+proxy+CDN mesh to a single, clear point.
"Origin policy was a proposal for a web platform mechanism that allows origins to set their origin-wide configuration in a central location, instead of using per-response HTTP headers." - https://github.com/WICG/origin-policy
But its status has been "[On hold for now]" for at least three years.
Your approach is similar to the idea behind checking Sec-Fetch-Site. Implementing that header check is the same amount of work, but the header is meant exactly for this purpose, whereas Referer is plagued with problems.
For production systems, a layered defense works best: use Sec-Fetch-Site as primary protection for modern browsers, with SameSite cookies as fallback, and traditional CSRF tokens for legacy clients. This way you get the UX benefits of tokenless CSRF for most users while maintaining security across the board.
The OWASP CSRF cheat sheet now recommends this defense-in-depth approach. It's especially valuable for APIs where token management adds significant complexity to client implementations.
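One way such layering could look, sketched here under the assumption of a Flask app (the allowed origins, cookie and field names, and the token helper are all made up): prefer Sec-Fetch-Site when the browser sends it, fall back to the Origin header, and only then require a traditional token.

```python
# Sketch of a layered CSRF check: Sec-Fetch-Site first, the Origin header as
# a fallback, and a per-session token only for clients that send neither.
from flask import Flask, abort, request

app = Flask(__name__)

SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}
ALLOWED_ORIGINS = {"https://app.example.com"}  # illustrative

def valid_csrf_token(req) -> bool:
    # Placeholder for a traditional token comparison (see the earlier sketch).
    return req.form.get("csrf_token") == req.cookies.get("csrf_token")

@app.before_request
def csrf_guard():
    if request.method in SAFE_METHODS:
        return
    fetch_site = request.headers.get("Sec-Fetch-Site")
    if fetch_site is not None:
        if fetch_site in {"same-origin", "none"}:
            return
        abort(403)  # cross-site (or subdomain) write blocked
    origin = request.headers.get("Origin")
    if origin is not None:
        if origin in ALLOWED_ORIGINS:
            return
        abort(403)
    # Neither header present: fall back to the token for legacy clients.
    if not valid_csrf_token(request):
        abort(403)
```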
And you can fall back to the Origin header, which has universal coverage. Then block anything else.
Also, OWASP doesn't recommend it as defense in depth. It is a primary, standalone defense against CSRF.
https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Re...
What are those?
See https://words.filippo.io/csrf/
I just went looking for docs and it seems that 8.2 is not out yet
https://github.com/rails/rails/pull/56350/
Why? I can send any headers from a client I make.