The Issue of Anti-Cheat on Linux (2024)
Key topics
The article discusses the issue of anti-cheat software on Linux, highlighting its invasive nature and potential security risks, sparking a heated debate among commenters about the trade-offs between security, freedom, and fair play in gaming.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 2h after posting. Peak period: 79 comments on Day 1. Average per period: 26.7. Based on 160 loaded comments.
Key moments
- Story posted: Aug 21, 2025 at 9:09 PM EDT (5 months ago)
- First comment: Aug 21, 2025 at 11:09 PM EDT (2h after posting)
- Peak activity: 79 comments in Day 1, the hottest window of the conversation
- Latest activity: Sep 2, 2025 at 5:15 PM EDT (4 months ago)
So, to sum up, Valorant's anti-cheat, which the author sees as something like an ideal solution:
- starts up and loads its kernel driver on boot.
- generates a persistent unique ID based on hardware serial numbers and associates this with my game account.
- stays active the entire time the system is up, whether I play the game or not. But don't worry, it only does some unspecified logging.
- is somehow not spyware or a data protection risk at all...
For anyone saying “just do server side,” no, it’s physically impossible to stop all cheating that way until we have internet faster than human perception.
Don't let perfect be the enemy of good.
The problem is that server-side occlusion is only a small piece of the puzzle. A naïve implementation means hundreds of thousands of raycasts per second, which doesn’t scale. Real engines rely on precomputed visibility sets, spatial partitioning, and still have to leak some data client-side for responsiveness.
Basically, the kernel-level check isn't laziness; it's a response to problems that are unsolvable without huge compute costs or added latency.
Hundreds of thousands of raycasts per second sounds doable to me, but couldn't you just use a GPU and some simplified level geometry? That ought to scale well enough. It's not free or perfect (knowing the position of a hand, a cheat will be able to estimate where the head is anyway), but that's not the goal, right?
It's a cat-and-mouse game.
BasicallyHomeless has made it his life mission to eradicate cheating in video games.
I think you already know the answer. Yes, it's bottlenecked by latency and jitter (of the laggiest player, no less), and on top of that the maximum possible movement velocity makes it much, much worse in fast-paced games. It's been attempted a few times since at least the late '90s, with predictable results.
In other words, complete server-side calculations are a fantasy. Besides, they won't even remotely make cheating impossible or even harder! Even complete hardware lockdown won't.
The naive raycast from player camera to other player would be fine for perf but may count partially visible as invisible, so it's unacceptable. You'd have to raycast every pixel of the potentially visible player model to stay conservative. With movement + latency this expands to every pixel the player model could potentially occupy during your max latency period, and you need to consider the viewer moving too!
In practice this expands to a visibility test between two spheres with radius max_latency*max_movespeed + player_model_radius. Now, you could theoretically do a bunch of random raycasts between the spheres and get an answer that is right some of the time, but it would be a serious violation of our conservativeness criteria and the performance would get worse with more rays/better results. Also keep in mind that we need to do this for every single player/player pair a few dozen times per second, so it needs to be fast!
To do this, you need a dedicated data structure that maps volumes to other volumes visible from said volume. There are a few, and they are all non-trivial and/or slow to build well (search for, e.g., potentially visible set, cell-portal graph + occlusion). You also trade performance for precision, and in practice your walls might become 'transparent' a bit too early. With all this being done, we can actually "do occlusion calculations server-side".
There's just one problem with this that I still don't know a solution for, namely precision. With fast players and imprecise conservative visibility, things you care about are going to count as visible pretty often, including stuff like enemies peeking from behind a corner (because they could have moved at full sprint for 100ms and the end of the wall is rounded away in your acceleration structure anyway) so all this complexity might not get you that much, particularly if your game is fast paced. You'd prevent some wallhacks but not the ones that really matter.
TLDR yes, it's actually hard and might not be good enough anyway
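As a very rough sketch of the conservative test described above (not any engine's actual implementation), the server can expand each player's position by max_latency * max_movespeed + player_model_radius and consult a precomputed potentially visible set before deciding whether to include an enemy in a client's snapshot. The PVS, cell mapping, and constants below are all made up for illustration:

```python
import math

# Illustrative constants only
MAX_LATENCY = 0.15     # seconds of client lag the server tolerates
MAX_MOVESPEED = 6.0    # fastest possible movement, units per second
MODEL_RADIUS = 0.5     # bounding radius of a player model
CELL_SIZE = 8.0        # size of one grid cell in the toy PVS

# Toy "potentially visible set": for each grid cell, the cells that can be
# seen from anywhere inside it. A real engine would precompute this offline.
PVS = {
    (0, 0): {(0, 0), (1, 0)},
    (1, 0): {(0, 0), (1, 0), (2, 0)},
    (2, 0): {(1, 0), (2, 0)},
}

def cell_of(pos):
    """Map a world position (x, y) to a coarse grid cell."""
    return (int(pos[0] // CELL_SIZE), int(pos[1] // CELL_SIZE))

def expansion_radius():
    # Everywhere the player could be by the time the packet arrives.
    return MAX_LATENCY * MAX_MOVESPEED + MODEL_RADIUS

def could_be_visible(viewer_pos, target_pos):
    """Conservative test: only return False when the target cannot possibly
    be seen, so the server may safely omit it from the viewer's snapshot."""
    if cell_of(target_pos) in PVS.get(cell_of(viewer_pos), set()):
        return True
    # Never cull players that are close enough to overlap anyway.
    return math.dist(viewer_pos, target_pos) <= 2 * expansion_radius()
```

This errs on the side of sending too much, which is exactly the precision problem described above: near corners, "could be visible" comes out true far more often than "is visible".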
I've seen videos where cheats are particularly easy to detect if you are also cheating. I.e., when you have all the information, you can start to see players reacting to other players before they should be able to detect them. So it should be possible to build a repertoire of cheating examples and clean examples using high-level players to catch a fair amount of cheating behavior. And while I understand that there are ways to mitigate this and it's an arms race, the less obvious the cheats are, the less effective they are, almost by definition.
If someone is consistently reacting outside the range of normal human reaction times, they're cheating. If they randomize it enough to be within human range, well, mission accomplished, kind of.
If they're reacting to other players in impossible ways by avoiding them or aiming toward them before they can be seen with unusual precision or frequency, they're cheating.
A lot of complex game dynamics can be simplified to 2D vectors and it shouldn't be that computationally intensive to process.
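As a rough illustration of this kind of statistical check (not any vendor's actual system), a server could log the delay between an enemy first becoming visible to a player and that player's first aim correction toward them, then flag accounts whose whole distribution sits below plausible human reaction times. The threshold and sample count below are invented:

```python
from statistics import median

HUMAN_FLOOR_MS = 150   # invented: consistent reactions well below this are suspect
MIN_SAMPLES = 30       # don't judge on a handful of lucky flicks

def flag_reaction_times(reaction_ms):
    """reaction_ms: per-engagement delays (ms) between an opponent becoming
    visible and the player's first aim adjustment toward them."""
    if len(reaction_ms) < MIN_SAMPLES:
        return False  # not enough evidence either way
    # Cheats that randomize their delays still drag the whole distribution
    # down, so look at the median rather than single outliers.
    return median(reaction_ms) < HUMAN_FLOOR_MS

print(flag_reaction_times([90, 110, 100, 95, 120] * 6))    # True: inhumanly fast
print(flag_reaction_times([230, 310, 190, 280, 260] * 6))  # False: plausible
```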
The first is "never trust the client", i.e. realtime validation and having the server be the sole authority on the current game state. This is the solution programmers think of first, but it's also practically infeasible due to latency, etc.
But what the server could do is a "trust but verify" approach: accept data from the clients when they submit it, but have some background processes that can analyze the data for anomalies and, if too much of it was detected, trigger a ban.
The only problem I see with this approach is that cheaters might react by repeatedly making new accounts and playing as them until the verification process has caught up and bans the account.
Cheating would be more obvious - as cheaters would have to start over with a beginner character every time - but it could still be annoying.
So the problem of ban evasion would become even more important. And I don't really see how a purely server-side solution could work there.
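One hedged sketch of how that "trust but verify" pass might look: accept client updates in real time, but run an offline job over the logged positions and queue accounts for review once too many updates exceed physical limits. The limits and structure here are illustrative, not any studio's design:

```python
from dataclasses import dataclass

MAX_SPEED = 7.5          # illustrative movement speed cap, units/second
REVIEW_THRESHOLD = 50    # violations per match before a human/auto review

@dataclass
class Sample:
    t: float   # server timestamp, seconds
    x: float
    y: float

def speed_violations(samples):
    """Offline pass over one player's logged positions for one match."""
    count = 0
    for prev, cur in zip(samples, samples[1:]):
        dt = cur.t - prev.t
        if dt <= 0:
            continue
        speed = ((cur.x - prev.x) ** 2 + (cur.y - prev.y) ** 2) ** 0.5 / dt
        if speed > MAX_SPEED:
            count += 1
    return count

def review_queue(match_log):
    """match_log: dict of account id -> list[Sample]. Returns accounts whose
    movement data should be escalated (and eventually banned)."""
    return [acct for acct, samples in match_log.items()
            if speed_violations(samples) > REVIEW_THRESHOLD]
```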
Don't worry, it's owned by Tencent.
Honestly I feel like you should only use kernel anticheat on a dedicated machine that's kept 100% separate from any of your personal data. That's a lot to ask of people, but you really shouldn't have anything you don't consider public data on the same hardware.
Probably the only workable solution is for Windows to provide some kind of secure game mode where the game, and only the game, runs, and Windows can attest that nothing else is running. But that anti-cheat would have no access to the data in the OS you do real work in, which isn't running at the time. It ruins multitasking, but assuming you can switch over fast enough it might not be too bad.
> doesn't actually stop all cheaters.
We could have a better discussion around this if we recognize that failing to stop 100% of something isn't a prerequisite to rigorously evaluating the tradeoffs.
I'd argue the potential for abuse is a perfectly reasonable discussion to have, and doesn't have much bearing on the effectiveness of anticheat, but I understand that's not the point you are trying to make.
I didn't claim we should trust the company. Whether we can trust the anticheat maker is certainly part of the rigorous evaluation of the tradeoffs I mentioned. My point was that saying "it doesn't stop cheaters" is both incorrect and stifling to a more productive conversation, because it implies anticheat has no value and is therefore worth no risk.
As for me, if Gabe said "now you can opt your Steam Deck in to a trusted kernel we ship with anticheat and play PUBG," I'd probably do it. But that's because I, for better or worse, tend to trust Gabe. If Tencent were shipping it, I'd probably feel differently.
It is absolutely the case that there would be more cheating if we turned off the only partially effective systems. We know this because they are regularly stopping and banning people!
Cheating is a big draw to Windows for semi-pro gamers and mid streamers. What else is there to do except grind? Windows gives the illusion of "kernel level anti-cheat," which filters out the simplest ones, and fools most people some of the time.
As does Valorant and virtually every other first-person shooter. The cheats aren't people flying around or noclipping; they're wallhacks and aim assists/bots.
I can move and reveal what's behind a corner a lot faster than a network roundtrip, so either the server needs to give some advance warning or you're going to see enemies pop into existence suddenly.
And computing if somebody is almost visible isn't trivial either. Level geometry can have narrow openings such as holes in a wall. Or what if somebody jumps?
And that's before getting into non visual information. It's not perfect, but you could still add a significant advantage by drawing the exact location of footsteps.
So yeah, (some) games try, but network latency means the client needs some information a wallhack can use, and the alternative (being killed by an enemy that was invisible) is at least as frustrating as being killed by a cheater, so the visibility estimate has to be generous.
For instance, a common cheat in Street Fighter 6 is to trigger a drive impact in response to the startup of a move that is unsafe to a drive impact. That is recognizing the opponent's animation and triggering an input. There's no part of that which cares where the game simulation is being done. In fact, this kind of cheating can only be detected statistically. And the cheats have tools to combat that by adding random triggering chances and delays. It's pretty easy to tune a cheat to be approximately as effective as a high-level player.
Kernel-level anticheat isn't a perfect solution, but there are people asking for it. It would make cheating a lot harder, at least.
Correct. Unfortunately, what you've just described is a gaming console rather than a PC. This problem fundamentally undermines the appeal of PC gaming in a significant way, imo.
Yes, game publishers are trying to turn PCs into a gaming console, which IMO will always be a futile effort, and is quite frankly annoying. I don't game on PC to have a locked down console-like experience.
Just embrace the PC for what it is and stop trying to turn it into a trusted execution platform with spyware and rootkits.
Look at BF6: for all the Secure Boot and TPM-required anti-cheat they stuffed it with, there were cheaters on day 1, so why abuse your users when it's clearly ineffective anyway?
The game companies keep saying these things are necessary, yet they don't fully do the very thing they claim to do on the label.
Can't help but ask myself sometimes... why would users want to pay in the first place for the content of someone who invests more money and leverage than some people see in their entire lives into delivering user-hostile technical countermeasures that are ultimately futile most of the time?
What is so valuable about the work of someone who treats their audience this way, however awesome their stuff might be? That's what makes the least sense to me. But then I remember how most people aren't very intentional about most of their preferences and will accept whatever, as long as it's served up by an unaccountable industry into everyone's lives at the same time, in a predictable manner, and I despair.
Of course the argument falls flat on multiple levels: It ignores other ways to prevent cheaters, like server-side detection, or maybe designing gameplay that isn't based on channeling masses of anonymous strangers through the game world. It ignores that it doesn't actually solve the problem of cheaters. And it ignores that many games use anticheat for reasons that have nothing to do with multiplayer at all, e.g. to keep players from bypassing in-game purchases.
Whereas with anti-cheat and DRM, only the 'good guys' get hit, since the 'bad guys' don't follow "the law" anyway.
https://www.dma-cheats.com/
A 14 year old who installs an autoclicker to mess with friends or randoms online I can get. But there are fully grown adults who dedicate their time and substantial amounts of money (whole second computer) just to win in online video games?
What's the motivation/justification for spending hundreds or even thousands of dollars on cheating hardware and software? Are these just super-rich people who have more money than sense?
No doubt there are various reasons, some more understandable than others. There are some fascinating historical cases, like the one explored in "The King of Kong" :
https://youtu.be/_4v15X8Px34
Which is well worth a watch, if you're curious.
b) should’ve specified this is the bigger problem. glad to see from the other comment bf6 is coming on-board, but VALORANT doesn’t and that’s probably the quintessential title for this.
The idea that we should allow arbitrary code execution at some point, then we claw back security by running mass surveillance on your PC is clearly insane.
The only way to go forward is what BF6 has done - ensure the PC is in a pristine state, and nothing bad was loaded in the kernel - which is ironically why their anticheats conflicted - they don't allow loading random crap in the kernel.
Not to mention, people who develop these invasive security modules don't have the expertise, resources or testing culture to muck about in the kernel to the degree they do.
Just how dangerous this actually is was showcased by CrowdStrike last year.
Yes, and at that point, you may as well use Windows for that machine.
Wouldn't it be sufficient to simply have a minimal system installed on a separate partition or on a separate drive (internal or external)? Boot that for gaming, and never give it the password for the encryption of your non-gaming volumes.
But anti-cheat hasn't been about blocking every possible way of cheating for some time now. It's been about making cheating as inconvenient as possible, thus reducing the number of cheaters.
Is the current fad of using kernel level anti-cheats what we want? hell nah.
The responsibility of keeping a multiplayer session clean of cheaters was previously shared between developers and server owners, while today it has fallen mostly on developers (or rather game studios), since they want to own the whole experience.
Microsoft doesn't do any auditing besides "is this the most obvious malware?"
IIRC, even Microsoft was getting fed up with hands in the kernel after CrowdStrike, so we may see it disappear eventually if Microsoft starts cracking down.
I oppose kernel-level anticheat because once it's in place, it will proliferate, even to single player games, just as it has in Windows.
In other words, once it's broadly supported, the number of games available to me (assuming I want to avoid kernel-level anticheat) will actually _shrink_.
But the alternative is cheaters in the game, which your point doesn’t really address. So for many it is a necessary evil, so to speak.
This is a reasonable stance because these things are fundamentally at odds and can't be reconciled on one machine. Either you have an open hackable system, where security comes from cryptography and transparency, or you have a locked down system where security comes from inaccessibility and obscurity.
1) There is a 100k bug-bounty on the anti-cheat: https://hackerone.com/riot?type=team
2) The anti-cheat is the game's entire reason for being. It is the main focus of the development and marketing. People buy Valorant for the anti-cheat; they are willing to accept a kernel driver as a trade off for fairer competition.
Fair competition is all well and good, but there are other ways to do it and I can already tell you that the war on kernel-level anti cheat is well under way. There are already people cheating in Valorant, and that will not slow down. If anything, it's going to get more common because cheaters and cheat creators are some of the most diligent people out there.
No? In which case, what practical spyware risk does a kernel level driver add that user mode software can’t do?
User mode software can spy on your clipboard, surreptitiously take screenshots, and take data out of your system. That spooks me enough that, if I don’t trust a software manufacturer, I don’t install it. Kernel mode makes no practical difference in my security posture.
Not on any properly secured Linux machine. But yes, it's generally a bad idea to install software you don't trust, a category that anticheats slot nicely into, given their resistance to auditing and analysis.
- Creating a unique ID that is directly bound to hardware.
- Accessing the memory of any process, including browsers or messengers.
- Installing persistent background processes that are hidden from the rest of the system.
But I think that's the wrong question. Talking about the kernel driver is a distraction.
The abuse scenario that I think is most likely would be that the game and/or anticheat vendor uses the hardware ID for user profiling instead of just ban enforcement, and that the "logging" functionality is co-opted to detect software or activities that aren't related to cheats at all, but happen to compete with the vendor or can once again be used for profiling, etc.
None of that strictly requires a kernel driver. Most of that stuff could be easily done with a usermode daemon. But under normal circumstances, there is no way I'd install such a program. Only in the name of cheat prevention does it suddenly become permissible to make users install that stuff, if all they want to do is play some game.
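For a sense of what a persistent hardware-bound ID can mean in practice, here is a rough usermode sketch that hashes a couple of machine identifiers a Linux system exposes. The sources real anticheats read are not public; these paths are just examples (the first is typically root-only), and a real product would likely mix in many more, such as disk serials or MAC addresses:

```python
import hashlib
from pathlib import Path

# Example identifier sources; purely illustrative.
ID_SOURCES = [
    "/sys/class/dmi/id/product_uuid",  # firmware/motherboard UUID (root-only)
    "/etc/machine-id",                 # systemd machine id
]

def hardware_id():
    parts = []
    for path in ID_SOURCES:
        try:
            parts.append(Path(path).read_text().strip())
        except OSError:
            parts.append("")           # tolerate missing or unreadable sources
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

print(hardware_id())  # stable across game reinstalls, tied to this machine
```

The point stands: nothing in this requires a kernel driver, and the same ID works just as well for profiling as for ban enforcement.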
Yes.
https://www.youtube.com/watch?v=RwzIq04vd0M
It seems to me that kernel-level anti-cheat is little more than a speed bump for determined cheaters.
Obviously, our personal priorities differ. That's fine, but yours don't invalidate my earlier point.
By the way, it's never just one determined cheater. Once discovered, circumvention techniques get shared, just as with mod chips and exploit scripts. It's only a matter of time before anyone willing to do a little reading or buy a little hardware can use them. And they do. (Often on alt accounts, with no fear of getting banned.)
In other words, any relief from game cheaters is bound to be temporary, while harm from spyware or an exploit is irreparable to anyone who values the privacy of their data.
This is why kernel-level anti-cheat systems are so widely criticized. They might make sense on dedicated gaming machines, where the risks are low, but the situation is very different on general-purpose computers.
And you would rightly tell them to piss off and get out of your house, because that makes no sense. If you really wanted to torture the metaphor, you could I guess argue that they need full access to your house just in case you decide to pull some loaded dice out of the filing cabinet or something, but that's not really the important thing to me. The important thing is that, regardless of whether or not I trust the developer of the anti-cheat, the game just isn't that important.
Because somehow Proton is better than standing for actual GNU/Linux games.
So, like IBM with OS/2 and Windows, studios keep ignoring Linux and let Valve do whatever is needed; it's Valve's problem to sort out.
Is the memory of this kernel module protected from access by another kernel module?
Which obviously causes all kinds of issues, and violates both freedoms 0 and 1 https://www.gnu.org/philosophy/free-sw.en.html
And they don't just remove those freedoms regarding the game, but for the entire system.
They do not, as long as you can disable the anti-cheat and reboot.
Even if the game itself doesn't grant me that freedom, my OS and drivers should not prevent me from attaching a debugger to the game without it noticing.
My computer, and the software on it, should obey me, and me alone. Never should they obey a developer's desire to restrict what I can and cannot do.
That is the ideological basis of the free software movement, and as you may have noticed, incompatible with client side anticheat.
I'd end up in court if I gave a random game developer root permissions on the same system that I use for client projects. But installing a kernel module is fine?
If the valorant module wanted, it could intercept anything from that point on. It could intercept me trying to uninstall it, and pretend it had been removed, while just hiding itself. It could intercept any debugging I'd be trying to do, and feed me false data.
That's why I don't use proprietary kernel modules, and never run proprietary code with root permissions.
And I shouldn't have to. Games don't need client side anticheat.
Why do even many single player games now ship with anti-cheat? Because they want to protect their lootboxes and microtransactions.
And even competitive games don't need client-side anti-cheat. Most games are perfectly fine with a well-written server-side anticheat, and the ones that aren't work fine if you host a private server with people you know.
No other part of IT would ever trust the client. Giving the client information they shouldn't have is an instant CVE, and so is relying on client-side validation.
But client-side anticheat is cheaper, and matchmaking increases engagement, so alternatives are dismissed.
I don't want to play with randoms. Even in mmorpgs I prefer finding a group via the zone chat, which also encourages finding a guild and making friendships, over playing with randoms. Especially if the matchmaking doesn't even take party roles into account.
So why should I break my clients' trust to give control of my system to someone I don't know to install software I don't want just so I can play a game with matchmaking just because the developer didn't want to pay for proper server-side anticheat?
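To make the "never trust the client" point concrete, here is a minimal sketch of server-authoritative movement: the client's packet is treated as a request, clamped against the rules, and the server's own result becomes the true state. The step budget and names are illustrative:

```python
MAX_STEP = 0.5  # illustrative per-tick movement budget

def apply_move(server_pos, requested_pos):
    """Server-side validation: accept a plausible move, clamp an implausible
    one, and keep the server's result as the authoritative position."""
    dx = requested_pos[0] - server_pos[0]
    dy = requested_pos[1] - server_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= MAX_STEP:
        return requested_pos
    # Clamp to the allowed step in the requested direction; repeated
    # offenders can also be logged for an offline review pass.
    scale = MAX_STEP / dist
    return (server_pos[0] + dx * scale, server_pos[1] + dy * scale)
```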
Genshin's anticheat was used to install ransomware, ESEA's anticheat was used to install bitcoin miners on users' machines, EA's anticheat was used to hack clients' computers during a tournament, etc.
When not explicitly malicious, anticheat software is at best spyware that watches your computer use to identify cheating. People complain a ton about Microsoft Recall storing screenshots of your computer locally being a security risk, and yet they're fine with a Chinese-owned anticheat program taking screenshots of your computer and uploading them online. And even if the company isn't trying to use that info to spy on you, my understanding is that as a Chinese company, you have to give the government full access to that data.
With the ongoing/rising tensions between the US and China, I actually think there's a significant chance that we may see all Chinese owned anticheat programs banned in the US, which would be pretty significant since they own or partially own the majority (as far as I know).
Well, I don't think anyone reasonable should be telling others what they "should" be ok with, myself included (I made an exception this one time).
> Genshin's anticheat was used to install ransomware
You should tell the full story: Ransomware installed Genshin's anticheat because it was whitelisted by antivirus providers, then used the anti-cheat to load itself deeper into the system. So it's not really a problem with Genshin's anticheat (indeed, users who had never played the game or even heard of it would be affected), but with how antivirus providers dealt with it.
> ESEA's anticheat was used to install bitcoin miners
You should tell the full story: Someone compromised the supply-chain and snuck a miner into the anticheat binary. It was discovered immediately, and the fact that the miner was in the anticheat and not, say, a game loader, did nothing to hide it.
> People complain a ton about Microsoft recall storing screenshots of your computer locally being a security risk, and yet they're fine with a Chinese owned anticheat program taking screenshots of your computer and uploading them online
This is just a fallacy. Like saying "people voted for candidate A, but then they voted for candidate B!" Obviously, there can be multiple groups of people, and saying that "people" vaguely support X but not Y is usually a misunderstanding of the groupings involved.
The obvious explanation for this "apparent" contradiction you point out is: Windows Recall is likely to be an on-by-default feature, and people don't really trust Microsoft not to "accidentally" enable it after an update. Also, Recall would likely be installed on all computers, not just gaming PCs. That's a big deal. A lot of people have multiple PCs, because they're cheap and ubiquitous these days. Maybe they're okay with Recall and/or anticheat taking snapshots of their gaming PCs, but not the laptop they use to do their taxes, etc. The source of your confusion is likely the misunderstanding that most people, unlike the HN crowd, are practical, not ideological. They don't oppose anticheat on some abstract level; they care about the practical reality it brings to their life.
Another element is that most people, at least in the US, have "spy fatigue". They figure, hey, the US government spies on me, the Five Eyes spy on me, Russia and China spy on me, what does it matter?
Software with that level of access having a supply chain compromise is not an argument in its defense.
The distinction doesn't really matter. The claim wasn't that the ransomware authors exploited deficiencies in the anticheat design, just that the anticheat was used to install the ransomware, which it was.
You can do this on macOS too, by the way. XNU is open-source.
We can run tasks on them that only produce valid output if the boot chain is verified.
How would one get the modified XNU past the verified-boot process? Turn off verified boot?
It's much harder to cheat if the game isn't running on your computer.
The ultimate "anti-cheat" is playing on some trusted party's computer. That can be a cloud machine, but I think today a game console would work just as well, turn that closed nature into an actual user-facing benefit. Console manufacturers seem focused on their traditional niche of controller couch gaming and not on appealing to high-FPS keyboard-and-mouse gamers, though.
It doesn't even seem very hard to implement, steam already has the ability to stream games, they could add this pretty easily as an option for any game (although there is the concern of the extra cost of running the servers).
That shouldn't be a problem if all players, regardless of the OS, are required to use the same cloud service with similar latency.
Alas, I'd like to believe we could be in an era of "hey, not a problem, just have a dedicated gaming machine," but that too is difficult.
139 more comments available on Hacker News