Valorant's 128-Tick Servers (2020)
Mood: calm
Sentiment: positive
Category: other
Key topics: Riot Games' article on Valorant's 128-tick servers is discussed, with commenters praising the engineering effort and debating the implications of high tick rates on gameplay and server performance.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 25m after posting
Peak period: Day 1 (123 comments)
Avg / period: 42.7
Based on 128 loaded comments
Key moments
- Story posted: Oct 6, 2025 at 4:47 PM EDT (about 2 months ago)
- First comment: Oct 6, 2025 at 5:12 PM EDT (25m after posting)
- Peak activity: 123 comments in Day 1 (hottest window of the conversation)
- Latest activity: Oct 14, 2025 at 5:18 AM EDT (about 1 month ago)
Hopefully competition from Valorant and others puts more pressure to make things happen at Valve.
(Veering offtopic here) Remember that Valve invented the free-to-play business model when they made TF2 free. As Gabe Newell said in some interview long ago, they made more money from TF2 after it went F2P ("sell more hats!")
Point being: whether a game is paid or free is largely irrelevant to its profitability and engineering budget.
That said, I'm not sure why you say CS is a paid game. It is also free-to-play. Is some playable content locked behind a paywall?
https://help.steampowered.com/en/faqs/view/4D81-BB44-4F5C-9B...
This feels like a less-than-ideal architectural choice, if that's the case!?
Sounds like each game server is independent. I wonder if anyone has more shared state multi-hosting? Warm up a service process, then fork it as needed, so there's some share i-cache? Have things like levels and hit boxes in immutable memfd, shared with each service instance, so that the d-cache can maybe share across instances?
With Spectre/Meltdown et al., a context switch probably has to totally flush the caches nowadays? So maybe this wouldn't be enough to keep data hot, and you might need a multi-threaded rather than multi-process architecture to see shared-caching wins. Obviously I dunno, but it feels like caches are shorter-lived than they used to be!
I remember being super hopeful that maybe something like Google Stadia could open up some interesting game architecture wins, by trying to render multiple different clients cooperatively rather than as individual client processes. Afaik nothing like that ever emerged, but it feels like there's some cool architecture wins out there & possible.
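A minimal Linux sketch of the memfd idea floated above - build immutable level data once, seal it read-only, and fork match servers that share the physical pages. LevelData and all sizes are invented, and error handling is omitted:

```cpp
// Sketch of "immutable level data in a memfd", shared across forked servers.
// Linux-only; LevelData is a hypothetical stand-in for real level geometry.
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>

struct LevelData {            // hypothetical immutable level geometry
    float hitboxes[1024][6];  // AABBs: min xyz, max xyz
};

int main() {
    // Parent "zygote" builds the level once into an anonymous memfd...
    int fd = memfd_create("level", MFD_CLOEXEC);
    ftruncate(fd, sizeof(LevelData));
    auto* level = static_cast<LevelData*>(
        mmap(nullptr, sizeof(LevelData), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    // ... populate level->hitboxes here, then seal the mapping read-only.
    mprotect(level, sizeof(LevelData), PROT_READ);

    // Each match server is forked; the read-only mapping is inherited, so all
    // instances hit the same physical pages (and thus the same cache lines).
    for (int i = 0; i < 4; ++i) {
        if (fork() == 0) {
            printf("server %d sees hitbox[0] minX=%f\n", i, level->hitboxes[0][0]);
            _exit(0);  // child: run one match simulation against the shared level
        }
    }
    return 0;
}
```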
This is one of those things that might take weeks just to _test_. Personally I suspect the speedup by merging them would be pretty minor, so I think they've made the right choice just keeping them separate.
I've found context switching to be surprisingly cheap when you only have a few hundred threads. But ultimately, there's no way to know for sure without testing it. A lot of optimization is just vibes and hypothesizing.
A "tick", or an update, is a single step forward in the game's state. UPS (as I'll call it from here) or tick rate is the frequency of those. So, 128 ticks/s == 128 updates per sec.
That's a high number. For comparison, Factorio is 60 UPS, and Minecraft is 20 UPS.
At first I imagined an FPS's state would be considerably smaller, which should support a higher tick rate. But I also forgot about fog of war & visibility (Factorio, for example, just trusts the clients), and the need to animate for hitbox detection. (Though I was curious whether they're always animating players? I'd assume there's a big single rectangular bounding box or sphere, and only once a projectile is in that range do animations occur. I assume they've thought of this and it just isn't in the article. But then there was the note about not animating the "buy" phase, too…)
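For readers unfamiliar with tick loops, a minimal fixed-timestep sketch at 128 UPS - nothing Valorant-specific, with simulate/broadcast as placeholders:

```cpp
// Minimal sketch of a fixed-timestep server loop at 128 ticks per second.
#include <chrono>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    constexpr auto kTick = std::chrono::nanoseconds(1'000'000'000 / 128); // ~7.8 ms budget

    auto next = clock::now();
    while (true) {
        // simulate();   // advance game state by exactly one tick
        // broadcast();  // send the resulting snapshot to clients

        next += kTick;                       // fixed cadence, independent of work done
        std::this_thread::sleep_until(next); // sleep off whatever budget is left
    }
}
```

The whole engineering challenge in the article boils down to keeping the work inside that loop under the ~7.8 ms budget, every tick, for every match on the box.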
CSGO was at 64 for the standard servers and 128 for Faceit (IIRC CS2 is doing some dynamic tick shenanigans, unless they walked that back).
Overwatch is I think at 60
In practice it seems to have been an implementation nightmare because they've regularly shipped both bugs and fixes for the "sub-tick" system.
The netcode in CS2 is generally much worse than CSGO or other source games. The game transmits way more data for each tick and they disabled snapshot buffering by default. Meaning that way more players are experiencing jank when their network inevitably drops packets.
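For context, snapshot buffering - which the comment says CS2 now disables by default - is roughly this client-side idea. A minimal sketch, with all names invented:

```cpp
// Keep a small queue of server snapshots and render slightly in the past,
// so a dropped or late packet doesn't cause a visible hitch.
#include <deque>
#include <cstdint>

struct Snapshot { uint32_t tick; /* ... entity states ... */ };

class SnapshotBuffer {
    std::deque<Snapshot> buf_;
    static constexpr size_t kDelay = 2;  // render 2 ticks behind the newest snapshot

public:
    void onReceive(const Snapshot& s) { buf_.push_back(s); }

    // Returns the snapshot to interpolate from, or nullptr if we must extrapolate.
    const Snapshot* current() {
        if (buf_.size() <= kDelay) return nullptr;       // still filling the buffer
        while (buf_.size() > kDelay + 1) buf_.pop_front();
        return &buf_.front();
    }
};
```

The trade-off is added latency (here, two ticks) in exchange for smoothness when packets inevitably arrive late or out of order.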
With that being said: totally agree on the netcode.
[0]: https://old.reddit.com/r/GlobalOffensive/comments/1fwgd59/an...
I also remember reading a few posts about their new subtick system but never put two and two together. Hopefully they keep refining it.
Also, some of the things you might want to compute for the bots to use in their decision-making can be shared among all bots.
Fortnite bots are very barebones and are only capable of performing a handful of simple tasks in repetitive ways. It's entirely plausible that the code responsible for governing their actions is fast enough to be less expensive than networking a real player.
But it wouldn’t surprise me too much if those use a higher tier of hardware.
I don't think its ticks per second are great, because the game is known for significant lag when more than a dozen players are in the same place shooting at things.
Having played both of these games for years (literally, years of logged-in in-game time), most FPS games with faster tick systems generally feel pretty fluid to me, to the point where I don't think I've ever noticed the tick system acting strange in an FPS beyond extreme network issues. The technical challenges that go into making this so are incredible, as outlined in TFA.
It's important to some players because you can get some odd behaviour out of the game by starting multiple actions on the same tick, or on the tick after you started a different action. It's ridiculously click-intensive, but you can get weird benefits like cutting the time to take an action short or getting XP in two skills at once.
RS3 has seemingly leaned into this quirk of the engine, causing many high-level activities to top out around 250-300 actions per minute (2.5-3x the tick rate of the game itself, as measured by keypresses in some streamers' software setups; this doesn't include mouse interactions). These extra actions include swapping weapons, casting spells, swapping gear, using items, eating food and consuming potions, changing prayers (character effects/buffs), and movement. Gameplay becomes incredibly complex due to the nuances of the engine's interpretation of actions, despite the limited temporal fidelity. These actions become so rhythmic, in fact, that many players will play 100 or 200 BPM music as they play to subconsciously sync their actions to the game engine.
High level PVP play is basically a turn-based-tactics game, with some moves (attacks or spells) taking more "ticks" than others, meaning there's a lot of bluffing and mind games in anticipating what your opponent will do next.
And yeah, a lot of people are quite predictable and easy to read. In fairness there are only a handful of things you could possibly do in a fight :-)
Now that's a fun one to think about. Hitscan attacks are just vectors right? So would there be some perf benefit to doing that initial intersection check with a less-detailed hitbox, then running the higher res animated check if the initial one reports back as "Yeah, this one could potentially intersect"? Or is the check itself expensive enough that it's faster to just run it once at full resolution?
Either way, this stuff is engineering catnip.
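The question above is essentially broad-phase vs. narrow-phase collision testing. A hedged sketch of the two-stage check (Vec3 and the detailed hitbox test are placeholders, not anyone's actual implementation):

```cpp
// Two-stage hitscan: a cheap ray-vs-sphere rejection test first, with the
// expensive animated per-bone hitbox test run only when the cheap test hits.
struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

// Broad phase: does the ray pass within `radius` of the player's center?
bool rayHitsSphere(Vec3 origin, Vec3 dir /*unit length*/, Vec3 center, float radius) {
    Vec3 oc = sub(center, origin);
    float t = dot(oc, dir);              // closest approach along the ray
    if (t < 0) return false;             // target is behind the shooter
    float d2 = dot(oc, oc) - t * t;      // squared distance at closest approach
    return d2 <= radius * radius;
}

bool testDetailedHitboxes() { return false; }  // stub for animated per-bone capsules

bool resolveShot(Vec3 origin, Vec3 dir, Vec3 playerCenter) {
    if (!rayHitsSphere(origin, dir, playerCenter, 1.5f)) return false;  // cheap reject
    return testDetailedHitboxes();       // expensive animated check, rarely reached
}
```

Since most shots miss most players, the cheap sphere test filters out the vast majority of candidates before any animation work happens.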
…and how occlusion culling worked with BSP trees in Quake, if I remember correctly.
I think for FPSes, the server relies on the client for many of the computationally intensive things, like fog of war, collision detection, line of sight and so on. This is why cheating like wall hacks are even possible in the first place: The client has the total game state locally, and knowing where to look for and extract this game state allows the cheater to know the location of every player on the map.
If the server did not reveal the location of other players to the client until the server determined that the client and opponents are within line of sight, then many types of cheating would basically be impossible.
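A toy sketch of that server-side culling idea: an opponent who fails the line-of-sight test is simply never put in the client's snapshot. hasLineOfSight here is a hypothetical stand-in for a real raycast against map geometry:

```cpp
// Only include an opponent in a client's snapshot if line of sight passes.
// Data the client never receives cannot be revealed by a wallhack.
#include <vector>

struct Entity { int id; float x, y, z; };

bool hasLineOfSight(const Entity&, const Entity&) { return true; } // stub: real code raycasts the map

std::vector<Entity> buildSnapshotFor(const Entity& viewer,
                                     const std::vector<Entity>& others) {
    std::vector<Entity> visible;
    for (const auto& e : others)
        if (hasLineOfSight(viewer, e))
            visible.push_back(e);
    return visible;
}
```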
For open-world games, or if you have complex geometry like buildings with windows and such, it's not really worth the effort to implement; it's useless in most areas of the map.
Sending everything to all clients is the simplest and easiest implementation; all the other options are much harder to implement and add a lot of load on the server, especially if you have more than ~10 players.
This was the same as BF3, but there were also some issues with server load making things worse and high-ping compensation not working great.
After much pushback from players, including some great analysis by Battle(non)sense[2] that really got traction, the devs got the green light on improving the network code and worked a long time on that. In the end they got high-tickrate servers[3][4], up to 144Hz though I mostly played on 120Hz servers, along with a lot of other improvements.
The difference between a 120Hz server and a 30Hz was night and day for anyone who could tell the difference between the mouse and the keyboard. Problem was that by then the game was half-dead... but it was great for the 15 of us or so still playing it at that time.
[1]: https://www.reddit.com/r/battlefield_4/comments/1xtq4a/battl...
[2]: https://www.youtube.com/@BattleNonSense
[3]: https://www.reddit.com/r/battlefield_4/comments/35ci2r/120hz...
[4]: https://www.reddit.com/r/battlefield_4/comments/3my0re/high_...
Also, it's not just for performance reasons - I wouldn't call the BEAM VM hard real-time - but also for code: your game server would usually be the client but headless (without rendering), which helps with reuse and architecture.
Erlang actually has good enough performance for many types of multiplayer games, though you are correct that it may not cut it for fast-paced twitch shooters. Well... I'm not exactly sure about that: you can offload lots of expensive physics computations to NIFs. In my game the most expensive computation is AI path-finding, though that never occurs on the main simulation tick - other processes run it on their own time.
Game servers typically use very cheap memory allocation techniques like arenas and utilize DOD. It's not uncommon for a game server simulation to be just a bunch of arrays that you grow, never shrink, and then reset at the end of the game.
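For readers unfamiliar with the arena pattern mentioned above, a minimal bump-allocator sketch - illustrative only, not from the thread:

```cpp
// Bump arena: allocation is a pointer increment, nothing is freed mid-match,
// and reset() recycles the whole block at the end of a game (or of a tick,
// for per-tick scratch data).
#include <cstddef>
#include <cstdlib>

class Arena {
    char*  base_;
    size_t cap_, used_ = 0;
public:
    explicit Arena(size_t cap) : base_(static_cast<char*>(std::malloc(cap))), cap_(cap) {}
    ~Arena() { std::free(base_); }

    void* alloc(size_t n, size_t align = alignof(std::max_align_t)) {
        size_t p = (used_ + align - 1) & ~(align - 1);  // round up (power-of-2 align)
        if (p + n > cap_) return nullptr;               // grow-or-die policy goes here
        used_ = p + n;
        return base_ + p;
    }
    void reset() { used_ = 0; }  // "free" everything in O(1) at match end
};
```

A whole match's worth of allocations disappears with one reset(), which is why "grow, never shrink, reset at the end" is so cheap.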
I find myself interested in developing multiplayer simulations with more flexible deadlines. My MMO runs at 10 ticks per second, and it's not twitch-based, so the main simulation process can have pauses without a big impact on gameplay (though this has never occurred).
As long as: (tick process time) + (send update to clients) + (gc pause) < 100ms, everything is fine? (My assumption.)
Btw, what does DOD mean? Is it Data on Demand? Since my game is persistent I can't reset arrays at some match-end state, so I store things either in maps on the main server process, or in the dedicated client process's state (which can only be updated via the server process).
CoD Black Ops used/uses Erlang for most of its backend afaik. https://www.erlang-factory.com/upload/presentations/395/Erla...
The other reason is that the client and the server have to be written in the same language.
This isn't true at all.
Sure, it can help to have both client and server built using the same engine or framework, but it's not a hard requirement.
Heck, the fact that you can have browser-based games when the server is written in Python is proof enough that they don't need to be the same language.
Mobile users will hate you when your game drains their battery much faster than it should.
> I'm talking about AAA online games here, which 99% are built in c++ and the rest in c#.
It still doesn't apply. There's absolutely nothing stopping you from having a server written in Java with a game client written in C#, C++, or whatever.
I'm really curious why you think client and server must be written in the same language. A TCP socket is a TCP socket. It doesn't matter what language opens the connection. You can send bytes down from one side and decode them on the other. I mean, sure, if you're writing the server in Java and use the language's object serialization functions to encode them, you might have a hard time decoding them on the other side if the client is in C, but the answer then is to not use Java's object serialization functions. You'll roll your own method of sending updates between client and server.
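For illustration, a rolled-your-own wire format can be as simple as packing fields into bytes; any client that knows the layout decodes them, whatever language it's written in. The message layout below is invented:

```cpp
// A toy "position update" message: 16 bytes, language-agnostic on the wire.
#include <cstdint>
#include <cstring>

struct PosUpdate { uint32_t entityId; float x, y, z; };

// Real code would also handle endianness explicitly instead of assuming
// both sides are little-endian IEEE-754 machines.
size_t encode(const PosUpdate& m, uint8_t* out) {
    std::memcpy(out,      &m.entityId, 4);
    std::memcpy(out + 4,  &m.x, 4);
    std::memcpy(out + 8,  &m.y, 4);
    std::memcpy(out + 12, &m.z, 4);
    return 16;
}

PosUpdate decode(const uint8_t* in) {
    PosUpdate m;
    std::memcpy(&m.entityId, in,      4);
    std::memcpy(&m.x,        in + 4,  4);
    std::memcpy(&m.y,        in + 8,  4);
    std::memcpy(&m.z,        in + 12, 4);
    return m;
}
```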
I'm talking real time games here, not an .io game over websocket/json.
I don't know why it's not more popular. Before I started the project, some people said the BEAM VM would not cut it for performance, but this was not true. For many types of games, you are not doing expensive computation on each tick; rather it's just checking rules for interactions between clients and some quick AABB + visibility checks.
I was imagining some blindingly fast C or Rust on bare metal.
That UE4 code snippet is brutal on the eyes.
> At any given time, ~50 of those games are going to be in the buy phase. Players will be purchasing equipment safely behind their spawn barriers and no shots can hurt them. We realized we don't even need to do any server-side animation during the buy phase, we could just turn it off.
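A toy sketch of the kind of gate that quote describes - skipping server-side animation work while a match is in its buy phase (all names here are invented, not Riot's code):

```cpp
// Per-match tick: animation is only simulated once shots can actually land.
enum class Phase { Buy, Play, PostRound };

void tickMatch(Phase phase) {
    // simulateMovementAndRules();        // always runs
    if (phase != Phase::Buy) {
        // updateServerSideAnimation();   // skipped for the ~1/3 of matches buying
    }
    // broadcastSnapshot();
}
```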
That explains the current trend in "online" video games that is so annoying: for 10 minutes of play, you have to wait through 10 minutes of lobby time and forced animations, like end-of-game animations. On BO6 it kills me - you just want to play, sometimes you don't have more than 30 minutes for a quick session, and with current games you always have to wait a very, very long time. Painfully annoying.
In Valorant (similar to Counter-Strike), before each round you have 60 seconds to buy your weapons and abilities for that round. A Valorant/CS match is typically first to 13 round wins, with a "buy" period before every round.
It's a deceptive way to sell people less game.
That's a dumb take. The buying phase is an integral part of the game mode. And the game is free.
I don't know how long game rounds last, but if you tell me you're locked in that state for 60s, that means the playable part of a round lasts around 2 minutes.
And anyway, you will spend on average a third of your time not playing, when you came to play.
The numbers look strange, as 2 minutes seems very low to me for a match, but I don't know how else you'd explain the 50/150 ratio.
And just as an aside, the article itself tells you that they capitalize on you being in the buy phase to reduce server costs. So it looks like even if they could improve this on the gameplay side, there would be a financial incentive not to.
The buy phase is playing. It's coordinating with your team on what loadouts will best counter the enemy team. The decision-making is fun, and it is play. Just because you're not shooting at other players doesn't mean it's not playing.
For this game it's hard for me to say, because I don't play it, but I still see complaints about this and about game duration:
https://www.reddit.com/r/VALORANT/comments/169ts34/solution_...
https://www.reddit.com/r/VALORANT/comments/1ekvieo/what_are_...
But generally, for other recent games I've played, that is a strong complaint of mine. BO6, for example, is awful for this.
One example I noticed in the past years is racing games like Burnout. The original game was perfect for a quick relaxing session: you start the game, and you're playing (driving) most of the time. In the later versions of Burnout, you lose hours waiting on unskippable intro, race-start, and race-end cinematics, and the whole interface makes it painful to string together a long stretch of "really playing".
And of course this happens most often when games are online, despite there being no need for a real "loading" phase that reloads the same assets 50 times when you play the same game over and over.
This game has lots of downtime for the player. For example, a round lasts 100 seconds (then, if the bomb is planted, 45 more seconds until it explodes). If you die early in the round, you are dead until the start of the next round. Worst case, that's over 2 full minutes of downtime for a single player. On top of that, the time-to-kill is VERY low; a single headshot can mean death before you can even react. Compare that to BO6, where most modes have immediate respawns and a relatively high time-to-kill.
It's not something they are optimizing for financial incentives, it's how the game is played, and how it's been played for 25 years (the originator of the format is Counter Strike).
Valorant's buy phase is 30 seconds, with +15sec at start of match and halftime.
Can Valorant be exactly the same, and fun, but without MMR? Hmm probably not no.
Demigod (2009), the first standalone MOBA, died for two reasons: it cost money, and it lacked MMR.
Can MMR be done quickly? IMO, no. The long waits are a symptom of how sensitive such games (BO6, CSGO, Valorant, etc.) are to even small differences in skill among the players. Versus say Hearthstone, which has MMR, but it is less impactful.
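A common way to implement that trade-off - a hedged sketch, not how any of these games actually does it - is to widen the acceptable skill gap as queue time grows; the constants below are invented:

```cpp
// Skill-sensitive matchmaking trades wait time for match quality:
// accept a wider MMR gap the longer a player has been queued.
#include <algorithm>
#include <cstdlib>

int allowedMmrGap(int secondsQueued) {
    constexpr int kStartGap  = 50;   // tight match quality at first
    constexpr int kPerSecond = 5;    // relax as the wait grows
    constexpr int kMaxGap    = 400;  // never match wildly mismatched players
    return std::min(kStartGap + kPerSecond * secondsQueued, kMaxGap);
}

bool canMatch(int mmrA, int mmrB, int secondsQueued) {
    return std::abs(mmrA - mmrB) <= allowedMmrGap(secondsQueued);
}
```

The more sensitive a game is to small skill differences, the smaller the starting gap and the slower the relaxation, so the longer the waits.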
Thing is, League can be offline for 24h, and people will still come back and play it the next day. This has actually happened a few times in their history. So 10m of waiting... it sucks but people do it.
Another POV - this thread is chock full of them - is that you're just not the intended audience for the Xbox / PS5 / Steam / PC-launcher channel. It's stuff for people with time. What can I say? It's a misconception that this stuff isn't inherently demographics-driven. The ESA really wants people to believe that the average "gamer" is 31 years old or whatever, which is ridiculous - I don't know which 31-year-olds with kids have time for the stuff you're complaining about. In reality, the audience is a 13-year-old boy with LOTS of time, and 10m to him is nothing.
Looking at Apple Arcade, which has a broader audience, there are basically no multiplayer games, and you can get started playing very quickly in any of the strategy games there, so maybe that is for you.
The modern matchmaking approach groups people by skill not latency, so you get a pretty wild mix of latency.
It feels nothing like the old regional servers. Sure, the skill mix was varied, but at least you got your ass handed to you in a crisp <10ms by actual skill. Now it's getting knife-noscoped around a corner by a guy who has already rubberbanded 200ms into the next sector of the map, all while insulting your mom and wearing a unicorn skin.
This was only really doable because Riot has invested significantly in buying dark fiber and peering at major locations worldwide [1][2].
[0]: https://technology.riotgames.com/news/peeking-valorants-netc...
[1]: https://technology.riotgames.com/news/fixing-internet-real-t...
[2]: https://technology.riotgames.com/news/fixing-internet-real-t...
I work at a game studio, and something I've seen is that nobody is on wired anymore. You're a power user if you're on wired. The overwhelming ~99% of users will be on mobile or Wi-Fi, with ~10ms to the first hop or two.
The Houston/Dallas/Austin/San Antonio region was like a mini universe of highly competitive FPS action. My 2mbps roadrunner cable modem could achieve single digit ping from Houston to Dallas. Back in those days we plugged the modem directly into the gaming PC.
Counter-Strike 2 implements a controversial "sub tick" system on top of 64 TPS. It is not comparable to actual 128 TPS, and often worse than standard 64 TPS in practice.
Most game servers are single threaded because the goal is to support the maximum number of players per dollar.
A community server doesn’t mind throwing more compute dollars to support more players or higher tick rate. When you have one million concurrent players - as CounterStrike sometimes does - the choice may be different.
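A minimal sketch of that model - independent single-threaded simulations, one per match, with no shared mutable state between them (runMatch and the match count are placeholders):

```cpp
// Many independent single-threaded sims on one box: each match gets its own
// thread running its own tick loop, and matches never touch each other's state.
#include <thread>
#include <vector>

void runMatch(int matchId) {
    // single-threaded fixed-timestep loop for this one match
    // (see the 128-tick loop sketch earlier in the thread)
}

int main() {
    std::vector<std::thread> matches;
    for (int id = 0; id < 32; ++id)   // one box hosting 32 matches
        matches.emplace_back(runMatch, id);
    for (auto& t : matches) t.join();
    return 0;
}
```

Keeping sims independent is what makes "players per dollar" scale linearly: add cores, add matches, with no cross-match locking.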
It would be interesting to know what Valve's server costs look like over time. They definitely spend a pretty penny. And any business would prefer to spend one penny rather than two.
The downside measured in dollars continues to decrease exponentially. If it's $100 today, soon it will be $30. It's still "twice", but the thing you're twice-ing is smaller and smaller and smaller.
We're still making good progress.
If you just make a list of “performance tweaks” you might learn about in, say, a game dev blog post on the internet, and execute them without considering your application’s specific needs and considerations, you might hurt performance more than you help it.
nice.
| We were still running on the older Intel Xeon E5 processors, ...
| Moving to the more modern Xeon Scalable processors showed major performance gains for our server application
But - I was unable to find any mention in the article as to what processors they were actually comparing in their before/after.