Move Over Dijkstra: New Algorithm Just Rewrote 70 Years of Computer Science
Posted 3 months ago · Active 3 months ago
Source: medium.com (Tech story)
Sentiment: skeptical/mixed · Debate · 60/100
Key topics: Algorithms, Computer Science, Dijkstra's Algorithm
A new algorithm claims to have rewritten 70 years of computer science by improving upon Dijkstra's algorithm, but HN commenters are questioning its novelty and significance.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion · First comment: 35s after posting · Peak period: 17 comments in 0-6h · Avg per period: 4.1
Comment distribution: 29 data points, based on 29 loaded comments
Key moments
- 01 Story posted: Oct 3, 2025 at 8:19 AM EDT (3 months ago)
- 02 First comment: Oct 3, 2025 at 8:20 AM EDT (35s after posting)
- 03 Peak activity: 17 comments in 0-6h, the hottest window of the conversation
- 04 Latest activity: Oct 5, 2025 at 2:24 PM EDT (3 months ago)
Want the full context?
Read the primary article or dive into the live Hacker News thread when you're ready.
Breaking the sorting barrier for directed single-source shortest paths - https://news.ycombinator.com/item?id=44812695 - Aug 2025 (51 comments)
It might optimize internal routing, but getting this standardised across vendors is another matter: not impossible, but it would probably take a long time to standardise and govern.
The Thresh X2 [0] algorithm - for example - does away with the priority queue that is the bottleneck in Dijkstra. Instead, it iteratively runs a "label-correcting" routine over increasing search radii until the target is hit. I only learnt about this algorithm this year and can't find much about it online, although I've heard that it's sometimes used in videogames.
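For readers unfamiliar with the family, here's a minimal sketch of a generic label-correcting routine (the FIFO-queue Bellman-Ford variant sometimes called SPFA). To be clear, this is my own illustration of the general technique, not Thresh X2 itself, which I haven't found a written description of:

```python
from collections import deque

def label_correcting(adj, source):
    """Generic label-correcting shortest paths: no priority queue,
    just a FIFO of nodes whose distance label recently improved.
    adj[u] is a list of (v, weight) pairs, with an entry for every node."""
    dist = {u: float("inf") for u in adj}
    dist[source] = 0.0
    queue, in_queue = deque([source]), {source}
    while queue:
        u = queue.popleft()
        in_queue.discard(u)
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w   # correct v's label
                if v not in in_queue:   # schedule v for re-examination
                    queue.append(v)
                    in_queue.add(v)
    return dist
```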
Then there's Contraction Hierarchies [1], used by many modern routing engines (such as OSRM [2] or GraphHopper [3]). This involves a slow pre-processing step in which nodes are put into a hierarchy of "importance", allowing a modified query-time routine which is orders of magnitude faster than Dijkstra. Recent work on this has also resulted in query-time routines that eliminate priority queues entirely. However, this assumes a fairly static road graph over which many requests are run.
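The query side of plain contraction hierarchies is easy to sketch: two Dijkstra searches that only relax edges leading to higher-ranked nodes, meeting near the top of the hierarchy. A rough illustration, assuming `up_fwd` and `up_bwd` (upward adjacency lists including shortcuts) come out of the slow preprocessing step; refinements like stall-on-demand are omitted:

```python
import heapq

def ch_query(up_fwd, up_bwd, s, t):
    """Contraction-hierarchy query sketch: bidirectional Dijkstra in
    which each side only relaxes edges to higher-ranked nodes
    (shortcuts included); the best meeting node gives the distance."""
    def upward_dijkstra(up, start):
        dist, heap = {start: 0.0}, [(0.0, start)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue        # stale heap entry
            for v, w in up.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

    ds = upward_dijkstra(up_fwd, s)   # forward, upward-only, from s
    dt = upward_dijkstra(up_bwd, t)   # backward, upward-only, from t
    meet = set(ds) & set(dt)          # candidate meeting nodes
    return min((ds[v] + dt[v] for v in meet), default=float("inf"))
```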
In the linked algorithm, they seem to have an iteratively increasing radius and a routine that applies Bellman-Ford to identify "important" nodes. As I understand it, this decreases the number of nodes that need to be inserted into the priority queue; a rough sketch of that building block follows the reference links below.
[0] https://dlnext.acm.org/doi/10.1016/0167-6377%2887%2990053-8
[1] https://en.wikipedia.org/wiki/Contraction_hierarchies
[2] https://project-osrm.org/
[3] https://www.graphhopper.com/
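To make the building block concrete, here's a loose sketch of bounded Bellman-Ford relaxation from a frontier. This is my own illustration of the idea as I read the paper, not its actual pivot-selection routine, and it assumes `dist` already holds tentative distances for the frontier nodes:

```python
def bounded_relaxation(adj, dist, frontier, k):
    """Run k rounds of Bellman-Ford-style relaxation from the current
    frontier. Nodes whose shortest path from the frontier uses at most
    k edges get correct tentative distances with no priority-queue
    traffic; only the nodes still improving afterwards would need to
    be treated as "important" pivot candidates."""
    active = set(frontier)
    for _ in range(k):
        improved = set()
        for u in active:
            for v, w in adj[u]:
                if dist[u] + w < dist.get(v, float("inf")):
                    dist[v] = dist[u] + w
                    improved.add(v)
        active = improved
    return active
```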
Hope you don't mind but I took a little look at your posting history and saw this: https://news.ycombinator.com/item?id=41954120
I've been researching this lately, as we've recently implemented traffic patterns in our routing model and are just now working on live traffic updates. The easiest way to adapt our existing code looks like Customizable Contraction Hierarchies. There's a really nice review paper here: https://arxiv.org/abs/2502.10519. The technique is to apply nested dissection to build a "metric-independent" hierarchy based purely on connectivity, which gives a decent quality of contraction regardless of transit times. Is that what you mean by decomposing the network into "cells"?
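In case it helps, here's roughly what I mean by nested dissection. This is a crude sketch that uses a middle BFS layer as the separator and treats `nodes` as a set of node ids; real CCH implementations use proper partitioners such as FlowCutter, so everything here is illustrative:

```python
def bfs_layers(adj, start, nodes):
    """BFS restricted to `nodes`, returned as a list of layers."""
    layers, seen, frontier = [], {start}, [start]
    while frontier:
        layers.append(frontier)
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    nxt.append(v)
        frontier = nxt
    return layers

def nd_order(adj, nodes):
    """Crude nested-dissection elimination order over the unweighted
    (metric-independent) graph: nodes contracted first come first;
    separator nodes go last and so end up most important."""
    if len(nodes) <= 4:
        return list(nodes)
    layers = bfs_layers(adj, next(iter(nodes)), nodes)
    sep = set(layers[len(layers) // 2])   # middle BFS layer as separator
    order, remaining = [], nodes - sep
    while remaining:                      # recurse into each remaining part
        comp = set().union(*bfs_layers(adj, next(iter(remaining)), remaining))
        order += nd_order(adj, comp)
        remaining -= comp
    return order + list(sep)
```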
There are many similarities between this approach and customisable contraction hierarchies. The latter allows a particularly elegant query-time algorithm involving only a couple of linear sweeps, I suspect even in the many-to-many scenario.
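For anyone curious what those linear sweeps look like, here's a rough sketch of an elimination-tree CCH query. The names are illustrative: `parent` maps each node to its elimination-tree parent (None at the root), the upward graphs come out of customisation, and the whole thing relies on every upward neighbour of a node being one of its tree ancestors:

```python
def tree_path(parent, v):
    """Nodes from v up to the root of the elimination tree."""
    path = []
    while v is not None:
        path.append(v)
        v = parent[v]
    return path

def cch_query(up_fwd, up_bwd, parent, s, t):
    """CCH query sketch: no priority queue, just two linear sweeps up
    the elimination tree, one from each endpoint."""
    inf = float("inf")
    dist_s = {v: inf for v in parent}
    dist_t = {v: inf for v in parent}
    dist_s[s] = dist_t[t] = 0.0
    for v in tree_path(parent, s):          # forward sweep from s
        for w, c in up_fwd.get(v, []):
            dist_s[w] = min(dist_s[w], dist_s[v] + c)
    for v in tree_path(parent, t):          # backward sweep from t
        for w, c in up_bwd.get(v, []):
            dist_t[w] = min(dist_t[w], dist_t[v] + c)
    # only common ancestors of s and t end up with both labels finite
    return min(dist_s[v] + dist_t[v] for v in parent)
```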
I'm pretty confident they're heavily (if not fully) relying on LLM-generated text. Maybe they're drafting it themselves first and getting an LLM to refine. I found some recent articles by the same author which gave me the same reaction:
https://medium.com/@kanishks772/computer-scientists-just-bro...
and:
https://medium.com/@kanishks772/why-your-next-gps-might-use-...
They seem to have a process for grabbing a research paper, getting an LLM to summarise it, adding AI-generated images and pseudo code, and publishing it. There are lots of parallels in describing fundamental breakthroughs overturning decades of conventional wisdom. And it has the same clipped bullet lists with sound bite phrases, and slideshow style headings. It's extremely reminiscent of what happens when I ask Claude to give me a summary of something.
As a general note, I do think it best to take any article written by AI with a pinch of salt, much like you should closely review any code it writes. It's not at the level of a human expert, but it's trained to convince you that it is.
It should make no difference whether it's written by AI or not. One should evaluate and criticize content without regard to who or what wrote it. Conversely, the fact that a human wrote something should not, and does not, make it superior.
People like you shamelessly attempt to suppress a broad spectrum of writing whenever you find something to disagree with, by blaming it on AI. That's what's evil about it.
> It should make no difference it's written by AI or not.
It absolutely should.
It is plain wrong to make an unsubstantiated and unproven accusation, and even if it were true, it's irrelevant to the topic at hand. Moreover, it demonstrates an unjustified anti-AI bias, which is a separate problem.
Some of us think that anything created by an AI should be labeled as such. Is that an inherently evil belief?
It is a little dumbfounding to compare the seeming emotionality and outrage of your posts with the comment that spurred them.
3 more comments available on Hacker News