A Generalized Algebraic Theory of Directed Equality
Posted 4 months ago · Active 3 months ago
jacobneu.phd · Tech · story
calm · mixed
Debate: 40/100
Key topics
Type Theory
Programming Languages
Category Theory
The post shares a PhD thesis on a generalized algebraic theory of directed equality, sparking discussion on the relevance and implications of the research for programming languages and computation.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion
First comment: 4d after posting
Peak period: 14 comments (90-96h window)
Avg / period: 8.7 comments
Comment distribution: 26 data points (chart based on 26 loaded comments)
Key moments
01. Story posted: Sep 18, 2025 at 1:22 PM EDT (4 months ago)
02. First comment: Sep 22, 2025 at 3:55 AM EDT (4d after posting)
03. Peak activity: 14 comments in the 90-96h window (hottest window of the conversation)
04. Latest activity: Sep 22, 2025 at 6:07 PM EDT (3 months ago)
ID: 45292387 · Type: story · Last synced: 11/20/2025, 1:35:57 PM
> In our biggest launch to date, Google Registry is adding eight new extensions to the internet: .dad, .phd, .prof, .esq, .foo, .zip, .mov and .nexus, now publicly available at a base annual price through your registrar of choice.
> The .phd, .prof, and .esq TLDs are perfect for professionals who want to show off their credentials and expertise.
https://www.registry.google/announcements/introducing-8-new-...
Though it's notable [1] that only in America does "esquire" imply any credentials or expertise.
Tonnes of non-special boys in the UK get "esquire" as a suffix (maybe some go on to become attorneys, but unrelated :)
[1]: EDIT: on second thought, it's not "notable" at all; perhaps "interesting" was the word I sought.
http://virus.exe.zip/
Programming languages, like all languages, exist for communication between humans. A better programming language for a particular task is one that lowers friction needed to communicate ideas about particular programs.
The power of modern languages comes from us quickly being able to understand programs. All languages rely on convention and assumption. The vast majority of bugs don't come from under-specification, but rather poor/miscommunication. Under-specification might explain why you wasted an hour reading source code to fix a minor problem; poor communication explains why we haven't yet automated every conceivably meaningful task computers are already physically capable of.
Reading mathematical sentences (types, basically) is like step one in maturing from baby to toddler in math.
To whom? Fully formalized mathematics is extremely dense and difficult, and few mathematicians actually practice it, especially for complex problems. Just because the symbols are theoretically unambiguous doesn't mean that any human can actually understand what they mean, or whether they accurately encode some less formal system you were hoping to study.
And this becomes much more problematic once you go from the realm of pure mathematics into other realms, like engineering or economics or anything else that seeks to model real-world phenomena. A model can be perfectly self-consistent and still be a very bad or even useless model of a real-world thing. And it can be highly non-trivial to figure out to what extent and under what constraints a model actually matches the phenomenon you intended to model.
For examples of both kinds of issues, look at theoretical physics. Lots of major physics theories are only partly formalized, and rely on unproven or sometimes even known to be unsound mathematical foundations (like the Dirac delta at the time it was first introduced, or like renormalization in quantum physics). And, of course, there are still open questions on how well these theories actually model various physical phenomena (like GR and black hole curvature singularities).
This doesn't work for everyone though; some people find it easier to stay grounded in specific models.
Since Lean code isn't the same as natural language or even unobservable human intention, there might always be a mismatch between what the author of a Lean proof claims (or believes) it proves, and what it actually proves.
Like if you have some f: R -> R, a: R in context and write ∀ϵ>0, ∃δ>0 s.t. ∀x: R, |x−a|<δ ⟹ |f(x)−f(a)|<ϵ, no one is confused about what you mean.
If you instead write ∃δ>0 s.t. ∀ϵ>0, ∀x: R, |x−a|<δ ⟹ |f(x)−f(a)|<ϵ, you've written something different. Everyone in the field knows this, and actually it's even more obvious when you look at types in something like Lean: roughly ϵ -> (δ, h) vs. (δ, ϵ -> h).
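A minimal Lean sketch of those two shapes (assuming Lean 4 with Mathlib for ℝ and |·|; the definition names are illustrative, not taken from the thesis or the thread):

```lean
import Mathlib

-- f and a play the roles of the f : R -> R and a : R "in context" above.
variable (f : ℝ → ℝ) (a : ℝ)

-- ∀ ε, ∃ δ, ... : δ may depend on ε (ordinary continuity of f at a).
-- A proof term is roughly a function ε ↦ (δ, h).
def ContinuousAtPt : Prop :=
  ∀ ε > 0, ∃ δ > 0, ∀ x : ℝ, |x - a| < δ → |f x - f a| < ε

-- ∃ δ, ∀ ε, ... : one δ must work for every ε (a much stronger claim).
-- A proof term is roughly a pair (δ, ε ↦ h).
def SwappedQuantifiers : Prop :=
  ∃ δ > 0, ∀ ε > 0, ∀ x : ℝ, |x - a| < δ → |f x - f a| < ε
```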
For a very simplistic example, say I ask for a trusted program that can add up all of my expenses. The person who writes the spec then defines this as `add_expenses xs = reduce - xs`. They then write a program and prove that it matches this specification. The program will be "bug free" in the sense that it will perfectly match this spec. But it will certainly not be what I asked for, so from my point of view it would be an extremely buggy program.
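A minimal Lean sketch of that scenario (the function names are hypothetical, and the comment's `reduce - xs` pseudo-spec is rendered as a left fold with subtraction):

```lean
-- The spec itself subtracts where addition was intended, so a program
-- proven against it is "bug free" and still not what was asked for.
def addExpensesSpec (xs : List Int) : Int :=
  xs.foldl (· - ·) 0   -- the spec bug: subtraction instead of addition

def addExpensesImpl (xs : List Int) : Int :=
  xs.foldl (· - ·) 0   -- faithfully implements the (wrong) spec

-- The correctness proof is genuine and goes through by definitional equality...
theorem impl_meets_spec (xs : List Int) :
    addExpensesImpl xs = addExpensesSpec xs := rfl

-- ...yet the result is not a sum of expenses:
#eval addExpensesImpl [10, 20, 30]   -- -60, not 60
```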
For a more subtle example, consider encryption algorithms. It's generally fairly easy to specify the encryption scheme, and then check that a program implements that exact algorithm. And yet, it's very likely that a program built only to such a spec will actually leak secrets through various side channels in practice (key-dependent timing, key-dependent CPU usage, and others). Again, it would be fair both to say that the implementation is bug-free (it perfectly matches the spec) and that it is deeply buggy (it doesn't actually protect your secrets).
The system also will only allow that proof to be used for the proposition you have declared, so even if you somehow declared the wrong thing and proved it without realising, that wouldn't affect the consistency of the system at all.
> I feel that massive effort is being misdirected because of some fundamental misunderstandings
The misunderstanding here is with them and what they're looking at. The author doesn't discuss business problems, but lists this as a motivation in their "basic" summary:
> Category theory is a powerful and highly-general branch of mathematics, which is applied for all kinds of purposes. It has been formalized in several existing proof assistants, but only as a result of a ton of difficult work. The type system of (1,1)-directed type theory is specifically optimized for category theory; it manages to automate a lot of the difficult "bureaucracy" of formalizing category-theoretic results, so it doesn't require as much work.
i.e. it makes certain proofs easier.
That is manifestly false. Machine language and Brainfuck are both programming languages, and they are not designed for communication between humans. To the contrary, Brainfuck is deliberately designed to confuse humans.
Programming languages exist to describe processes with sufficient precision that they may be carried out by a machine. You might want a programming language to also serve the secondary purpose of communicating something to a human that reads it, but the vast majority of extant languages are not designed with this goal in mind.
> The power of modern languages comes from us quickly being able to understand programs.
Again, no. The power of modern languages comes from providing higher-level abstractions that are closer to human thought processes. Modern programming languages allow you to think directly in terms of things like numbers, arrays, strings, etc. rather than forcing you to think of everything in terms of raw bits. There are many languages that are notorious for making code much easier to write than to read.
> The vast majority of bugs don't come from under-specification, but rather poor/miscommunication.
No, the vast majority of bugs come from a mismatch between the code and the programmer's mental model of what the code does, i.e. the process that the code describes. You can have bugs in code written by a single human. There is no possibility of miscommunication because there is no communication. There is just the code and the programmer's brain.
Why? Irreversibility is what prevents computers from, even in theory, requiring no energy for their computation.
I’d rather truly have reversible computation ;)
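For reference, the energy cost alluded to here is Landauer's bound: irreversibly erasing a single bit dissipates at least

$$E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\ \mathrm{J} \quad (T = 300\ \mathrm{K}),$$

while logically reversible operations are not subject to this lower bound, which is what makes reversible computation attractive in principle.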
This is a brilliant idea and very well executed. I almost did not click the link because I would put myself into the 'Intermediate' (some familiarity) bucket and most blog posts are either too basic or too advanced for me. Glad I clicked.