Always Bet on Text (2014)
Key topics
A 2014 post by Graydon Hoare, creator of Rust, on the supremacy of text as a medium for information transfer has resurfaced with remarkable relevance. Commenters overwhelmingly agree that text is the superior format for conveying complex ideas, and many share personal anecdotes about the limitations of audio and video for knowledge transfer. While some poked fun at the text-centric views, others enthusiastically endorsed them, citing the ability to quickly comprehend written content and to share ideas across space and time. As one commenter astutely observed, if all you think about is information best represented as text, it's no surprise that your examples will be text-heavy, highlighting the self-reinforcing nature of the preference.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: N/A
- Peak period: 66 comments in 0-6h
- Avg / period: 17.8
Based on 160 loaded comments
Key moments
- Story posted: Dec 26, 2025 at 6:09 PM EST (6d ago)
- First comment: Dec 26, 2025 at 6:09 PM EST (0s after posting)
- Peak activity: 66 comments in 0-6h, the hottest window of the conversation
- Latest activity: Dec 29, 2025 at 12:28 PM EST (4d ago)
Videos, podcasts... I have them transcribed because, even though I like listening to music, podcasts are best read as text for speed of comprehension (at least for me; I don't know about others).
Not sure why that is either, because I see people extolling the virtues of podcasts, saying that they are able to multitask (e.g. driving, walking, eating dinner) and still hear the message, which leaves me aghast.
I had a 53-minute (each way) commute on the train, and I found it perfect for reading papers or learning skills. I was always amazed that the background noise would disappear and I could get lost in the text.
Best study time ever.
To paraphrase the overused ol' Sapir-Whorf: if all you think about is information that can be best represented as text, all your examples will be ones text wins at.
I can read the thoughts of a philosopher who lived on literally the other side of the world, several thousand years ago.
I'm unsure of, but would love to know, any other medium capable of that
My only counter would be: when you and I look at them, do we get the same words? (But I suppose you could also argue that for a book, poem, etc.)
2021 (570 points, 339 comments) https://news.ycombinator.com/item?id=26164001
2015 (156 points, 69 comments) https://news.ycombinator.com/item?id=10284202
2014 (355 points, 196 comments) https://news.ycombinator.com/item?id=8451271
The 1% where something else is better?
Youtube videos that show you how to access hidden fasteners on things you want to take apart.
Not that I can't get absolutely anything open, but sometimes it's nice to be able to do so with minimal damage.
To the extent that that could work, I would imagine that I, personally, would be happy reading the textual description instead of watching the video, and for me, we'd now be even closer to text wins 100% of the time.
In other words, it's not that you _can't_ give excellent descriptions that would obviate the need for video, it's just that people _don't_, even, or perhaps even especially, when they think they do.
If someone writes text that creates a video that shows exactly how to get something apart, then _presumably_ they also watch the video to make sure it works.
So the video becomes a debugging tool for their instructions. Perhaps not as good as watching 100 people do it, but maybe even better in some ways.
So the video codec you describe could be a useful tool to help create more programmers.
https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...
You may be right, although, of course, current LLMs often do the right thing with "about 3/5ths of the way."
OTOH, as someone who has done CAD and schematic drawings by programming, I am not 100% convinced about the inevitability of unreadability.
In any case, though, the bar is not really whether any human can interpret the text, but whether the average human will interpret the text or video faster, and here, to your point, yes, the video probably still wins handily.
The closest analogy I can think of is animated math gifs like these:
https://en.wikipedia.org/wiki/User:LucasVB/Gallery
Which can be a huge aid in learning.
But this leads to another conundrum. Where do animated GIFs end and video begin? Because I could see a simple line-drawing style animated GIF being sufficient for most purposes.
You can store everything as a string; base64 for binary, JSON for data, HTML for layout, CSS for styling, SQL for queries... Nothing gets closer to the mythical silver-bullet that developers have been chasing since the birth of the industry.
The holy grail of programming has been staring us in the face for decades, and we still keep inventing new data structures and complex tools to transfer data... All to save something like 30% bandwidth, an advantage which is almost fully cancelled out once you GZIP the base64 string, which most HTTP servers do automatically anyway.
Same story with Protobuf. All this complexity is added to make everything binary. Towards what goal? To save 20% bandwidth, which, again, is an advantage lost after GZIP... To avoid the negligible added CPU cost of deserialization, you completely lose human readability.
You can still stream the base64 separately and reference it inside the JSON somehow like an attachment. The base64 string is much more versatile.
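That gzip claim is easy to measure. A minimal sketch (the envelope shape and the 64 KiB random payload are made up for illustration; substitute real data):

```python
import base64, gzip, json, os

# A stand-in binary payload; swap in a real file to test the claim above.
raw = os.urandom(64 * 1024)

# The "attachment" pattern: binary carried as base64 inside a JSON envelope.
envelope = json.dumps({"type": "blob", "data": base64.b64encode(raw).decode("ascii")})

# Compare what actually crosses the wire once the server gzips both.
print("raw bytes, gzipped:      ", len(gzip.compress(raw)))
print("base64-in-JSON, gzipped: ", len(gzip.compress(envelope.encode("ascii"))))
```

Expect the base64 envelope to come out somewhat larger even after gzip; how much depends on the payload.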
There's nothing special about "text" or binary here. You can absolutely put binary inside other binary; you use a symbol that doesn't appear inside the binary, much like you do for text.
You use a divider (the way " is for JSON) and a prearranged way to keep that symbol from appearing inside the inner binary (the same approach that works for text works here).
What do you think a zip file is? They're not storing compressed binary data as text, I can tell you that.
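For what it's worth, zip-style containers sidestep escaping entirely by length-prefixing. A toy sketch of that framing (not any real format's layout):

```python
import struct

def frame(blob: bytes) -> bytes:
    # Nest arbitrary binary by prefixing its length (4-byte big-endian here),
    # so no byte value inside the payload ever needs escaping.
    return struct.pack(">I", len(blob)) + blob

def unframe(buf: bytes) -> tuple[bytes, bytes]:
    # Return (payload, rest); the length field says exactly where to stop.
    (n,) = struct.unpack(">I", buf[:4])
    return buf[4:4 + n], buf[4 + n:]

packed = frame(b"\x00\xffany bytes at all") + frame(b'including " quotes')
first, rest = unframe(packed)
```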
Using base64 means that you must encode and decode it, but binary data directly means that is unnecessary. (This is true whether or not it is compressed; if it is compressed then you must decompress it, but that is independent of whether or not you must decode base64.)
AFAICT, the binary format of a protobuf message exists strictly to provide a strong forward/backward compatibility guarantee. If not for that, the text proto format and even the JSON format are both versatile, and commonly used as configuration languages (i.e. when humans need to interact with the file).
You may think, "I don't need that," but once you've got more than a couple microservices, you'd be surprised how many headaches this type of compatibility issue can cause. You may think, "I can do that with json," but can you do exactly the same version of it across 4 or 5 different languages while maintaining a single source of truth for each message type's schema? At that point, you're just rebuilding Protobuf.
Afaik the only other tool that does what Protobuf does is Avro, though I haven't used it. I have tried to use json-schema for this, but that's not what it was made for. The schema evolution story is worse, and the codegen isn't as good.
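A rough sketch of the mechanism behind that compatibility guarantee: on the wire, every protobuf field carries a number and a wire type, so a decoder can step over field numbers it has never heard of. A toy reader (ignores deprecated group types and error handling):

```python
def read_varint(buf: bytes, i: int) -> tuple[int, int]:
    # Base-128 varint: 7 payload bits per byte; the high bit means "more follows".
    value = shift = 0
    while True:
        b = buf[i]; i += 1
        value |= (b & 0x7F) << shift
        if not b & 0x80:
            return value, i
        shift += 7

def fields(buf: bytes):
    # Yield (field_number, wire_type, payload) for each field in a message.
    # Unknown field numbers are skipped the same way as known ones, which is
    # why old readers tolerate messages produced by newer schemas.
    i = 0
    while i < len(buf):
        key, i = read_varint(buf, i)
        num, wtype = key >> 3, key & 7
        if wtype == 0:                      # varint
            payload, i = read_varint(buf, i)
        elif wtype == 1:                    # fixed 64-bit
            payload, i = buf[i:i + 8], i + 8
        elif wtype == 2:                    # length-delimited (bytes, strings, submessages)
            n, i = read_varint(buf, i)
            payload, i = buf[i:i + n], i + n
        elif wtype == 5:                    # fixed 32-bit
            payload, i = buf[i:i + 4], i + 4
        else:
            raise ValueError(f"unsupported wire type {wtype}")
        yield num, wtype, payload
```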
My old 1995 MS thesis was written in Lotus Word Pro and the last I looked, there was nothing to read it. (I could try Wine, perhaps. Or I could quickly OCR it from paper.) Anyway, I wish it were plain text!
You could turn that around & say that, for the negligible human cost of using a tool to read the messages, your entire system becomes slower.
After all, as soon as you gzip your JSON, it ceases to be human-readable. Now you have to un-gzip it first. Piping a message through a command to read it is not actually such a big deal.
Many large-scale systems are in the same camp as you: text files flow around their batch processors like crazy, but there's absolutely no flexibility or transparency.
JSON and/or base64 are better targeted at either low-volume or high-latency systems. Once you hit a scale where optimizing a few bits directly saves a significant amount of money, self-labeled fields are just out of the question.
You know the rule, "pick 2 out of 3". Converting "123" would be a pain in the arse for a CPU, if it had one. Oh, and hexadecimal is even worse, BTW; octal is the most favorable case (among "common" bases). (A sketch follows the links below.)
Flexibility is a bit of a problem too - I think people have generally walked back from Postel's law [1], and text-only protocols are big "customers" of it because of their extreme variability. When you end up using regexps to filter inputs, your solution has become a problem [2][3].
30% more bandwidth is absolutely huge. I think it is representative of certain developers who have been spoiled by grotesquely overpowered machines and have no idea of the value of bytes, bauds, and CPU cycles. HTTP/3 switched to binary for even less than that.
The argument that you can make up for text's increased size by compressing base64 is erroneous; you save bandwidth and processing power on both sides if you can do without compression. Also, with compressed base64 you've already lost readability on the wire (or off the wire, since comms are usually encrypted anyway).
[1] https://en.wikipedia.org/wiki/Robustness_principle
[2] https://blog.codinghorror.com/regular-expressions-now-you-ha...
[3] https://en.wikipedia.org/wiki/ReDoS
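To make the number-base point above concrete: each hex digit maps to a fixed 4-bit group (a shift and an OR), while decimal needs a multiply per digit. A toy comparison (real parsers also validate their input):

```python
def parse_hex(s: str) -> int:
    # Each hex digit is exactly 4 bits: shift and OR, no multiplication.
    n = 0
    for c in s:
        n = (n << 4) | "0123456789abcdef".index(c.lower())
    return n

def parse_decimal(s: str) -> int:
    # Decimal digits don't align to bit boundaries: multiply-by-10 per digit.
    n = 0
    for c in s:
        n = n * 10 + (ord(c) - ord("0"))
    return n

assert parse_hex("7b") == parse_decimal("123") == 123
```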
Bret Victor's point is why is this not also the approach we use for other topics, like engineering? There are many people who do not have a strong symbolic intuition, and so being able to tap into their (and our) other intuitions is a very powerful tool to increase efficiency of communication. More and more, I have found myself in this alternate philosophy of education and knowledge transmission. There are certainly limits—and text isn't going anywhere, but I think there's still a lot more to discover and try.
[1] https://dynamicland.org/2014/The_Humane_Representation_of_Th...
Bret Victor's work involves a ton of really challenging heavy lifting. You walk away from a Bret Victor presentation inspired, but also intimidated by the work put in, and the work required to do anything similar. When you separate his ideas from the work he puts in to perfect the implementation and presentation, the ideas by themselves don't seem to do much.
Which doesn't mean they're bad ideas, but it might mean that anybody hoping to get the most out of them should understand the investment that is required to bring them to fruition, and people with less to invest should stick with other approaches.
It renders the term "text" effectively meaningless.
Mostly this is straightforwardly correct. Notes on a staff are a textual representation of music.
There are some features of musical notation that aren't usually part of linguistic writing:
- Musical notation is always done in tabular form - things that happen at the same time are vertically aligned. This is not unknown in writing, though it requires an unusual context.
- Relatedly, sometimes musical notation does the equivalent of modifying the value of a global variable - a new key signature or a dynamic notation ("pianissimo") takes effect everywhere and remains in effect until something else displaces it. In writing, I guess quotation marks have similar behavior.
- Musical notation sometimes relates two things that may be arbitrarily far apart from each other. (Consider a slur.) This is difficult to do in a 1-D stream of symbols.
> although, one can argue that musical notation is not able to adequately preserve some aspects of musical performance
Nothing new there; that's equally true of writing in relation to speech.
Otherwise it would be hard to include other types of obvious text, including completely mainstream ones such as Arabic. They are all strings of symbols intended for humans to read.
Then many a dictionary must be unreasonable [0]:
Musical notes do not form words, and therefore are not text. (And no, definition 1 does not refer to musical notes.)
[0] e.g. http://dict.org/bin/Dict?Form=Dict2&Database=*&Query=text
Amen to that. Even Dynamicland has some major issues with GC pauses and performance.
I do try to put my money where my mouth is, so I've been contributing a lot to folk computer[1], but yeah, there's still a ton of open questions, and it's not as easy as he sometimes makes it look.
[1] https://folk.computer/
In terms of technical details, we just landed support for multithreaded task scheduling in the reactive database, so you can write something like "When /someone/ wishes $::thisNode uses display /display/ with /...displayOpts/ {" and have your rendering loop block the thread. Folk will automatically spin up a new thread when it detects that a thread is blocking, in order to keep processing the queue. Making everything multithreaded has made synchronizing rendering frames a lot trickier, but recently Omar (one of the head devs) made statements atomic, so there is atomic querying for statements that need it.
In terms of philosophy, Folk is much more focused on integration, and comes from the Unix philosophy of everything as text (which I still find amusingly ironic when the focus is also a new medium). The main scripting language is Tcl, which is sort of a child of Lisp and Bash. We intermix html, regex, js, C, and even some Haskell to get stuff done. Whatever happens to be the most effective ends up being what we use.
I'm glad that you mention that the main page is unhelpful, because I hadn't considered that. Do you have any suggestions on what would explain the project better?
[1] https://dynamicland.org/
Text is certainly not the best at all things and I especially get the idea that in pedagogy you might want other things in a feedback loop. The strength of text however is its versatility, especially in an age where text transformers are going through a renaissance. I think 90%+ of the time you want to default to text, use text as your source of truth, and then other mediums can be brought into play (perhaps as things you transform your text into) as the circumstances warrant.
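One concrete version of "text as source of truth, other media derived from it": keep a diagram as plain-text Graphviz DOT and render images on demand. A minimal sketch (assumes the Graphviz `dot` tool is installed; the graph and file names are made up):

```python
import subprocess

# The source of truth is plain text; the image is a derived artifact.
dot_source = """digraph media {
    text -> diagram;
    text -> audio [label="TTS"];
    text -> slides;
}
"""

with open("media.dot", "w") as f:
    f.write(dot_source)

# Re-render whenever the text changes; diffs and review happen on the .dot file.
subprocess.run(["dot", "-Tpng", "media.dot", "-o", "media.png"], check=True)
```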
> video's inferior to text for communicating ideas efficiently
Depends on the topic tbh. For example, YouTube has had an absolute explosion of car repair videos, precisely because video format works so well for visual operations. But yes, text is currently the best way to skim/revisit material. That's one reason I find Bret's website so intriguing, since he tries to introduce those navigation affordances into a video medium.
> The strength of text however is its versatility, especially in an age where text transformers are going through a renaissance. I think 90%+ of the time you want to default to text, use text as your source of truth, and then other mediums can be brought into play (perhaps as things you transform your text into) as the circumstances warrant.
Agree, though not because of text's intrinsic ability, but because its ecosystem stretches thousands of years. It's certainly the most pragmatic choice of 2025. But, I want to see just how far other mediums can go, and I think there's a lot of untapped potential!
I'd compare its message to a "warning!" sign. It's there to make you stop and think about our computing space; after that, it's up to you to act or not on how you perceive it.
That's totally wishy-washy, so it might not resonate, but after that I went to check more of what dynamicland is doing and sure enough they're doing things that are completely outside of the usual paradigm.
A more recent video explaining the concept in a more practical and down to earth framing: https://youtu.be/PixPSNRDNMU
(here again, reading the transcript won't nearly convey the point. Highly recommend watching it, even sped up if needed)
I think it's naïve to claim there's a singular best method to communicate. Text is great, especially since it is asynchronous. But even the OP works off bad assumptions about verbal language being natural and not needing to be taught. And there's a simple fact: when near another person, we strongly prefer speaking to writing. And when we can mix modes, we like to. There's an art to all this, and I think wanting a singular mode is more a desire for simplicity than a desire to be optimal.
Of course, not all graphs are equally information dense, and some are only used for decorative purposes more than actually conveying information. But in the general case, and especially when used well, graphs convey much more information at a glance than a short text description could.
No, graphs do not need to come from text. I've frequently hand-generated graphs as my means of recording experimental output. This is a common method when high precision is not needed (because your uncertainty level is the size of your markers). But that's true for graphs in general anyway.
Besides, plots aren't the only types of graphs. Try network graphs.
Besides, graphs aren't the only visual communication of data.
I'll give you an even more obvious one: CAD. Sure, you can do that in text... but it takes much more room to do and magnitudes more time to interpret. So much so that everyone is going to retranslate it into a picture. Hell, I'll draw on paper before even pulling up the software and that's not uncommon.
Fascinating example for me. I do CAD... using text! My only experience with it is programmatic in openscad. We check the visualization, but only on output of the final product. For me it's dramatically easier to work with. That may be a personal defect but it's also consistent. Underneath the rendering is always data, which is text, markup, but strings of fundamental data.
And in science it's not a stretch at all that numbers come first. I'll argue you're reaching. Today no one is drawing their numbers from experiments directly on a graph. They record them digitally. In textual form typically, and then render them visually to obtain generic understanding. But also there, in the end, your conclusions (per tradition) need to be point estimates with error bounds expressible in concise textual terms. You may obtain them from looking at images but the hard truth is numerical, digital, textual.
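For readers who haven't seen programmatic CAD: the model is a parameterized text program, so changing a dimension means editing one number and re-rendering. A hypothetical flavor (Python emitting OpenSCAD source; the part and its dimensions are invented):

```python
def plate_scad(width=40.0, height=20.0, thickness=3.0, hole_d=4.0) -> str:
    # A mounting plate with two holes, expressed as OpenSCAD source text.
    return f"""difference() {{
    cube([{width}, {height}, {thickness}]);
    translate([{width / 4}, {height / 2}, -1])
        cylinder(h={thickness + 2}, d={hole_d}, $fn=32);
    translate([{3 * width / 4}, {height / 2}, -1])
        cylinder(h={thickness + 2}, d={hole_d}, $fn=32);
}}"""

# Write it out and open in OpenSCAD; tweak a parameter and re-run to iterate.
with open("plate.scad", "w") as f:
    f.write(plate_scad(width=60))
```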
Part of this might be OpenSCAD specifically. It is CSG based, which is really not ideal, making it hard to add things like chamfers and fillets to your model. Most OpenSCAD models I come across for 3D printing have a crude look probably because this is so hard.
But part of it is just that text for most people just isn't the right representation in this case. (If you look at the relative usage of parametric CAD to textual CAD on sites for 3D models you will see that I'm right. Also, look at what approach commercial packages offer.)
Can you tell me more about the pipeline? Are you really starting from scratch by programming? You don't do any sketching first? I'm really having a hard time imagining doing anything reasonably complicated with this method. I'll admit that there are some advantages like guaranteeing bounds but there's so much that seems actually harder to do that way.
Like I said, it is contextually dependent. If you're recording with digital equipment to a computer, then yeah, it's just easier to record that way and dump into a plot. But if you don't have that, then no. And again, even recording by hand, it is still dependent. Some data is naturally image data (pictures?). Some data is naturally in other modalities (chemical reactions? Smell? Texture? Taste?). Yes, with digital recording equipment you can argue that this is all text, but at that point I'd argue you're being facetious, as everything is text by that definition.
Here I think you have a fundamental misunderstanding and are likely limiting yourself based on your experience. First off, not every measuring device is digital. That alone makes the claim downright false. And pretending all measurements are digital is just deceptive or naive.
Second, and I cannot stress this enough: *every single measurement is a proxy* to the thing you intend to measure.
You can't even measure a damn meter directly. You can measure distance through reference length that is an approximation of a standard distance (aka a ruler). You can measure distance through reference to an approximation of time and through the use of some known velocity, such as the speed of light through a given medium (approximating time, approximating c in the medium, approximating the medium). And so on.
What you cannot do is measure a meter directly.
And most of the things we're trying to measure, model, and approximate in modern science are far more abstract than a standard unit!
The idea that the ground truth is textual is ridiculous. That would only be true on the condition that the universe itself is running on a digital computer. Despite the universe being able to do computation, I see little reason to believe it is digital.
That does not mean that a CAD drawing itself is text. It is an artifact, produced from text. Using your argument, you could just as easily argue that all computer code is text, and I don't think that that is a useful redefinition of the word "text".
At the ripe old age of 30-something, I see the last 50 years of science-adjacent software, like LaTeX, as a complete waste of time.
We should have spent that time building a better text notation instead of trying to reimplement the happy accidents that let people with parchment and quill communicate with each other.
Often you need a language in the first place to even be interested in the graph at all. Graphs are worth a thousand words if you are willing to throw out any data that:
- is higher than 3D
- requires control flow or recursion to explain
Of course you can have diagrams systems that are languages e.g. Feynman Diagrams (a sort of DSL for quickly reading QM math). I would hold this up as a much greater achievement of human ingenuity than r/dataisbeautiful spam. But the differentiation here isn't between text and graphs, but between languages and glyphs.
Dynamicland is pushing the state-of-the-art here too. I think you'd really like their essay "The Library"[1].
[1] https://dynamicland.org/2019/The_Library.pdf
An example of where text falls short: if I said "be sure to rainbow your wrist when jumping in that passage," it wouldn't make any sense unless someone had seen an explanation. I suppose I could try to explain "when moving higher, make an upwards arc, and loop around at the end, to prevent jerking your wrist around when going back and forth," but even then that's still way too ambiguous, since there's also a certain way you need to pivot your wrist so you can hold onto the upper chord as long as possible. It's just much easier to demonstrate and see if the student did it correctly.
What separates text from images is that text is symbolic while images are visceral or feelings-based. Likewise, text falls short when it comes to the feeling you get when seeing an image. Try to put into text what you feel when you look at Norman Rockwell's Freedom of Speech, or at a crappy 0.5MB picture of your daughter taken on an iPhone 3. Hard, isn't it? Visual and symbolic are not isomorphic systems.
Examples of symbolic systems like text are sheet music and Feynman diagrams. You would be hard pressed if you tried to convey even 2KB of sheet music in a book.
- I want to learn how to climb rock walls
- I want to learn how to throw a baseball
- I want to learn how to do public speaking
- I want to learn how to play piano
- I want to make a fire in the woods
- I want to understand the emotional impact of war
- I want to be involved in my child's life
In text format no less
Tools that are mostly text or have text interfaces? Greatly improved by LLM.
So all of those rich multimedia and their players/editors really need to add text representations.
When I first started using Linux I used to make fun of people who were stuck on the command line, but now pretty much everything I do is a command line program (using NeoVim and tmux).
[1] Yes, obviously with updates but the point more or less still stands.
I think the obsession with text comes down to two factors: conflating binary data with closed standards, and poor tooling support. Text implies a baseline level of acceptable mediocrity for both. Consider a CSV file with millions of base64-encoded columns and no column labels. That would really not be any friendlier than a binary file with an openly documented format and a suitable editing tool, e.g. SQLite.
Maybe a lack of fundamental technical skills is another culprit, but binary files really aren't that scary.
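The SQLite aside is easy to demonstrate: the file on disk is binary, but the format is openly documented and the tooling is first-class, so it's arguably friendlier than a CSV of unlabeled base64 columns. A minimal sketch (table layout and file name invented):

```python
import sqlite3

# Binary on disk, but inspectable with SQL instead of base64 soup.
con = sqlite3.connect("attachments.db")
con.execute("CREATE TABLE IF NOT EXISTS blobs (name TEXT PRIMARY KEY, data BLOB)")
con.execute(
    "INSERT OR REPLACE INTO blobs VALUES (?, ?)",
    ("logo.png", b"\x89PNG\r\n\x1a\n placeholder bytes"),
)
con.commit()

for name, size in con.execute("SELECT name, length(data) FROM blobs"):
    print(name, size, "bytes")
```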
Text is human readable writing (not necessarily ASCII). It is most certainly not just any old bytes the way you are saying.
It makes more sense to consider readability or comprehensibility of data in an output format; text makes sense for many kinds of data, but given a graph, I'd rather view it as a graph than as a readable text version.
And if you have a way to losslessly transform data between an efficient binary form, readable text, or some kind of image (or other format), that's the best of all.
I suppose open standards have slowly been winning with opus and AV1, but there's still so many forms of interactions that have proprietary or custom interfaces. It seems like anything that has a stable standard has to be at least 20 years old, lol.
Text is like a complexity funnel (analogous to a tokenizer) that everyone shares. Its utility is derived from its compression and its standardization.
If everyone used binary data with their own custom interpretation schema, it might work better for that narrow vertical, but it would not have the same utility for LLMs.
Indeed, there is a galactic civilization centered around binary communication: https://memory-alpha.fandom.com/wiki/Bynar
Our image models got good when we started making shared image and text embedding spaces. A picture is worth 1000 words, but 1000 words about millions of images are what allowed us to teach computers to see.
Is doing dozens of back-and-forths to explain what we actually want, while the model burns an inordinate amount of processing power at each turn, a model of efficiency or effectiveness?
It might be convenient and allow for exploration, and the cost might be worth it in some cases, but I wouldn't call it "effective".
This also invalidates the "efficiency" question, since the cost of doing those tasks without LLMs is infinity (i.e. you can pay as much as you want, a dolphin is never going to replace the LLM).
I'm a linguist, and I've worked in endangered languages and in minority languages (many of which will some day become endangered, in the sense of not having native speakers). The advantage of plain text (Unicode) formats for documenting such languages (as opposed to binary formats like Word used to be, or databases, or even PDFs) is that text formats are the only thing that will stand the test of time. The article by Steven Bird and Gary Simons, "Seven Dimensions of Portability for Language Documentation and Description," was the seminal paper on this topic, published in 2002.
I've given later conference talks on the topic, pointing out that we can still read grammars of Greek and Latin (and Sanskrit) written thousands of years ago. And while the group I led published our grammars in paper form via PDF, we wrote and archived them as XML documents, which (along with JSON) are probably as reproducible a structured format as you can get. I'm hoping that 2000 years from now, someone will find these documents both readable and valuable.
There is of course no replacement for some binary format when it comes to audio.
(By "binary" format I mean file formats that are not sequential and readily interpretable, whereas text files are interpretable once you know the encoding.)
You rightly mention Unicode; before that there was a jungle of formats. I have some files in UTF-16, some in SJIS, a ton in EUC, others were already UTF-8, and many don't have a BOM. I could try each encoding and see what works for each of the files (see the sketch after this comment), except on mobile... it's just a PITA to deal with that on mobile.
But in comparison there's a set of files I never had issues opening, then or now: PDFs and JPEGs. All the files that my scanner produced are still readable absolutely everywhere. Even with slight bitrot they're readable, and with current OCR processes I could probably put it all back into text if ever needed.
If I had to archive more stuff now and can afford the space, I'd go for an image format without hesitation.
Half the mail I received from that period was in iso-2022 (a JIS variant), most of the rest was latin-1. I have an auto-generated mail from google plus(!) from 2015 in iso-2022-jp, I actually wonder when Google decided it was safe to fully move to utf-8.
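The "try each encoding and see what works" chore mentioned above can at least be scripted. A rough sketch (candidate list invented; a clean decode is necessary but not sufficient, and latin-1 never fails, so it must come last):

```python
CANDIDATES = ["utf-8", "shift_jis", "euc_jp", "iso2022_jp", "utf-16", "latin-1"]

def sniff(data: bytes) -> str | None:
    # Return the first candidate that decodes without error; eyeball the
    # result afterwards, since a clean decode doesn't prove it's right.
    for enc in CANDIDATES:
        try:
            data.decode(enc)
            return enc
        except (UnicodeDecodeError, UnicodeError):
            continue
    return None

print(sniff(open("old_mail.txt", "rb").read()))  # hypothetical file
```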
I've worked with lots of minority languages in academic situations, but I've never run into anything that couldn't be encoded in Unicode. There's a procedure for adding characters (or blocks of characters) for characters or character sets that aren't already included. There are fewer and fewer of those. The main requirement is documentation.
On adding new characters to Unicode: as with any committee, there will be rejections and cases where going through the whole process is cumbersome or not worth it.
It's more commonly discussed in CJK circles; it reminded me of this Wikipedia entry (unsurprisingly with no English equivalent):
https://ja.wikipedia.org/wiki/Wikipedia:%E8%A1%A8%E7%A4%BA%E...
> minority languages
More archaic than minority, but one language I had in mind was one using a representation of color-coded strings and knots. There are Latin-alphabet mappings, so as long as we trust the translation, record keeping per se works in Unicode, but if one wanted to keep the exact original writing, it would obviously not work out in plain text. I imagine it's not an isolated instance, but I'm also way out of my depth on this one.
https://en.wikipedia.org/wiki/Quipu
Similarly, cave paintings express the painting someone intended to make better than a textual description of it.
Anything below 3 is considered "partially illiterate".
I've been thinking about this a lot recently, as someone who cares about technical communication and making technical topics accessible to more people.
Maybe wannabe educators like myself should spend more time making content for TikTok or YouTube!
Rather than try to widely distribute and disseminate knowledge, it would be far wiser to capitalize on what will soon be a massive information asymmetry and widening intellectual inequality between the can-reads and the can't-reads.
A "dumb" example would be IKEA manuals that describe an assembly algorithm, I could imagine a lot of other situations where you want to convey very specific and technical information in a form that doesn't rely on a specific language (especially if languages aren't shared).
Color coding, shape standards, etc. also go in that direction. The efficiency gain is just so big.
Minor nit: complex language (i.e. Zipf’s law) is the oldest and most stable communication technology.
Before text, we had oral story telling. It allowed us to communicate one generation’s knowledge to the next, and so on.
Arguably this is present elsewhere in the animal kingdom (orcas, elephants, etc.), but human language proves to be the most complex.
Side note: one of my favorite examples is from the Gunditjmara (a group of Aboriginal Australians) who recall a volcanic eruption from 30k+ years ago [0].
Written language (i.e. text) is unique, in that it allows information to pass across multiple generations, without a man-in-the-middle telephone-like game of storytelling.
But both are similar: text requires you to read, in your own voice, the thoughts of another, while storytelling requires you to hear a story and then communicate it to others.
In either case, the person is required to retell the knowledge, either as an internal monologue or as an external broadcast.
Always bet on language.
[0]https://en.wikipedia.org/wiki/Budj_Bim
The human sensory system has an evolved processing ability for visual and audio content. A story can give different sensory data and feelings to different receivers. It is a low-fidelity transmission.
Try telling someone how an old folk song sounded or how some exotic fruit tasted, or how some wild flower smelled, or how some surreal game scene looked, using only text.
Many of us have been saying what the article says for a long time.
It is amazing what we can do with a few strings of symbols, thanks to the fact that we all learn to decode them almost for free.
The oldest and most important technology indeed.
That's completely false: images were used for storytelling thousands of years before text. Compare, for instance, the Lascaux paintings (more than 17,000 years old), the Göbekli Tepe sculptures and stone drawings (more than 12,000 years old), or the more than 15,000 paintings of the City of Sefar, Algeria (which some estimate to date back as far as 20,000 years) with the earliest text known in human history, the Kish tablet from Mesopotamia, around 5,500 years old.
But I can't help feel that we try to jam everything into that format because that's what's already ubiquitous. Reminds me of how every hobby OS is a copy of some Unix/Posix system.
If we had a more general structured format would we say the opposite?
The image is of a monochrome logo with anti-aliased edges. Due to being a simple filled geometric shape, it could compress well with RLE, ZIP compression, or even predictors. It could even be represented as vector drawing commands (LineTo, CurveTo, etc...).
In a 1-bit-per-pixel format, a 20x20 image ends up as 400 bits (50 bytes); a packing sketch follows the links below.
It might be a good bet to bet on text, but it feels inefficient a lot of the time, especially in cases like this where all sorts of files are stored in JSON documents.
1: https://gist.github.com/simonw/007c628ceb84d0da0795b57af7b74...
2: https://simonwillison.net/2025/Dec/26/slop-acts-of-kindness/
PS: 2014
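To put the 1-bit-per-pixel arithmetic in code (the pixel list is a hypothetical stand-in for the logo's pixels):

```python
# Pack a 20x20 monochrome image at 1 bit per pixel: 400 bits = 50 bytes.
WIDTH = HEIGHT = 20
pixels = [False] * (WIDTH * HEIGHT)   # row-major 1-bit pixels

packed = bytearray((len(pixels) + 7) // 8)
for i, on in enumerate(pixels):
    if on:
        packed[i // 8] |= 1 << (7 - i % 8)   # MSB-first within each byte

print(len(packed), "bytes before any RLE or zip on top")   # -> 50
```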
Reminds me of HN threads where comments on articles about the negative aspects of web advertising point to the publisher's own use of web advertising (e.g. a newspaper website's ad auctions and ad trackers) as a point of significance.
Would arguments against text be more convincing if made using something other than text? I.e., would it make any difference?
white on dark grey with phosphor green around? not really.
I'm reading “Mathematica: A Secret World of Intuition and Curiosity” as well, and a part stuck out in a section called The Language Trap. The example the author gives is a recipe for making banana bread: if you're familiar with bananas, it's obvious that you need to peel them before mashing. But if you haven't seen a banana, you'd have no clue what to do. Should a recipe say to peel the banana, or can that be left unsaid? Questions like these are clearly coming up more with AI and context, but it's the same for humans. He ends that section saying most people prefer a video for cooking rather than a recipe.
Text will win, unless there is a lower effort option. The lower effort option does not need to be better, just easier.
I TOTALLY disagree on terminal being the best way
Even the text tablet shown is using the 2D surface to its full ability - we need to strive to bring that in as well.