AI Is the Natural Next Step in Making Computers More Accessible and Useful
Key topics
The article argues that AI is the next step in making computers more accessible and useful, sparking a debate among commenters about the role of AI in human-computer interaction and its potential impact on usability and accessibility.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 26m after posting
- Peak period: 29 comments in 0-3h
- Avg / period: 5.8
Based on 52 loaded comments
Key moments
- Story posted: Aug 31, 2025 at 9:32 AM EDT (4 months ago)
- First comment: Aug 31, 2025 at 9:58 AM EDT (26m after posting)
- Peak activity: 29 comments in 0-3h (hottest window of the conversation)
- Latest activity: Sep 2, 2025 at 8:33 AM EDT (4 months ago)
Computers were way more accessible in the DOS era, when travel agents could effortlessly handle console programs. In that era more people had some idea of the basics of computing, and there was more diverse and open hardware.
Nowadays people know how to click to upload a video to YouTube, which means they are just sharecroppers. If the upload sequence is replaced by another middleman like "Open""AI" who will store and monetize your data, you are a sharecropper of "Open""AI" and YouTube.
I get that it's fashionable to do blue-sky AI evangelism in a breathless tone, but I also expect at least some depth.
LLMs have massive problems with externalities, but they have concrete and undeniable usefulness. So at the very least from that angle it's decidedly not the same as with crypto.
The hypesters will hype; that will always be the case.
Basically a total waste of time.
My preferred framing is that the computer is a lot more dangerous than we thought.
Allowing AI to make life-or-death decisions is just the latest example of computers' dangerous and uncaring nature.
https://gizmodo.com/trump-medicare-advantage-plan-artificial...
Current AI is like a 5 year old with a good memory.
I'm not too worried about losing control to something that has trouble counting the "r's" in "strawberry".
I'm much more worried about people proposing to allow AI to make healthcare decisions.
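As an aside on the "strawberry" test: character counting is trivial for ordinary code, and the model's failure is largely an artifact of tokenization, since it sees word fragments rather than letters. A minimal sketch (the token split shown is only a rough illustration):

```python
# Character-level counting is trivial for a plain program.
word = "strawberry"
print(word.count("r"))  # -> 3

# An LLM, by contrast, operates on tokens rather than characters,
# e.g. roughly ["str", "awberry"], so individual letters are never
# directly observable to it; the count has to be inferred.
```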
The problem is the way *people* choose to apply it.
Is the person who first made a sharpened stick responsible if some people choose to use it as a weapon?
This sort of argument is as old as humanity and is anti-tool, anti-progress, anti-technology and ultimately anti-intelligence. Not to mention entirely futile at this point.
That won't work with language alone. Natural languages are ambiguous.
Just take double negatives. Some use them to create a positive, some to emphasize the negative.
Skynet, don’t kill no people!
I've always used the computers we've evolved to today and have never had the desire to seek a friendship where the other "being" was the computer. It's always been a tool, and writing full sentences/paragraphs to get back correct information doesn't feel like the next evolution of computers when before I was dropping in keywords and filtering the results myself.
The examples in the article are designed to support the points made but are not remotely accurate.
> Consider the difference:
> GUI era: “Open Photoshop → Create new file → Set dimensions to 1200x628 → Select rectangle tool → Draw rectangle from coordinates (0,0) to (1200,628) → Fill with color #3B5998…”
> AI era: “Create a Facebook cover image with our company logo and a modern blue background.”
The LLM prompt in the article looks simpler, but in reality there is a lot of hidden prompting describing the output, which is probably the same information as in the GUI-era example. Which blue will my LLM pick, and how will it know without me telling it? How will it know how big to scale the logo, or to tilt it 3 degrees left? How does it even know what the "company logo" is?
LLMs might also collect the wrong data. Does my LLM use Facebook header image size specs from 2015 or 2022? Most “blogs” online might be how-to blogspam with outdated answers.
IMO LLMs are an attempt to filter blogspam from search and to make knowledge gathering and the scraping of walled gardens easier, rather than how everyone will be using the computer.
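To make the commenter's point concrete, here is what "fully specified" actually looks like: a minimal Pillow sketch in which every value the one-line prompt leaves unstated (the exact blue, the canvas size, which file is the logo, the scale, the tilt, the position) has to be chosen explicitly. All the concrete values below are hypothetical stand-ins, not taken from the article:

```python
from PIL import Image

# Everything the one-line prompt leaves unstated must be decided somewhere.
CANVAS_SIZE = (1200, 628)       # Facebook cover spec the model must know (which year's?)
BACKGROUND = "#3B5998"          # "modern blue" is ambiguous; the GUI user picks a hex value
LOGO_PATH = "company_logo.png"  # the model has no idea which file is "our logo"
LOGO_SCALE = 0.25               # how big? the prompt never says
LOGO_TILT_DEG = 3               # positive angle rotates counter-clockwise, i.e. "left"
LOGO_POSITION = (50, 50)        # placement: yet another unstated choice

canvas = Image.new("RGB", CANVAS_SIZE, BACKGROUND)
logo = Image.open(LOGO_PATH).convert("RGBA")
logo = logo.resize((int(logo.width * LOGO_SCALE), int(logo.height * LOGO_SCALE)))
logo = logo.rotate(LOGO_TILT_DEG, expand=True)
canvas.paste(logo, LOGO_POSITION, logo)  # use the logo's alpha channel as the mask
canvas.save("facebook_cover.png")
```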
You can probably make the opposite case, as Dijkstra did in his piece "On the foolishness of natural language programming".
"A short look at the history of mathematics shows how justified this challenge is. Greek mathematics got stuck because it remained a verbal, pictorial activity, Moslem "algebra", after a timid attempt at symbolism, died when it returned to the rhetoric style, and the modern civilized world could only emerge —for better or for worse— when Western Europe could free itself from the fetters of medieval scholasticism —a vain attempt at verbal precision!— thanks to the carefully, or at least consciously designed formal symbolisms that we owe to people like Vieta, Descartes, Leibniz, and (later) Boole."
Computers, as machines, derive their power exactly from what they prohibit. They provide interfaces narrow enough, like modern mathematics, to make the expression of a whole lot of nonsense impossible, which is what enables the automation of tasks in a correct manner. Going back to some sort of alchemy where you have to beg the computer with incantations to do things that may or may not be correct is actually going backwards in history. The fact that people regard expressing themselves in a programming language as a burden, when the limitations are exactly what give it its power, says more about modern programming as a practice than anything else. As he jokes in the piece, someone being glad they don't "need to" write SQL any more is like someone saying they avoided mathematical notation for the sake of clarity.
https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
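Dijkstra's contrast is easy to demonstrate: a natural-language request leaves its thresholds undefined, while the formal query cannot. A minimal sketch using Python's built-in sqlite3 module (the schema and numbers are hypothetical, purely for illustration):

```python
import sqlite3

# Hypothetical schema, purely for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL, created TEXT)")
con.execute("INSERT INTO orders VALUES (1, 'acme', 2400.0, '2025-03-01')")

# "Show me the recent big orders" admits many readings: how recent? how big?
# The formal query prohibits that ambiguity: every threshold is explicit.
for row in con.execute(
    "SELECT id, customer, total FROM orders "
    "WHERE total > 1000 AND created >= '2025-01-01' "
    "ORDER BY total DESC"
):
    print(row)
```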
In fact, I now tend to see it as a strong shibboleth from people who don't actually value the thing being "democratized" (computing, art, music) and who think in terms of "barrier to entry" instead of in terms of understanding and appreciation.
In the end, this bizarre drive just ends up cheapening our enjoyment and interactions. We get shallow music, soulless art, and miserable computer programs, because there's no active intelligence involved in their creation that truly understands what's being created.
For every _actual_ revolution in human-machine interaction, there are roughly a million things that pundits and/or vendors say will definitely be the next revolution.
... No, those are not the primary way that virtually anyone interacts with computers.
OpenAI started with chat-based AI but has since realized that "text only" doesn't scale for serious business needs. We see OpenAI pivoting toward richer interfaces like "Canvas", which offers a richer editing surface (a GUI) with AI as an embedded collaborator.
There’s even news floating around that OpenAI is building a Google Docs / Microsoft Word competitor.
Now, take Microsoft. Microsoft owns some of the most serious business products ever, like the Office suite. Microsoft, however, is working back toward a chatbot experience, to the extent that they've even renamed their entire online office suite "Microsoft 365 Copilot", which makes little sense as a name for an office suite :)
But the bigger question is: which approach is right? Maybe there isn't a single right approach. Maybe that's why Google is being Google and travelling both ways.
They're building Gemini and Gemini Canvas, while already owning an office suite and working toward integrating AI capabilities into their office editors.
We are living in interesting times!
Or to even design a good, humane interface.
I can't imagine why I would want a system that could happily delete my backups when I asked it to reschedule an appointment. Having to constantly review and confirm everything it is about to do is annoying; having it do the wrong thing anyway, after all that, is annoying at best.
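The usual mitigation is precisely the trade-off described above: gate only the destructive operations behind explicit confirmation and let everything else through. A toy sketch of such a gate, with hypothetical tool names and a deliberately crude classification (not any real agent framework's API):

```python
# Toy permission gate for an AI agent's tool calls (hypothetical design sketch).
DESTRUCTIVE = {"delete_backup", "wipe_disk", "cancel_subscription"}

def run_tool(name: str, args: dict) -> str:
    """Execute a tool call, requiring human approval for destructive ones."""
    if name in DESTRUCTIVE:
        answer = input(f"Agent wants to run {name}({args}). Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "blocked by user"
    return f"ran {name}"  # stand-in for the real dispatch

# Rescheduling an appointment should never reach the confirmation branch,
# but a misclassified call is exactly the failure mode the comment worries about.
print(run_tool("reschedule_appointment", {"when": "tomorrow 3pm"}))
```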
Whatever you're smoking must be really strong. Can I have some? /s
Plenty of "creatives" I think are still going to be hands-on-mouse(stylus) in the coming decades.
I'm not sure that collaborative computing follows. Like when Dropbox famously debuted and perhaps some were touting "cloud computing", Steve Jobs called it a "feature", not a platform. (Or words to that effect.) He was deflating the concept a bit too much in my opinion, but perhaps the truth was somewhere in between.
It's hard to make giant datacenters that require their own power plants fit into that narrative. But I don't know why I should prefer one narrative over another.
I can't help hearing Karl Popper raging against historicism when I see people try to create a narrative and project it into the future as we move toward some ideal state.
What I find increasingly interesting is that 'democratizing' is being used in a way that is sure to turn the word into a pejorative, and I can't help but wonder whether that is intentional.
edit: added missing sentence fragment