Baby Shoggoth Is Listening
Key topics
The article 'Baby Shoggoth Is Listening' explores the cultural significance of H.P. Lovecraft's works and their modern interpretations, sparking a discussion on the author's legacy and the implications of his themes in contemporary culture.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 8d after posting
- Peak period: 37 comments (Day 8)
- Avg / period: 12.8 comments
- Based on 51 loaded comments
Key moments
1. Story posted: Nov 3, 2025 at 12:06 PM EST (2 months ago)
2. First comment: Nov 11, 2025 at 5:46 AM EST (8d after posting)
3. Peak activity: 37 comments in Day 8, the hottest window of the conversation
4. Latest activity: Nov 13, 2025 at 7:27 PM EST (about 2 months ago)
Today's role-play and doomer fantasy will result in future models that are impossible to introspect and that don't let on about nefarious intent.
The alarmists cried wolf, so we taught the next generation of wolves to look like sheep.
Sure, higher-ROI applications will be chased first, but in time even the least of tasks will be subsumed by AI.
I am saying this as a person who had a front seat to something similar as a kid. It was relatively peaceful, and it still managed to upend the lives of millions (because, at the end of the day, people don't really change).
I guess by "changing the system" achierius meant going Marxist or some other silliness of that sort, but generally speaking, the system can be changed by working within it too.
Observe how we are now moving toward a system that isn't based on the Constitution but on some weird mixture of libertarian dogma, excuses for oppression, and a cult of personality. It could easily slide into full-fledged dictatorship entirely from within, and why not? The top players don't hide their love for it.
> I am saying this as a person who had a front seat to something similar as a kid. It was relatively peaceful, and it still managed to upend the lives of millions
Whether you noticed or not, the system was changed from above, by people "within the system"... So you're OK with letting that happen again, but in a worse direction?
> in real life, power re-alignment of that magnitude tends to be.. chaotic.
Interestingly enough, you may want to reexamine your childhood experience to check whether there really was some sort of "power re-alignment". Looking at the facts, I don't see one, but I do see how chaos can be useful to those within the system.
I can understand every word in this sentence, and yet I am having a difficult time parsing it. Is it that I did not provide vivid enough details? Is it that the experience is dismissed as that of a child? Or is it merely a stepping stone to the next sentence, with no other meaning in your response?
It is an internet forum. I did not give any major details, though my posting history is readily available for digging.
I guess what I am really saying is:
I am confused by your answer. Care to elaborate?
1. Traditional, garden-variety human-to-human, computer-to-computer, and computer-to-human crime; the stuff that happens today.
2. Human-to-computer (AI) crime, misdeeds, and bullying. Stuff like:
- Sabotage and poison your AI agent colleague to make it look bad, inefficient, and ineffectual in small but high-volume ways. Delegate all risky, bad-versus-worse decision making to the AI and let the algo take the reputational damage.
- Go beat up on automated bots, cars, drones, etc. How should it feel to kick a robot dog?
For a humorous read on automation, bots, and AI in a dystopian world, take a look at QualityLand [0]. I really enjoyed it. As a teaser, imagine drones suffering from a fear of heights, hence being deemed faulty and sentenced to destruction. Do faulty bots or AIs have value in this world even if they don't deliver on their original intended use?
[0] https://www.goodreads.com/book/show/36216607-qualityland
I've been hearing about how $latest_technology is going to eliminate jobs for 40 years. It hasn't happened yet.
Which jobs, exactly, is AI going to eliminate? It's not useful for anything. It doesn't do anything useful. It's just mashing random patterns together to make something that approximates human-readable language.
Eliminating jobs has absolutely happened. How many jobs exist today for newspaper printing? Photograph development? Film development? Telephone switchboard operation? Technology absolutely eats jobs. More jobs have been created over time, but the current economic situation makes large-scale job adjustment work less well.
In a hypothetical world where AI is actually decent enough to be any good at writing software, the demand for software being infinite won't save even one programmer's job, because zero programmers will be needed to create any of it. Everyone who needs software will just ask the AI to do it for them. Zero programming jobs needed, ever again.
I'll concede one thing to the authors of the study: Claude Code is not that great. Everyone I know has moved on since before July. I personally am hacking on my own fork of Qwen CLI (which is itself a Gemini fork), and it does most of what I want with the models of my choice, which I swap out depending on what I'm doing. Sometimes they're local on my 4090 and sometimes I use a frontier or larger open-weights model hosted somewhere else.

If you're expecting a code assistant to drop in your lap and just immediately experience all of its benefits, you'll be disappointed. This is not something anyone can offer without just prescribing a stack or workflow. You need to make it your own.
The study is about dropping just 16 people into tooling they're unfamiliar with, have no mechanical sympathy for, and aren't likely to shape and mold to their own needs.
You want conclusive evidence? Go make friends with people who hack their own tooling. Basically everyone I hang out with has extended BMAD, written their own agents.md for specific tasks, made their own slash commands and "skills" (convenient name and PR hijacking of a common practice, but whatever; thanks for MCP, I guess). Literally, what kind of dev are you if you're not hacking your own tools???
You've got four ingredients to keep in mind when thinking about this stuff: the model, the context, the prompt, and the tooling. If you're not intervening to set up the best combination of each for every workflow, you're just letting someone else determine how that workflow goes.
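To make that concrete, here's a minimal sketch of one way to wire those ingredients together. This is my illustration, not the commenter's actual setup: it assumes an OpenAI-compatible endpoint (which local servers like llama.cpp and vLLM expose), and every model name, URL, and prompt below is a placeholder.

```python
# A minimal sketch of the "four ingredients" idea: make the model, the
# standing context/prompt, and the tooling an explicit per-workflow choice.
# Assumes an OpenAI-compatible /chat/completions endpoint; model names,
# URLs, and prompts are placeholders, not anything from the thread.
from dataclasses import dataclass, field

import requests


@dataclass
class Workflow:
    model: str           # model identifier the server expects
    base_url: str        # OpenAI-compatible endpoint, local or hosted
    system_prompt: str   # the standing context for this kind of task
    tools: list = field(default_factory=list)  # optional tool schemas


WORKFLOWS = {
    # hypothetical: a local open-weights model for quick, mechanical edits
    "refactor": Workflow(
        model="qwen2.5-coder-32b",
        base_url="http://localhost:8000/v1",
        system_prompt="Refactor conservatively; never change behavior.",
    ),
    # hypothetical: a hosted frontier model for harder design questions
    "design": Workflow(
        model="some-frontier-model",
        base_url="https://api.example.com/v1",
        system_prompt="You are a senior engineer weighing architecture tradeoffs.",
    ),
}


def run(workflow_name: str, user_prompt: str, api_key: str = "unused") -> str:
    """Send one chat turn to whichever model/prompt combo the workflow names."""
    wf = WORKFLOWS[workflow_name]
    payload = {
        "model": wf.model,
        "messages": [
            {"role": "system", "content": wf.system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }
    if wf.tools:  # only send tool schemas when the workflow defines any
        payload["tools"] = wf.tools
    resp = requests.post(
        f"{wf.base_url}/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json=payload,
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(run("refactor", "Extract the retry logic in fetch() into a helper."))
```

The specifics don't matter; the design point is that model, prompt, and tool choices become explicit, swappable values per workflow instead of whatever a single vendor's defaults happen to be.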
"Universal function approximators that can speak English got invented, and nobody wants to talk to them" is not the sci-fi future I was hoping for when I was longing for statistical language modeling to lead to code generation back in 2014, as a young NLP practitioner learning Python for the first time.
If you can't make it work, fine, maybe it's not for you, but I would probably turn violent if you tried to take this stuff from me.
To be cheeky, there are at least three possibilities you are writing off here: we build _less_ software, we hire _more_ engineers, or things just kinda stay the same.
More on all of these later.
> I am not convinced that software has a growing market
Analysis of market dynamics in response to major technological shocks is reading tea leaves. These are chaotic systems with significant nonlinearities.
The rise of the ATM is a classic example. An obvious but naive predicted result would be fewer employed bank tellers. After all, they're automated _teller_ machines.
However, the opposite happened. ATMs drastically reduced the cost of running a bank branch (which previously required manually counting lots of cash). More branches, fewer tellers per branch... but the net result was _more_ tellers employed thirty years later. [1]
They are, of course, now doing very different things.
Let's now spitball some of those other scenarios above:
- Less "software" gets written. LLMs fundamentally change how people interact with computers. More people just create bespoke programs to do what they want instead of turning to traditional software vendors.
- More engineers get hired. The business of writing software by hand is mostly automated. Engineers shift focus to quality or other newly prioritized business goals, possibly enabled by LLM automation instead of, e.g., traditional end-to-end tests.
- Things stay mostly the same, employment- and software-wise. If software engineers are still ultimately needed to check the output of these things, the net effect could just be that they spend a bit less time typing raw code. They might work a bit less; attempts to turn everyone into an "LLM tech lead" managing multiple concurrent LLMs could go poorly. Engineers might mostly take the efficiency gains for themselves as recovered free-ish time (HN or Reddit, for example).
Or, let's be real, the technology could just mostly be a bust. The odds of that are not zero.
And finally, let's consider the scenario you dismiss ("more software"). It's entirely possible that making something cheaper drastically increases the demand for it. The bar for "quality software" could rise dramatically due to competition between increasingly LLM-enhanced firms.
I won't represent any of these scenarios as _likely_, but they all seem plausible to me. There are too many moving parts in the software economy to make any serious prediction on how this will all pan out.
1. https://www.economist.com/democracy-in-america/2011/06/15/ar... (while researching this, I noticed a recent twist to this classic story. Teller employment actually _has_ been declining in the 2020s, as has the total number of ATMs. I can't find any research into this, but a likely culprit is yet another technological shock: the rise of mobile banking and payment apps)
No, they can't. AI cannot produce code.
> The same is true for customer support,
AI cannot provide customer support. It cannot answer questions.
> photography, video production (ads)
AI cannot take photographs or make videos. Or at least, not ones that don't look like utter trash.
> paralegal work, pharma, and basically any job that involves filing paperwork.
Right, so you'd be happy with a random number generator with a list of words picking what medication you're supposed to get, or preparing your court case?
AI is useless, and always will be. It is not "intelligence", it's crude pattern matching - a big Eliza bot.
Not to you. But it has happened to tens or hundreds of thousands already. Did you miss the whole 2016 election news cycle?
That's been the promise of every technology. Computers were supposed to make us so productive that we could all work less and spend time with our families or whatever. Instead, productivity went through the roof, freeing most people to do even more work for our masters, who started demanding more from us even outside the office while real wages stagnated. AI isn't going to make our lives any more carefree than any other technology did. It'll just make a small number of extremely wealthy people even richer.
Thankfully, what passes for AI these days is pretty shitty at doing even basic tasks and so it'll be a while before we're all replaced. In the meantime, expect disruptions as companies experiment with letting staff go and replacing them with AI, get disappointed in the results, and hire people back at lower wages. Also expect a lot of companies you depend on to screw you over because their stupid AI did something it shouldn't have and suddenly it's your problem to deal with.
If you think this may be the future, surely the only rational response is to do everything in your power to prevent it.
Or, more likely, your entire enterprise collapses against international rivals. Or your entire country turns into North Sentinel islanders, just surviving at the whim of hypertechnical, industrialized neighbors.
I'm all for international cooperation on how to preserve a place for humans, I truly am, but the "let's just not do it" is frustratingly naive and not an actual plan.
Amusingly, I was doing some mild chatting with GPT about the origins of various religious practices and, depending on the next few years, the quote from the article may not seem that far-fetched. There was already a level of dumbing-down present prior to LLMs entering the scene, but that process has been heavily accelerated.
Don't get me wrong, I actually like LLMs. I am, however, mildly hesitant about the impact on humanity as a whole. This is likely the first time I consider idiocracy a plausible outcome.
Then there's the more interesting, speculative take: sufficiently advanced systems acquire properties associated with deities, such as being ever-present, acting in mysterious ways, and seeming omnipotent, or at least omniscient in limited domains. Related, per Arthur C. Clarke, "any sufficiently advanced technology is indistinguishable from magic". I.e., we turn things into religion when it's pragmatic.
Then there's the even more speculative-fiction take: maybe it's not the first time humanity has been here; maybe the propensity for religious practice and thinking is a consequence of humanity's previous, otherwise forgotten, dealings with advanced technology :).