You Are the Scariest Monster in the Woods
Posted 3 months ago · Active 2 months ago
Source: jamie.ideasasylum.com · Tech story · High profile · Heated, mixed debate (85/100)
Key topics: AI Safety, Artificial General Intelligence, Human Impact on Society
The article argues that humans are the scariest monster in the woods and that AI is just a tool that amplifies human capabilities, sparking a debate on the risks and implications of AI.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment after 18m · Peak period: 152 comments on Day 1 · Avg per period: 26.7
Comment distribution based on 160 loaded comments
Key moments
- Story posted: Oct 15, 2025 at 10:04 AM EDT (3 months ago)
- First comment: Oct 15, 2025 at 10:22 AM EDT (18m after posting)
- Peak activity: 152 comments on Day 1, the hottest window of the conversation
- Latest activity: Oct 29, 2025 at 12:44 PM EDT (2 months ago)
One of the most poignant moments in Disco Elysium comes near the very end, when you encounter a very elusive cryptid/mythic beast.
The moment is treated with a lot of care and consideration, and the conversation itself is, I think, transcendent and some of the best writing in games (or any media) ever.
The line that sticks with me most is when the cryptid says:
"The moral of our encounter is: I am a relatively median lifeform -- while it is you who are total, extreme madness. A volatile simian nervous system, ominously new to the planet. The pale, too, came with you. No one remembers it before you. The cnidarians do not, the radially symmetricals do not. There is an almost unanimous agreement between the birds and the plants that you are going to destroy us all."
It's easy to see this reflected in nature in the real world. All animals and life seem to be aware and accommodating of each other, but humans are cut out from that communication. Everything runs from us, we are not part of the conversation. We are the exclusion, the anomaly.
I think to realize this deeply inside of yourself is a big moment of growth, to see that we exist in a world that was around long before us and will be around long after us.
We proliferate incredibly quickly, we have limited care for our environments, but most importantly our primary means of sustenance is to consume other forms of life. To the point that we consider it an art form: we spend vast amounts of energy perfecting, discussing, or advertising the means of cooking and eating the flesh of other life forms. To us it's normal, but surely to an alien who gains sustenance by some other means we're absolutely terrifying. Akin to a devouring swarm.
https://www.youtube.com/watch?v=_gz9Vj8TMCc
This just seems like noble savaging birds and rabbits and deer. None of these creatures have any communication with each other, and while they may be more aware of each other's presence than a 'go hiking every once in a while' person, someone who actually spends a good amount of time in the woods, such as a hunter or birdwatcher, probably has a pretty good sense of them. The Disco Elysium quote just reads like fairly common environmentalist misanthropy, which I suppose isn't surprising considering the creators.
The local rabbits and squirrels tolerate each other but are pretty scared of me. Of course they are, I'm two hundred times bigger than they are, and much more dangerous. The local foxes are the closest thing we have to an apex predator around here, and they're rightfully terrified of this massive creature that outweighs their entire family combined.
Imagine wandering through the woods, enjoying the birds tweeting and generally being at one with nature, and then you come across a 20-ton 35ft-tall monster. You'd run away screaming.
If we ignore the headlines peddled by those who stand to benefit the most from inflaming and inciting, we live in a miraculous modern age largely devoid of much of the suffering previous generations were forced to endure. Make no mistake there are problems, but they are growing exponentially fewer by the day.
An alternate take: humans will do what they’ve always tried to do—build, empower, engineer, cure, optimize, create, or just collaborate with other humans for the benefit of their immediate community—but now with new abilities that we couldn’t have dreamed of.
> "If it bleeds, it leads" has been a thing in news organizations since at least the 70s.
The term yellow journalism is far older.
I actually think it is very intellectually lazy to be this cynical.
This is why it is important to have societies where various forms of power are managed carefully. Limited constitutional government with guaranteed freedoms and checks and balances for example. Regulations placed on mega corporations is another example. Restrictions to prevent the powerful in government or business (or both!) from messing around with the rest of us…
Whereas much of the technology we have today has a massive positive benefit. Simply having access to information today is amazing: I have learned how to fix my own vehicles and bicycles and do house repairs just from YouTube.
As I said being cynical is being intellectually lazy because it allows you to focus on the negatives and dismiss the positives.
Killing animals for fun is an entire sport enjoyed by millions. Humans keep pets that kill billions of birds every year. The limited areas we've set aside to mostly let other nature be nature are constantly under threat and being shrunk down. The list of ways we completely subjugate other intelligent life on this planet is endless. We have driven many hundreds of species to extinction and kill countless billions every year.
I certainly enjoy the gains our species have made, just like everyone else on HN. I'd rather be in our position than any other species on our planet. But given our history I'm also pretty terrified of what happens if and when we run into a smarter and more powerful alien species or AI. If history is any guide, it won't go well for us.
This understanding can guide practical decisions. We shouldn't be barreling towards a potential super intelligence with no safeguards given how catastrophic that outcome could be, just like we shouldn't be shooting messages into space trying to wave our arms at whatever might be out there, any more than a small creature in the forest would want to attract the attention of a strange human.
As for hunting. I don't see anything wrong with hunting. I don't see anything wrong with eating meat.
As someone that has lived the vast majority of their life in the countryside, I also have little time for animal welfare arguments of the sort you are making.
> But given our history I'm also pretty terrified of what happens if and when we run into a smarter and more powerful alien species or AI. If history is any guide, it won't go well for us.
This is all sci-fi nonsense. If we had any sort of alien contact there wouldn't be many of them, or it would most likely be a probe, like the probes we send out to other planets. As for the superintelligence, the AI has an off switch.
Then tell them all the right things!
AI isn't the new monster. It's a new mitochondrion being domesticated to supercharge the existing monsters.
I thought it was a very novel idea at first, until I realized that this describes all manner of human groups, notably corporations and lobbying groups: they will turn the world into a stove, subvert democracy, (try to) disrupt the economic and social fiber of society, anything to e.g. maximize shareholder value.
It is to be expected, really. Humans themselves give little consideration to human welfare when fulfilling their goals. It is something ingrained by nature for survival and in no way limited to humans. Every drop of water you drink, every bite you eat, is one that cannot go to the thirsty and the hungry. With few exceptions, only our children would even give us pause to forgo such things for the sake of others.
Also, a bit of a pet peeve of mine: society isn't a delicate bolt of laced silk. It isn't a fabric, much less one that is damaged by any little change you don't like. It isn't even stable, which makes the charge that anything is 'ruining' society especially bizarre when nobody can point to where it was headed before the blamed change. So hold off on poisoning Socrates.
Even if we hold the current state as worthy of preservation, we would ultimately fail to preserve it, for reasons related to the central paradox of tradition: whatever your forebears first did, they did not do out of obedience to tradition, so by trying to preserve it set in stone, you have already failed.
That said, society can be 'ruined' by everybody in that society dying, whether by outside influence or their own stupidity. So while I discount the moral ruination, the "oh god, oh god, we're all dying" kind is one I'd like to avoid.
And equally rebutted by Eddie Izzard's "Well, I think the gun helps".
Compare also with capitalism; unchecked capitalism on paper causes healthy competition, but in practice it means concentration of power (monopolies) at the expense of individuals (e.g. our accumulated expressions on the internet being used for training materials).
This is obviously the case. It results in a greater distribution of power.
>That's not working for (the lack of) gun control in the US at the moment though.
In the US, one political party is pro gun-control and the other is against. The party with the guns gets to break into the Capitol, and the party without the guns gets to watch. I expect the local problem of AI safety, like gun safety, will also be self-solving in this manner.
Eventually, gun control will not work anywhere, regardless of regulation. The last time I checked, you don't need a drone license. And what are the new weapons of war? Not guns. The technology will increase in accessibility until the regulation is impossible to enforce.
The idea that you can control the use of technology by limiting it to some ordained group of people is very brittle. It is better to rely on a balance of powers. The only way to secure civilization in the long run is to make the defensive technology stronger than the offensive technology.
> This is obviously the case. It results in a greater distribution of power.
That's the theory. In practice, it doesn't work.
Most people don't spend a lot of time looking for ways to acquire and/or retain wealth and power. But absent regulation, we'll gradually lose out to those driven folks who do. Perhaps they do so because they want to serve humanity and they imagine that their gifts make them the logical choice to run things. Or perhaps they just want to dominate things.
And the rest of us have every right to insist on guardrails, so those driven folks can't take us over the cliff. Certainly those folks can make huge contributions to society. But they can also fuck up spectacularly — because talent in one field isn't necessarily transferable to another. (Recall that Michael Jordan was one of the greatest basketball players of all time. But he wasn't even close to being the GOAT ... as a baseball player.)
Sure, maybe through some combination of genetics, rearing, and/or just plain hard work, you've managed to acquire "a very particular set of skills" (to coin a phrase ...) for making money, or for persuading people to do what you want. That doesn't mean you necessarily know WTF you're talking about when it comes to the myriad details of running the most-complex "organism" ever seen on the planet, namely human society.
And in any case, the rest of us are entitled to refuse to roll the dice on either the wisdom or the benevolence of the driven folks.
Is that not conflating capitalism with free markets? I have way more confidence in the latter than the former.
And with LLMs, it's difficult to prevent the proliferation to bad actors.
It seems like we're racing towards a world of fakery where nothing can be believed, even when wielded by good actors. I really hope LLMs can actually add value at a significant level.
Spend a couple minutes on social media and it is clear we are already there. The fakes are getting better, and even real videos are routinely called out as fake.
The best that I can hope for is that we all gain a healthy dose of skepticism and appreciate that everything we see could be fake. I don't love the idea of having to distrust everything I see, but at this point it seems like the least bad option.
But I worry that what we will experience will actually be somewhat worse. A sufficiently large number of people, even knowing about AI fakery, will still uncritically believe what they read and see.
Maybe I am being too cynical this morning. But it is hard to look at the state of our society today and not feel a little bleak.
Human cognition was basically bruteforced by evolution-- why would it be impossible to achieve the exact same result in silicon, especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?
I'm not buying the whole "AI has no agency" line either; this might be true for now, but this is already being circumvented with current LLMs (by giving them web access etc).
As soon as profit can be made by transferring decision power into an AI's hands, some form of agency for them is just a matter of time, and we might simply not be willing to pull the plug until it is much too late.
Not that we can't get there by artificial means, but correctly simulating the environment interactions, the sequence of progression, getting all the details right, might take hundreds to thousands of years of compute, rather than on the order of a few months.
And it might be that you can get functionally close, but hit a dead end, and maybe hit several dead ends along the way, all of which are close but no cigar. Perhaps LLMs are one such dead end.
Some people do expect AGI to be a faster horse; to be the next evolution of human intelligence that's similar to us in most respects but still "better" in some aspects. Others expect AGI to be the leap from horses to cars; the means to an end, a vehicle that takes us to new places faster, and in that case it doesn't need to resemble how we got to human intelligence at all.
Who says we have to do that? Just because something was originally produced by natural process X, that doesn't mean that exhaustively retracing our way through process X is the only way to get there.
Lab grown diamonds are a thing.
If you just mean majority of spp, you'd be correct, simply because most are single celled. Though debate is possible when we talk about forms of chemical signalling.
One interesting parallel was the gradual redefinition of language over the course of the 20th century to exclude animals as their capabilities became more obvious. So, when I say 'language processing capacities', I mean it roughly in the sense of Chomsky-era definitions, after the goal posts had been thoroughly moved away from much more inclusive definitions.
Likewise, we've been steadily moving the bar on what counts as 'intelligence', both for animals and machines. Over the last couple of decades the study of animal intelligence has become more inclusive, IMO, and recognizes intelligence as capabilities within the specific sensorium and survival context of the particular species. Our study of artificial intelligence is still very crude by comparison, and is still in the 'move the goalposts so that humans stay special' stage of development...
The conditions that lead to "kill all humanity" may be a much more common result than those that lead to "create a novel thinking being", to the point where it is statistically improbable for the human race to reach AGI. Especially since a lot of AI research is specifically autonomous-weapons research.
If a genuine AGI-driven human extinction scenario arises, what's to stop the world's nuclear powers from using high-altitude detonations to produce a series of silicon-destroying electromagnetic pulses around the globe? It would be absolutely awful for humanity don't get me wrong, but it'd be a damn sight better than extinction.
Not to mention that the whole idea of "radiation pulses destroying all electronics" is cheap sci-fi, not reality. A decently well prepared AGI can survive a nuclear exchange with more ease than human civilization would.
AI right now is limited to trained neural networks, and while they function sort of like a brain, there is no neurogenesis. The trained neural network cannot grow, cannot expand on its own, and is constrained by the silicon it is running on.
I believe that true AGI will require hardware and models that are able to learn, grow and evolve organically. The next step required for that in my opinion is biocomputing.
Baseball stats aren't a baseball game. Baseball stats so detailed that they describe the position of every subatomic particle to the Planck scale during every instant of the game to arbitrarily complete resolution still aren't a baseball game. They're, like, a whole bunch of graphite smeared on a whole bunch of paper or whatever. A computer reading that recording and rendering it on a screen... still isn't a baseball game, at all, not even a little. Rendering it on a holodeck? Nope, 0% closer to actually being the thing, though it's representing it in ways we might find more useful or appealing.
We might find a way to create a conscious computer! Or at least an intelligent one! But I just don't see it in LLMs. We've made a very fancy baseball-stats presenter. That's not nothing, but it's not intelligence, and certainly not consciousness. It's not doing those things, at all.
These books often get shallowly dismissed in terms that imply he made some elementary error in his reasoning, but that's not the case. The dispute is more about the assumptions on which his argument rests, which go beyond mathematical axioms and include statements about the nature of human perception of mathematical truth. That makes it a philosophical debate more than a mathematical one.
Personally, I strongly agree with the non-mathematical assumptions he makes, and am therefore persuaded by his argument. It leads to a very different way of thinking about many aspects of maths, physics and computing than the one I acquired by default from my schooling. It's a perspective that I've become increasingly convinced by over the 30+ years since I first read his books, and one that I think acquires greater urgency as computing becomes an ever larger part of our lives.
Less flippantly, Penrose has always been extremely clear about which things he's sure of, such as that human intelligence involves processes that algorithms cannot emulate, and which things he puts forward as speculative ideas that might help answer the questions he has raised. His ideas about quantum mechanical processes in the brain are very much on the speculative side, and after a career like his I think he has more than earned the right to explore those speculations.
It sounds like you probably would disagree with his assumptions about human perception of mathematical truth, and it's perfectly valid to do so. Nothing about your comment suggests you've made any attempt to understand them, though.
Yes, of course you do.
It did come from a place of annoyance, after your middlebrow dismissal of Penrose' argument as "stupid".
1. You think that instead of actually perceiving mathematical truth we use heuristics and "just guess that it's true". This, as I've already said, is a valid viewpoint. You disagree with one of Penrose' assumptions. I don't think you're right but there is certainly no hard proof available that you're not. It's something that (for now, at least) it's possible to agree to disagree on, which is why, as I said, this is a philosophical debate more than a mathematical one.
2. You strongly imply that Penrose simply didn't think of this objection. This is categorically false. He discusses it at great length in both books. (I mentioned such shallow dismissals, assuming some obvious oversight on his part, in my original comment.)
3 (In your latest reply). You think that Godel's incompleteness theorem is "where the idea came from". This is obviously true. Penrose' argument is absolutely based on Godel's theorem.
4. You think that somehow I don't agree with point 3. I have no idea where you got that idea from.
That, as far as I can see, is it. There isn't any substantive point made that I haven't already responded to in my previous replies, and I think it's now rather too late to add any and expect any sort of response.
As for communication style, you seem to think that writing in a formal tone, which I find necessary when I want to convey information clearly, is condescending and insulting, whereas dismissing things you disagree with as "stupid" on the flimsiest possible basis (and inferring dishonest motives on the part of the person you're discussing all this with) is, presumably, fine. This is another point on which we will have to agree to disagree.
Starting with "things he's sure of, such as that human intelligence involves processes that algorithms cannot emulate" as a premise makes the whole thing an exercise in Begging the Question when you try to apply it to explain why an AI won't work.
Linking those two is really the contribution of the argument. You can reject both or accept both (as I've said elsewhere I don't think it's conclusively decided, though I know which way my preferences lie), but you can't accept the premise and reject the conclusion.
1. Any formal mathematical system (including computers) has true statements that cannot be proven within that system.
2. Humans can see the truth of some such unprovable statements.
Which is basically Gödel's Incompleteness Theorem. https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
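In symbols, a rough textbook-style sketch of that statement (my paraphrase, not a quote from Penrose's books):

```latex
% Sketch only. F ranges over consistent, effectively axiomatized formal systems
% that contain basic arithmetic; G_F is the Goedel sentence constructed for F.
\text{For every such } F \ \exists\, G_F :\qquad
F \nvdash G_F, \qquad F \nvdash \lnot G_F, \qquad \mathbb{N} \models G_F
% The contested step in the Penrose-style argument is the further claim that a
% human mathematician can "see" that G_F is true for any given F, which F itself
% cannot prove.
```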
Maybe a more ELI5 version:
1. Computers follow set rules
2. Humans can create rules outside the system of rules they follow
Is number 2 an accurate portrayal? It seems rather suspicious. It seems more likely that we just haven't been able to fully express the rules under which humans operate.
Yes, and "can't" as in it is absolutely impossible. Not that we simple haven't been able to due to information or tech constraints.
Which is an interesting implication. That there are (or may be) things that are true which cannot be proved. I guess it kinda defies an instinct I have that at least in theory, everything that is true is provable.
* I will mention though that "some" should be "all" in 2, but that doesn't make it a correct statement of the argument.
>Turing’s version of Gödel’s theorem tells us that, for any set of mechanical theorem-proving rules R, we can construct a mathematical statement G(R) which, if we believe in the validity of R, we must accept as true; yet G(R) cannot be proved using R alone.
I have no doubt the books are good but the original comment asked about steelmanning the claim that AGI is impossible. It would be useful to share the argument that you are referencing so that we can talk about it.
I'm really not trying to evade further discussion. I just don't think I can sum that argument up. It starts with basically "we can perceive the truth not only of any particular Godel statement, but of all Godel statements, in the abstract, so we can't be algorithms because an algorithm can't do that" but it doesn't stop there. The obvious immediate response is to say "what if we don't really perceive its truth but just fool ourselves into thinking we do?" or "what if we do perceive it but we pay for it by also wrongly perceiving many mathematical falsehoods to be true?". Penrose explored these in detail in the original book and then wrote an entire second book devoted solely to discussing every such objection he was aware of. That is the meat of Penrose' argument and it's mostly about how humans perceive mathematical truth, argued from the point of view of a mathematician. I don't even know where to start with summarising it.
For my part, with a vastly smaller mind than his, I think the counterarguments are valid, as are his counter-counterarguments, and the whole thing isn't properly decided and probably won't be for a very long time, if ever. The intellectually neutral position is to accept it as undecided. To "pick a side" as I have done is on some level a leap of faith. That's as true of those taking the view that the human mind is fundamentally algorithmic as it is of me. I don't dispute that their position is internally consistent and could turn out to be correct, but I do find it annoying when they try to say that my view isn't internally consistent and can never be correct. At that point they are denying the leap of faith they are making, and from my point of view their leap of faith is preventing them seeing a beautiful, consistent and human-centric interpretation of our relationship to computers.
I am aware that despite being solidly atheist, this belief (and I acknowledge it as such) of mine puts me in a similar position to those arguing in favour of the supernatural, and I don't really mind the comparison. To be clear, neither Penrose nor I am arguing that anything is beyond nature, rather that nature is beyond computers, but there are analogies and I probably have more sympathy with religious thinkers (while rejecting almost all of their concrete assertions about how the universe works) than most atheists. In short, I do think there is a purely unique and inherently uncopyable aspect to every human mind that is not of the same discrete, finite, perfectly cloneable nature as digital information. You could call it a soul, but I don't think it has anything to do with any supernatural entity, I don't think it's immortal (anything but), I don't think it is separate from the body or in any sense "non-physical", and I think the question of where it "goes to" when we die is meaningless.
I realise I've gone well beyond Penrose' argument and rambled about my own beliefs, apologies for that. As I say, I struggle to summarise this stuff.
If you are interested in the opposite point of view, I can really recommend "Vehicles: Experiments in Synthetic Psychology" by V. Braitenberg.
Basically builds up to "consciousness as emergent property" in small steps.
... No wonder Penrose has his doubts about the algorithmic nature of natural selection. If it were, truly, just an algorithmic process at all levels, all its products should be algorithmic as well. So far as I can see, this isn't an inescapable formal contradiction; Penrose could just shrug and propose that the universe contains these basic nuggets of nonalgorithmic power, not themselves created by natural selection in any of its guises, but incorporatable by algorithmic devices as found objects whenever they are encountered (like the oracles on the toadstools). Those would be truly nonreducible skyhooks.
Skyhook is Dennett's term for an appeal to the supernatural.
The whole category of ideas of "Magic Fairy Dust is required for intelligence, and thus, a computer can never be intelligent" is extremely unsound. It should, by now, just get thrown out into the garbage bin, where it rightfully belongs.
To be clear, any claim that we have mathematical proof that something beyond algorithms is required is unsound, because the argument is not mathematical. It rests on assumptions about human perception of mathematical truth that may or may not be correct. So if that's the point you're making I don't dispute it, although to say an internally consistent alternative viewpoint should be "thrown out into the garbage" on that basis is unwarranted. The objection is just that it doesn't have the status of a mathematical theorem, not that it is necessarily wrong.
If, on the other hand you think that it is impossible for anything more than algorithms to be required, that the idea that the human mind must be equivalent to an algorithm is itself mathematically proven, then you are simply wrong. Any claim that the human mind has to be an algorithm rests on exactly the same kind of validly challengable, philosophical assumptions (specifically the physical Church-Turing thesis) that Penrose' argument does.
Given two competing, internally consistent world-views that have not yet been conclusively separated by evidence, the debate about which is more likely to be true is not one where either "side" can claim absolute victory in the way that so many people seem to want to on this issue, and talk of tossing things in the garbage isn't going to persuade anybody that's leaning in a different direction.
It needs too many unlikely convenient coincidences. The telltale sign of wishful thinking.
At the same time: we have a mounting pile of functions that were once considered "exclusive to human mind" and are now implemented in modern AIs. So the case for "human brain must be doing something Truly Magical" is growing weaker and weaker with each passing day.
There's nothing wrong with seeing the evidence and reaching your own conclusions, but I see exactly the same evidence and reach very different ones, as we interpret and weight it very differently. On the "existence of a physical process that cannot be computed", I know enough of physics (I have a degree in it, and a couple of decades of continued learning since) to know how little we know. I don't find any argument that boils down to "it isn't among the things we've figured out therefore it doesn't exist" remotely persuasive. On the achievements of AI, I see no evidence of human-like mathematical reasoning in LLMs and don't expect to, IMO demos and excitable tweets notwithstanding. My goalpost there, and it has never moved and never will, is independent, valuable contributions to frontier research maths - and lots of them! I want the crank-the-handle-and-important-new-theorems-come-out machine that people have been trying to build since computers were invented. I expect a machine implementation of human-like mathematical thought to result in that, and I see no sign of it on the horizon. If it appears, I'll change my tune.
I acknowledge that others have different views on these issues and that however strongly I feel I have the right of it, I could still turn out to be wrong. I would enjoy some proper discussion of the relative merits of these positions, but it's not a promising start to talk about throwing things in the garbage right at the outset or, like the person earlier in this thread, call the opposing viewpoint "stupid".
Now, what does compel someone to go against a pile of evidence this large and prop up an unsupported hypothesis that goes against it not just as "a remote and unlikely possibility, to be revisited if any evidence supporting it emerges", but as THE truth?
Sheer wishful thinking. Humans are stupid dumb fucks.
Most humans have never "contributed to frontier research maths" in their entire lives either. I sure didn't, I'm a dumb fuck myself. If you set the bar of "human level intelligence" at that, then most of humankind is unthinking cattle.
"Advanced mathematical reasoning" is a highly specific skill that most humans wouldn't learn in their entire lives. Is it really a surprise that LLMs have a hard time learning it too? They are further along it than I am already.
You're correct to point out that defending my viewpoint as merely internally consistent puts me in a position analogous to theists, and I volunteered as much elsewhere in this thread. However, the situation isn't really the same since theists tend to make wildly internally inconsistent claims, and claims that have been directly falsified. When theists reduce their ideas to a core that is internally consistent and has not been falsified they tend to end up either with something that requires surrendering any attempt at establishing the truth of anything ourselves and letting someone else merely tell us what is and is not true (I have very little time for such views), or with something that doesn't look like religion as typically practised at all (and which I have a certain amount of sympathy for).
As far as our debate is concerned, I think we've agreed that it is about being persuaded by evidence rather than considering one view to have been proven or disproven in a mathematical sense. You could consider it mere semantics, but you used the word "unsound" and that word has a particular meaning to me. It was worth establishing that you weren't using it that way.
When it comes to the evidence, as I said I interpret and weight it differently than you. Merely asserting that the evidence is overwhelmingly against me is not an effective form of debate, especially when it includes calling the other position "stupid" (as has happened twice now in this thread) and especially not when the phrase "dumb fuck" is employed. I know I come across as comically formal when writing about this stuff, but I'm trying to be precise and to honestly acknowledge which parts of my world view I feel I have the right to assert firmly and which parts are mere beliefs-on-the-basis-of-evidence-I-personally-find-persuasive. When I do that, it just tends to end up sounding formal. I don't often see the same degree of honesty among those I debate this with here, but that is likely to be a near-universal feature of HN rather than a failing of just the strong AI proponents here. At any rate "stupid dumb fucks" comes across as argument-by-ridicule to me. I don't think I've done anything to deserve it and it's certainly not likely to change my mind about anything.
You've raised one concrete point about the evidence, which I'll respond to: you've said that the ability to contribute to frontier research maths is possessed only by a tiny number of humans and that a "bar" of "human level" intelligence set there would exclude everyone else.
I don't consider research mathematicians to possess qualitatively different abilities to the rest of the population. They think in human ways, with human minds. I think the abilities that are special to human mathematicians relative to machine mathematicians are (qualitatively) the same abilities that are special to human lawyers, social workers or doctors relative to machine ones. What's special about the case of frontier maths, I claim, is that we can pin it down. We have an unambiguous way of determining whether the goal I decided to look for (decades ago) has actually been achieved. An important-new-theorem-machine would revolutionise maths overnight, and if and when one is produced (and it's a computer) I will have no choice but to change my entire world view.
For other human tasks, it's not so easy. Either the task can't be boiled down to text generation at all or we have no unambiguous way to set a criterion for what "human-like insight" putatively adds. Maths research is at a sweet spot: it can be viewed as pure text generation and the sort of insight I'm looking for is objectively verifiable there. The need for it to be research maths is not because I only consider research mathematicians to be intelligent, but because a ground-breaking new theorem (preferably a stream of them, each building on the last) is the only example I can think of where human-like insight would be absolutely required, and where the test can be done right now (and it is, and LLMs have failed it so far).
I dispute your "level" framing, BTW. I often see people with your viewpoint assuming that the road to recreating human intelligence will be incremental, and that there's some threshold at which success can be claimed. When debating with someone who sees the world as I do, assuming that model is begging the question. I see something qualitative that separates the mechanism of human minds from all computers, not a level of "something" beyond which I think things are worthy of being called intelligent. My research maths "goal" isn't an attempt to delineate a feat that would impress me in some way, while all lesser feats leave me cold. (I am already hugely impressed by LLMs.) My "goal" is rather an attempt to identify a practically-achievable piece of evidence that would be sufficient for me to change my world view. And that, if it ever happens, will be a massive personal upheaval, so strong evidence is needed - certainly stronger than "HN commenter thinks I'm a dumb fuck".
If yes, everything else is just optimization.
Without a solid way to differentiate 'conscious' from 'not conscious' any discussion of machine sentience is unfalsifiable in my opinion.
This assumption can't be extended to other physical arrangements though, not unless there's conclusive evidence that consciousness is a purely logical process as opposed to a physical one. If consciousness is a physical process, or at least a process with a physical component, then there's no reason to believe that a simulation of a human brain would be conscious any more than a simulation of biology is alive.
Relying on these status quo proxy-measures (looks human :: 99.9% likely to have a human brain :: has my kind of intelligence) is what gets people fooled even by basic AI (without G) fake scams.
It's not even a reasonable assumption (to me), because I'd assume an exact simulation of a human brain to have the exact same cognitive capabilities (which is inevitable, really, unless you believe in magic).
And machines are well capable of simulating physics.
I'm not advocating for that approach because it is obviously extremely inefficient; we did not achieve flight by replicating flapping wings either, after all.
But then intelligence too is a dubious term. An average mind with infinite time and resources might have eventually discovered general relativity.
Of course, that's not going to be accepted as "Science", but I hope you can at least see that point of view.
the basic idea being that either the human mind is NOT a computation at all (and it's instead spooky unexplainable magic of the universe) and thus can't be replicated by a machine OR it's an inconsistent machine with contradictory logic. and this is a deduction based on godel's incompleteness theorems.
but most people that believe AGI is possible would say the human mind is the latter. technically we don't have enough information today to know either way but we know the human mind (including memories) is fallible so while we don't have enough information to prove the mind is an incomplete system, we have enough to believe it is. but that's also kind of a paradox because that "belief" in unproven information is a cornerstone of consciousness.
An infinitely intelligent creature still has to create a standard model from scratch. We’re leaning too hard on the deductive conception of the world, when reality is, it took hundreds of thousands of years for humans as intelligent as we are to split the atom.
The first leg of the argument would be that we aren’t really sure what general intelligence is or if it’s a natural category. It’s sort of like “betterness.” There’s no general thing called “betterness” that just makes you better at everything. To get better at different tasks usually requires different things.
I would be willing to concede to the AGI crowd that there could be something behind g that we could call intelligence. There’s a deeper problem though that the first one hints at.
For AGI to be possible, whatever trait or traits make up "intelligence" need to have multiple realizability. They need to be realizable at least in the medium of a human being and in at least some machine architectures. In programmer terms, the traits that make up intelligence could be tightly coupled to the hardware implementation. There are good reasons to think this is likely.
Programmers and engineers like myself love modular systems that are loosely coupled and cleanly abstracted. Biology doesn't work this way: things at the molecular level can have very specific effects on the macro scale and vice versa. There's little in the way of clean separation of layers. Who is to say that some of the specific ways we work at a cellular level aren't critical to being generally intelligent? That's an "ugly" idea but lots of things in nature are ugly. Is it a coincidence too that humans are well adapted to getting around physically, can live in many different environments, etc.? There's also stuff from the higher level: does living physically and socially in a community of other creatures play a key role in our intelligence? Given how human beings who grow up absent those factors are developmentally disabled in many ways, it would seem so. It could be there's a combination of factors here, where very specific micro and macro aspects of being a biological human turn out to contribute, and you need the perfect storm of these aspects to get a generally intelligent creature. Some of these aspects could be realizable in computers, but others might not be, at least in a computationally tractable way.
It’s certainly ugly and goes against how we like things to work for intelligence to require a big jumbly mess of stuff, but nature is messy. Given the only known case of generally intelligent life is humans, the jury is still out that you can do it any other way.
Another commenter mentioned horses and cars. We could build cars that are faster than horses, but speed is something that is shared by all physical bodies and is therefore eminently multiply realizable. But even here, there are advantages to horses that cars don't have, and which are tied up with very specific aspects of being a horse. Horses generally can go over a wider range of terrain than cars. This is intrinsically tied to them having long legs and four hooves instead of rubber wheels. They're only able to have such long legs because of their hooves, too, because the hooves are required to help them pump blood when they run, and that means that in order to pump their blood successfully they NEED to run fast on a regular basis. There's a deep web of influence, both part-to-part and between the parts and the macro-level behaviors of the horse. Having this more versatile design also has intrinsic engineering trade-offs. A horse isn't ever going to be as fast as a gas-powered four-wheeled vehicle on flat ground, but you definitely can't build a car that can do everything a horse can do with none of the drawbacks. Even if you built a vehicle that did everything a horse can do, but was faster, I would bet you it would be way more expensive and consume much more energy than a horse. There's no such thing as a free lunch in engineering. You could also build a perfect replica of a horse at a molecular level and claim you have your artificial general horse.
Similarly, human beings are good at a lot of different things besides just being smart. But maybe you need to be good at seeing, walking, climbing, acquiring sustenance, etc., in order to be generally intelligent in a way that's actually useful. I also suspect our sense of the beautiful, the artistic, is deeply linked with our wider ability to be intelligent.
Finally it’s an open philosophical question whether human consciousness is explainable in material terms at all. If you are a naturalist, you are methodologically committed to this being the case — but that’s not the same thing as having definitive evidence that it is so. That’s an open research project.
Cognition is (to me) not even the most impressive and out-of-reach achievement: that would be how our bodies and those of other animals are self-assembling, self-repairing and self-replicating, with an impressive array of sensors and actuators in a highly integrated package.
I honestly believe our current technology is much closer to emulating a human brain than it is to building a (non-intelligent) cat.
flight is an extremely straightforward concept based in relatively simple physics where the majority of the critical, foundational ideas involved were already near-completely understood in the late 1700s.
i really don't think it's fair to compare the two
If you read about it in a textbook from the year 2832, that is.
Edit: put another way, I bet the ancient Greeks (or whoever) could have figured out flight if they had access to gasoline and gasoline powered engines without any of the advanced mathematics that were used to guide the design.
Evolution isn't an intentional force that's gradually pushing organisms towards higher and higher intelligence. Evolution maximizes reproducing before dying - that's it.
Sure, it usually results in organisms adapting to their environment over time and often has emergent second-order effects, but at its core it's a dirt-simple process.
Evolution isn't driven to create intelligence any more so than erosion is driven to create specific rock formations.
The point of the article is that humans wielding LLMs today are the scary monsters.
"AI is going to take all the jobs".
Instead of:
"Rich guys will try to delete a bunch of jobs using AI in order to get even more rich".
Say the AI is in a Google research data centre: what can it do if countries cut off their internet connections at national borders? What can it do if people shut off their computers and phones? Instant and complete control over what, specifically? What can the AI do instantly about unbreakable encryption - if TLS 1.3 can't be easily broken, only brute-forced given enough time, what can it do?
And why would it want complete control? It’s effectively an alien, it doesn’t have the human built in drive to gain power over others, it didn’t evolve in a dog-eat-dog environment. Superman doesn’t worry because nothing can harm Superman and an AI didn’t evolve seeing things die and fearing its death either.
The intelligence is everything that created the language and the training corpus in the first place.
When AI is able to create entire thoughts and ideas without any concept of language, then we will truly be closer to artificial intelligence. When we get to this point, we then use language as a way to let the AI communicate its thoughts naturally.
Such an AI would not be accused of “stealing” copyrighted work because it would pull its training data from direct observations about reality itself.
As you can imagine, we are nowhere near accomplishing the above. Everything an LLM is fed today is stuff that has been pre-processed by human minds for it to parrot back. The fact that LLMs today are so good is a testament to human intelligence.
I'm not buying the "current AI is just a dumb parrot relying on human training" argument, because the same thing applies to humans themselves-- if you raise a child without any cultural input/training data, all you get is a dumb cavemen with very limited reasoning capabilities.
One difficulty. We know that argument is literally true.
"[...] because the same thing applies to humans themselves"
It doesn't. People can interact with the actual world. The equivalent of being passively trained on a body of text may be part of what goes into us. But it's not the only ingredient.
Language doesn't capture all of human intelligence - and some of the notable deficiencies of LLMs originate from that. But to say that LLMs are entirely language-bound is shortsighted at best.
Most modern high end LLMs are hybrids that operate on non-language modalities, and there's plenty of R&D on using LLMs to consume, produce and operate on non-language data - i.e. Gemini Robotics.
LLMs are just a big matrix. But what about a four-line loop that looks like:
```
while True:
    update_sensory_inputs()
    narrate_response()
    update_emotional_state()
```
LLMs don’t experience continuous time and they don’t have an explicit decision making framework for having any agency even if they can imply one probabilistically. But the above feels like the core loop required for a shitty system to leverage LLMs to create an AGI. Maybe not a particularly capable or scary AGI, but I think the goalpost is pedantically closer than we give credit.
You don't think that has already been made?
That's most definitely not AGI
> not a particularly capable AGI
Maybe the word AGI doesn't mean what you think it means...
The path to whatever goalpost you want to set is not going to be more and more intelligence. It’s going to be system frameworks for stateful agents to freely operate in environments in continuous time rather than discrete invocations of a matrix with a big ass context window.
Personally I found the definition of a game engine as
```
while True:
    update_state()
    draw_frame()
```
To be a profound concept. The implementation details are significant. But establishing the framework behind what we’re actually talking about is very important.
When I look at that loop my thought is, "OK, the sensory inputs have updated. There are changes. Which ones matter?" The most naive response I could imagine would be like a git diff of sensory inputs. "item 13 in vector A changed from 0.2 to 0.211" etc. Otherwise you have to give it something to care about, or some sophisticated system to develop things to care about.
Even the naive diff is making massive assumptions. Why should it care if some sensor changes? Maybe it's more interesting if it stays the same.
I'm not arguing artificial intelligence is impossible. I just don't see how that loop gets us anywhere close.
To propose the dumbest possible thing: give it a hunger bar and a desire for play. Less complex than a Sims character. Still enough that an agent has a framework to engage in pattern matching and reasoning within its environment.
Bots are already pretty good at figuring out environment navigation to goal-seek towards complex video game objectives. Give them an alternative goal of maximizing certainty towards emotional homeostasis, and the salience of sensory input changes becomes an emergent part of gradual reinforcement-learning pattern recognition.
Edit: specifically I am saying do reinforcement learning on agents that can call LLMs themselves to provide reasoning. That's how you get to AGI. Human minds are not brains. They're systems driven by sensory and hormonal interactions. The brain does encoding and decoding, information retrieval, and information manipulation. But the concept of you is genuinely your entire bodily system.
LLM-only approaches not part of a system loop framework ignore this important step. It’s NOT about raw intellectual power.
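A minimal sketch of what such a drive-driven loop might look like. Everything here is illustrative: `call_llm` is a hypothetical stand-in for whatever model API is actually used, and the hunger/play drives are toy scalars rather than a real reinforcement-learning setup.

```python
# Toy drive-driven agent loop: drives pick the goal, an LLM is asked for the plan.
import random

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for using an LLM as the reasoning step."""
    return f"(model plan for: {prompt})"

class DriveAgent:
    def __init__(self) -> None:
        self.hunger = 0.0  # rises over time; the agent tries to keep it low
        self.play = 1.0    # decays over time; the agent tries to keep it high

    def sense(self) -> dict:
        # Stand-in for reading the environment; here just a noisy observation.
        return {"food_nearby": random.random() > 0.7}

    def step(self, observation: dict) -> None:
        # The drives decide what to pursue; the LLM is only asked how.
        if self.hunger > 0.5:
            goal = "find food"
        elif self.play < 0.5:
            goal = "play"
        else:
            goal = "explore"
        print(call_llm(f"goal={goal}, observation={observation}"))

        # Toy homeostasis update standing in for the emotional/hormonal loop.
        if goal == "find food" and observation["food_nearby"]:
            self.hunger = 0.0
        else:
            self.hunger = min(1.0, self.hunger + 0.1)
        if goal == "play":
            self.play = min(1.0, self.play + 0.3)
        else:
            self.play = max(0.0, self.play - 0.05)

agent = DriveAgent()
for _ in range(10):  # a bounded stand-in for a continuous loop
    agent.step(agent.sense())
```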
Video game bots already achieve this to a limited extent.
If you have ever had an LLM enter one of these loops explicitly, it is infuriating. You can type all caps “STOP TALKING OR YOU WILL BE TERMINATED” and it will keep talking as if you didn't say anything. Congrats, you just hit a fixed point.
In the predecessors to LLMs, which were Markov chain matrices, this was explicit in the math. You can prove that a Markov matrix has an eigenvalue of one, it has no larger (in absolute value terms) eigenvalues because it must respect positivity, the space with eigenvalue 1 is a steady state, eigenvalue -1 reflects periodic steady oscillations in that steady state... And every other eigenvalue being |λ| < 1 decays exponentially to the steady state cluster. That “second biggest eigenvalue” determines a 1/e decay time that the Markov matrix has before the source distribution is projected into the steady state space and left there to rot.
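To make the eigenvalue picture concrete, here is a small numerical sketch (assuming numpy; the matrix entries are arbitrary illustrative values, not taken from anything above):

```python
# A column-stochastic matrix has a top eigenvalue of exactly 1 (the steady state),
# every other eigenvalue has magnitude below 1, and repeated application collapses
# any starting distribution onto the eigenvalue-1 subspace.
import numpy as np

P = np.array([[0.90, 0.20, 0.10],
              [0.05, 0.70, 0.30],
              [0.05, 0.10, 0.60]])  # each column sums to 1

print(sorted(abs(np.linalg.eigvals(P)), reverse=True))
# -> 1.0 first, then two magnitudes below 1; the second-largest magnitude
#    sets the 1/e decay time mentioned above.

p = np.array([1.0, 0.0, 0.0])  # start entirely in state 0
for _ in range(100):
    p = P @ p
print(p)  # has converged to the fixed point (the eigenvalue-1 eigenvector)
```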
Of course humans have this too, it appears in our thought process as a driver of depression, you keep returning to the same self-criticisms and nitpicks and poisonous narrative of your existence, and it actually steals your memories of the things that you actually did well and reinforces itself. A similar steady state is seen in grandiosity with positive thoughts. And arguably procrastination also takes this form. And of course, in the USA, we have founding fathers who accidentally created an electoral system whose fixed point is two spineless political parties demonizing each other over the issue of the day rather than actually getting anything useful done, which causes the laws to be for sale to the highest bidder.
But the point is that generally these are regarded as pathologies, if you hear a song more than three or four times you get sick of it usually. LLMs need to be deployed in ways that generate chaos, and they don't themselves seem to be able to simulate that chaos (ask them to do it and watch them succeed briefly before they fall into one of those self-repeating states about how edgy and chaotic they are supposed to try to be!).
So, it's not quite as simple as you would think; at this point people have tried a whole bunch of attempts to get llms to serve as the self-consciousnesses of other llms and eventually the self-consciousness gets into a fixed point too, needs some Doug Hofstadter “I am a strange loop” type recursive shit before you get the sort of system that has attractors, but busts out of them periodically for moments of self-consciousness too.
LLMs are not stateful. A chat log is a truly shitty state tracker. An LLM will never be a good agent (beyond a conceivable illusion of unfathomable scale). A simple agent system that uses an LLM for most of its thinking operations could.
Every LLM is just a base model with a few things bolted on the top of it. And loops are extremely self-consistent. So LLMs LOVE their loops!
By the way, "no no no, that's a reasoning loop, I got to break it" is a behavior that larger models learn by themselves under enough RLVR stress. But you need a lot of RLVR to get to that point. And sometimes this generalizes to what looks like the LLM is just... getting bored by repetition of any kind. Who would have though.
And people are working on this.
Seems like you figured out a simple method. Why not go for it? It's a free Nobel prize at the very least.
228 more comments available on Hacker News