"our Research Is Greatly Sped Up by AI but AI Still Needs Us"
Key topics
"I crossed an interesting threshold yesterday, which I think many other mathematicians have been crossing recently as well. In the middle of trying to prove a result, I identified a statement that looked true and that would, if true, be useful to me.
"Instead of trying to prove it, I asked GPT5 about it, and in about 20 seconds received a proof. The proof relied on a lemma that I had not heard of (the statement was a bit outside my main areas), so although I am confident I'd have got there in the end.
"the time it would have taken me would probably have been of order of magnitude an hour (an estimate that comes with quite wide error bars). So it looks as though we have entered the brief but enjoyable era where our research is greatly sped up by AI but AI still needs us.
"PS In case anyone's worried that it used a lemma I hadn't heard of, I checked that the lemma was not a hallucination."
The post discusses how AI has accelerated research, but comments debate the actual impact of AI on mathematical research and raise concerns about the role of intent and purpose in intelligence.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 3h after posting
- Peak period: 32 comments (Day 9)
- Avg / period: 8.6
Based on 43 loaded comments
Key moments
- Story posted: Oct 31, 2025 at 8:32 PM EDT (2 months ago)
- First comment: Oct 31, 2025 at 11:05 PM EDT (3h after posting)
- Peak activity: 32 comments in Day 9 (hottest window of the conversation)
- Latest activity: Nov 11, 2025 at 3:29 AM EST (about 2 months ago)
"Get out an English thesaurus and recreate Mona Lisa in different words."
If you really want to be a cognitive maverick, you would encourage them to make up their own creole, syntax and semantics.
Still, the result is describing the same shared stable bubble of spacetime! But it's a grander feat than merely swapping words with others of the same relative meaning.
You totally missed the point of "put this in your own words" education. It was to make us aware we're just transpiling the same old ideas/semantics into different syntax.
Sure, it provides a nice biochemical bump; but it's not breaking new ground.
It also significantly changes my current job into something I didn't sign up for.
Or at least my school system tried to (Netherlands).
This didn’t fully come out of the blue. We have been told to expect the unexpected.
It absolutely did. Five years ago people would have told you that white-collar jobs were mostly un-automatable and that software engineering was especially safe due to its complexity.
> We have been told to expect the unexpected.
But this didn't.
What happened is unexpected. And we've been told to expect that.
I understand that that's very broad, but the older people teaching me had a sense of how fast technology was accelerating. They didn't have a TV back in the day. They knew that work would change fast and that the demands of work would change fast.
The way I was taught at school was to actually be that wary and cautious. You need to pivot, switch, and upskill fast, again and again.
What are humans better at than AI? So far, the main thing I've noticed is being connected to other humans. So I'm currently at a job that pivots more towards that. I'm a data analyst + software engineer hybrid at the moment.
I'm not complaining in order to stop this; I'm sure it won't be stopped. I'm explaining why some people who work for a living don't like this technology.
I'm honestly not sure why others do. It pretty much doesn't matter what work you do for a living: if this technology can replace a non-negligible part of the white-collar workforce, it will have negative consequences for you. You don't have to like that just because you can't stop it.
To me it's like a halfway step toward management. When you start being a manager, you also start writing less code and having a lot more conversations.
I didn't want to get into management, because it's boring. Now I got forced into management and don't even get paid more.
That's certainly not the reason most HNers are giving - I'm seeing far more claims that LLMs are entirely meaningless because either "they cannot make something they haven't seen before" or "half the time they hallucinate". The latter even appears as one of the first replies in this post's link, the X thread!
> _brief_ but enjoyable era
In the past this tradeoff probably was obvious: a farmer's individual fulfillment is less important than feeding a starving community.
I'm not so sure this tradeoff is obvious now. Will the increased productivity justify the loss of meaning and fulfillment that comes from robbing most jobs of autonomy and dignity? Will we become humans that have literally everything we need except the ability for self-actualization?
All of these seem to subscribe to "inevitability", and to have no issue with the fact that their research relies on a handful of oligarchs and that all of their thoughts and attempts are recorded and tracked on centralized servers.
I bet mathematical research hasn't sped up one bit due to "AI".
Whenever you start to prove new results, you get a lot of small lemmas that are probably true, but you need to check them and find a good constant that works with them.
Checking can be done by theorem provers, and searching by machines. You still need to figure out what you want to prove (which results are more important).
But the rest can get automated away quite quickly.
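A minimal sketch of what the "checking" half looks like in practice, assuming Lean 4 with Mathlib (the lemma here is a toy stand-in, not one from the thread):

    import Mathlib.Tactic

    -- A toy lemma of the "probably true, still needs checking" kind:
    -- for every natural number n, n ≤ 2 * n.
    -- The omega tactic verifies the linear-arithmetic goal mechanically,
    -- so the human only has to state the claim, not grind out the proof.
    example (n : ℕ) : n ≤ 2 * n := by omega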
Compiled databases and search engines have completely different capabilities than groups of people.
What is the broader context of OP trying to prove a theorem here? There are multiple layers of purpose and intent involved (deriving the satisfaction of proving a result, continuing to publish and keep a job, keeping the university department competitive, etc.), but they all end up pointing at humans.
Computers aren’t going to be spinning in the background proving theorems just because. They will do so because humans intend for them to, in service of their own purposes.
In any discussion about AI surpassing humans in skills/intelligence, the chief concern should be in service of whom.
Tech leaders (ie the people controlling the computers on which the AIs run) like to say that this is for the benefit of all humanity, and that the rewards will be evenly distributed; but the rewards aren’t evenly distributed today, and the benefits are in the hands of a select few; why should that change at their hands?
If AI is successful to the extent that pundits predict/desire, it will likely be accompanied by an uprising of human workers that will make past uprisings (you know, the ones that banned child labor and gave us paid holidays) look like child's play in comparison.
Well that's comforting.