Using AI to Perceive the Universe in Greater Depth
Posted 4 months ago · Active 4 months ago
deepmind.google · Science · story
calm · mixed
Debate: 60/100
Key topics
AI in Scientific Research
Machine Learning Applications
DeepMind
The article discusses DeepMind's use of AI in scientific research, sparking a discussion on the differences between AI applications in research and consumer-focused areas, as well as the funding and priorities in the field.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
First comment: 3h after posting
Peak period: 9 comments in 9-12h
Avg / period: 3.7
Comment distribution: 22 data points
Based on 22 loaded comments
Key moments
- 01 Story posted: Sep 4, 2025 at 10:35 PM EDT (4 months ago)
- 02 First comment: Sep 5, 2025 at 1:33 AM EDT (3h after posting)
- 03 Peak activity: 9 comments in 9-12h, the hottest window of the conversation
- 04 Latest activity: Sep 6, 2025 at 2:21 PM EDT (4 months ago)
ID: 45134489 · Type: story · Last synced: 11/20/2025, 3:32:02 PM
Want the full context? Read the primary article or dive into the live Hacker News thread when you're ready.
To quote their purpose:
>The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.
While you may argue it is not intelligent, it is certainly AI, which covers anything in the last 70 years utilizing a machine that could be considered an incremental step towards simulating intelligence and learning.
This is "it's just an engineering problem, we just have to follow the roadmap", except the roadmap is illegible and the incremental steps noodle around and lead somewhere else.
No, this is "it's a science problem". All this:
> except the roadmap is illegible and the incremental steps noodle around and lead somewhere else.
is what makes it science rather than engineering.
From the outside, though, it is tough to tell whether somebody is doing proper science or just producing nonsense; following a hunch or an intuition can look like nonsense to an observer.
Second, I'm not sure what you are saying exactly: do you think "experiments in cold fusion in a test tube" are a step forward for science? Would a serious scientist believe that?
As I said, playing at science and doing proper science are two entirely different things, but they are hard to distinguish from the outside.
Leaving money out of it, my point is that they weren't doing fusion, they were doing fusion research. Their device was intended for fusion, but it was not a working fusion device. Similarly, the software of AI researchers is not working AI software, and they are not doing AI, apart from the semantic shift by which we now call it AI anyway and coined the term AGI to cover the former meaning.
It's not correct to say that an experiment, with the intent of finding out how to do a thing, is equal to the goal. It's a step.
Calling it "incremental" is misleading, since all steps are incremental: assuming you're doggedly determined and exit blind alleys and circles, you will eventually arrive, if the destination exists. But "incremental" suggests you know the distance, or at least can put a bound on it, and know in some sense which way to go. Like the whole thing was planned.
So saying that AI "is anything in the last 70 years utilizing a machine that could be considered an incremental step towards [AI]" is misleading in both those ways. The process is not the goal, and the goal is not being approached at a known rate.
Several colleagues of mine have had to switch out of scientific machine learning as a discipline because the funding just isn't there anymore. All the money is in generic LLM research and in generating slightly better pictures.
1 more comment available on Hacker News