Deep Researcher with Test-Time Diffusion
Posted 3 months ago · Active 3 months ago
research.google · Tech · story
Sentiment: calm, positive
Debate: 40/100
Key topics
AI Research
Diffusion Models
Natural Language Processing
Google researchers introduce 'Deep Researcher' that uses test-time diffusion to improve output quality, sparking discussion on its methodology and potential applications.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion
First comment: 4d after posting
Peak period: 11 comments in 90-96h
Avg / period: 6.5
Comment distribution: 13 data points (based on 13 loaded comments)
Key moments
- 01 Story posted: Sep 20, 2025 at 12:26 PM EDT (3 months ago)
- 02 First comment: Sep 24, 2025 at 6:51 AM EDT (4d after posting)
- 03 Peak activity: 11 comments in 90-96h (hottest window of the conversation)
- 04 Latest activity: Sep 24, 2025 at 2:59 PM EDT (3 months ago)
ID: 45314752 · Type: story · Last synced: 11/20/2025, 1:39:00 PM
I’d like to try it, but I just learned I need an Enterprise Agentic subscription of some sort from Google; no idea how much that costs.
That said, this seems like a real abuse of the term diffusion, as far as I can tell. I don’t think this thing is reversing any entropy on any latent space.
But I don't see how this Deep Researcher actually uses diffusion at all. It seems wrong to say "test-time diffusion" just because you liken an early text draft to noise in a diffusion model and then use RAG to retrieve a potentially more polished version of that draft.