WorldGrow: Generating Infinite 3D World
Posted 2 months ago · Active 2 months ago
Source: github.com
Key topics
Procedural Generation
3D World Generation
AI Research
Game Development
The WorldGrow project generates infinite 3D worlds using AI, sparking discussions on its potential applications, limitations, and comparisons to traditional procedural generation methods.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 1h after posting
Peak period: 22 comments (2-4h)
Avg / period: 6.3
Comment distribution: 50 data points (based on 50 loaded comments)
Key moments
1. Story posted: Oct 27, 2025 at 5:31 AM EDT (2 months ago)
2. First comment: Oct 27, 2025 at 6:36 AM EDT (1h after posting)
3. Peak activity: 22 comments in 2-4h, the hottest window of the conversation
4. Latest activity: Oct 28, 2025 at 12:07 PM EDT (2 months ago)
ID: 45718908 · Type: story · Last synced: 11/20/2025, 3:47:06 PM
> The code is being prepared for public release; pretrained weights and full training/inference pipelines are planned.
Any ideas of how it would be different from and better than "traditional" PCG? Seems like it'd give you more resource consumption, worse results, and less control, none of which seems like a benefit.
> We tackle the challenge of generating the infinitely extendable 3D world — large, continuous environments with coherent geometry and realistic appearance. Existing methods face key challenges: 2D-lifting approaches suffer from geometric and appearance inconsistencies across views, 3D implicit representations are hard to scale up, and current 3D foundation models are mostly object-centric, limiting their applicability to scene-level generation. Our key insight is leveraging strong generation priors from pre-trained 3D models for structured scene block generation. To this end, we propose WorldGrow, a hierarchical framework for unbounded 3D scene synthesis. Our method features three core components: (1) a data curation pipeline that extracts high-quality scene blocks for training, making the 3D structured latent representations suitable for scene generation; (2) a 3D block inpainting mechanism that enables context-aware scene extension; and (3) a coarse-to-fine generation strategy that ensures both global layout plausibility and local geometric/textural fidelity. Evaluated on the large-scale 3D-FRONT dataset, WorldGrow achieves SOTA performance in geometry reconstruction, while uniquely supporting infinite scene generation with photorealistic and structurally consistent outputs. These results highlight its capability for constructing large-scale virtual environments and potential for building future world models.
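The "3D block inpainting" extension step described in the abstract can be pictured as an outward growth loop: each new scene block is generated conditioned on whichever neighbor blocks already exist. A minimal sketch, where the hypothetical `generate_block(neighbors)` call stands in for the paper's actual generative model:

```python
from collections import deque

def grow_scene(generate_block, radius):
    """Greedy outward growth: each new block is generated conditioned on
    its already-placed neighbors (a stand-in for context-aware block
    inpainting). `generate_block(neighbors)` is a hypothetical model call
    returning a block given its generated neighbor blocks."""
    scene = {(0, 0): generate_block({})}          # seed block at the origin
    frontier = deque([(0, 0)])
    while frontier:
        x, y = frontier.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if max(abs(nx), abs(ny)) > radius or (nx, ny) in scene:
                continue
            # condition generation on whichever neighbors already exist
            context = {(dx, dy): scene[(nx + dx, ny + dy)]
                       for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                       if (nx + dx, ny + dy) in scene}
            scene[(nx, ny)] = generate_block(context)
            frontier.append((nx, ny))
    return scene

# toy "model": a block is just the count of conditioning neighbors
world = grow_scene(lambda ctx: len(ctx), radius=2)
print(len(world))  # 25 blocks in a 5x5 patch
```

Because growth is frontier-based, the loop never terminates on its own; the `radius` cap is what bounds it here, whereas an "infinite" generator would simply expand lazily as the camera moves.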
It's about generating interesting virtual space!
I know 'interesting' is subjective, but your comment is demonstrably false. Just type "mario 64 staircase" into youtube, and look at the hundreds (thousands? millions?) of videos and many millions of views.
Redefining “interesting” just so you can provide a completely irrelevant “correction” is bad faith trolling.
There's no secret formula to culture. Some programmers and AI people seem to think there is some magic AI model that will be able to produce cultural hits at the click of a button. If you're a boring person, you're not likely to "get" why something is interesting, or why that part can't just be automated away. No technology can help with that.
Minecraft is of course the poster child for very large worlds of interest these days.
Dwarf Fortress crafts an entire continent complete with a multi-century history, the results of which you can explore freely in adventure mode.
Most of the recent examples of 3D worlds like the post tend to do it through wave function collapse.
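For readers unfamiliar with the technique: wave function collapse keeps a set of possible tiles per cell, repeatedly collapses the lowest-entropy cell to a single choice, and propagates adjacency constraints to its neighbors. A minimal 1D sketch with a made-up sea/coast/land tileset (real implementations work in 2D/3D and must handle contradictions):

```python
import random

# Adjacency table: sea may touch sea/coast, coast touches anything,
# land touches land/coast. (Illustrative tileset, not from the paper.)
ALLOWED = {"sea": {"sea", "coast"},
           "coast": {"sea", "coast", "land"},
           "land": {"coast", "land"}}

def collapse(n, seed=0):
    rng = random.Random(seed)
    cells = [set(ALLOWED) for _ in range(n)]       # every tile starts possible
    while any(len(c) > 1 for c in cells):
        # pick an uncollapsed cell with the fewest remaining options
        i = min((j for j, c in enumerate(cells) if len(c) > 1),
                key=lambda j: len(cells[j]))
        cells[i] = {rng.choice(sorted(cells[i]))}  # collapse it
        changed = True                             # propagate constraints
        while changed:
            changed = False
            for j in range(n):
                for k in (j - 1, j + 1):
                    if 0 <= k < n:
                        ok = {t for t in cells[j]
                              if any(t in ALLOWED[u] for u in cells[k])}
                        if ok != cells[j]:
                            cells[j], changed = ok, True
    return [c.pop() for c in cells]

strip = collapse(12)
print(strip)  # every adjacent pair respects the adjacency table
```

With this particular tileset a 1D strip can never reach a contradiction, which is why the sketch needs no backtracking; 2D/3D versions generally do.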
Minecraft used to create very interesting worlds until they changed the algorithm and the landscapes became plain and boring. It took them about 10 years, until the Caves and Cliffs Update, to make the world generation interesting again.
Consider the patterns generated by cellular automata.
Both tend to stay interesting in the small scale but lose it to boring chaos in the large.
For this reason I think the better approach is to start with a simple level-scale form and then refine it into smaller parts, and then to refine those parts and so on.
(Vs plugging away at tunnel-building like a mole)
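The "interesting in the small, chaotic in the large" behavior of cellular automata is easy to see in the classic CA cave generator: random noise plus a few passes of the "4-5" smoothing rule (a cell becomes wall if 5 or more of its 9-cell neighborhood are walls). A minimal sketch:

```python
import random

def cave_ca(w, h, fill=0.45, steps=4, seed=1):
    """Classic cellular-automata cave generator: seed with random noise,
    then smooth with the '4-5 rule'. Produces nice local texture but no
    global structure, matching the comment above."""
    rng = random.Random(seed)
    grid = [[rng.random() < fill for _ in range(w)] for _ in range(h)]
    for _ in range(steps):
        nxt = [[False] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                # count walls in the 3x3 neighborhood (wrapping at edges)
                walls = sum(grid[(y + dy) % h][(x + dx) % w]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
                nxt[y][x] = walls >= 5
        grid = nxt
    return grid

cave = cave_ca(40, 12)
for row in cave:
    print("".join("#" if c else "." for c in row))
```

Zoomed in, the blobs read as plausible caverns; tile many of these side by side and the result is exactly the structureless large-scale noise the comment describes.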
I think that's a good way to put it. I started writing a reply before reading your comment entirely and arrived at basically the same conclusion as this but more verbosely:
> For this reason I think the better approach is to start with a simple level-scale form and then refine it into smaller parts, and then to refine those parts and so on.
It seems hard to get away from having some sort of overarching goal, and then constantly looking back at it, at progressively smaller levels. Like, what is the universe of the thing you are generating randomly? Is it a dungeon in a roguelike? Is it meant to be one of many floors? Or is it a space inside a building? Is it a house? Is it an office? Is the office a standalone building or a skyscraper?
Perhaps a good algorithm would start big and go small.
Or maybe instead of looking back you could pre-divide into zones. But then, if you want to make an entire universe (as in multiple worlds), you need to either make random worlds, which leads back to your original problem (boring chaos at large scale), or go up another level and generate more intelligently.
Point being, you need some sort of top down perspective on it.
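The top-down, big-to-small refinement these comments describe is essentially BSP-style recursive subdivision: decide the global layout first, then refine each region. A minimal sketch (function name and parameters are illustrative):

```python
import random

def subdivide(x, y, w, h, min_size=4, rng=None):
    """Top-down refinement: start from the whole level and recursively
    split regions along the longer axis, so global layout is fixed before
    local detail (vs bottom-up tunnel-digging)."""
    rng = rng or random.Random(42)
    if w < 2 * min_size and h < 2 * min_size:
        return [(x, y, w, h)]                     # leaf region -> one room
    if w >= h:                                    # split along the longer axis
        cut = rng.randint(min_size, w - min_size)
        return (subdivide(x, y, cut, h, min_size, rng) +
                subdivide(x + cut, y, w - cut, h, min_size, rng))
    cut = rng.randint(min_size, h - min_size)
    return (subdivide(x, y, w, cut, min_size, rng) +
            subdivide(x, y + cut, w, h - cut, min_size, rng))

rooms = subdivide(0, 0, 32, 20)
# the leaf regions tile the level exactly, with no gaps or overlaps
assert sum(w * h for _, _, w, h in rooms) == 32 * 20
print(len(rooms), "rooms")
```

Each recursion level corresponds to one of the progressively smaller "look back at the goal" passes described above: the outermost split is the level-scale form, the leaves are the individual rooms.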
http://fleen.org
https://www.flickr.com/photos/jonathanmccabe/albums/72157622...
The levels are made to fit in an 80x24 terminal, with a max of maybe 7 or 8 rooms per level (can't remember exactly).
The worlds from Cataclysm DDA:Bright Nights are pretty regular, and you have an overworld, labs, subways...
[1] https://www.challies.com/articles/no-mans-sky-and-10000-bowl...
Once you build a base or create some goal for yourself, it becomes interesting.
Maybe the idea is to create environments for AI robotics training.
And I think Valve used to have a series on level design, involving going from big to small and "anchor points", but I seem to have misplaced the link.
I've dreamed of a NeRF-powered backrooms walking simulator for quite a while now. This approach is "worse" because the mesh seems explicit rather than just the world becoming what you look at, but that's arguably better for real-world use cases of course.
True, it sounds (and looks) a lot like https://scp-wiki.wikidot.com/scp-3008
Their block-by-block generation method seems to be too local in its considerations: each 3x3 section (i.e. the ones generated based on immediate neighbors) looks a lot more coherent than the 4x4 sections and above. I think it might need to be extended to be less local, and might also need to be paired with some sort of guidance system (one that, e.g. in the office example, would generate the overall floor layout).
[1]: https://www.youtube.com/watch?v=7ffT_8wViBA