WorldGen – Text to Immersive 3D Worlds
Key topics: generative AI, 3D world generation, Meta, AI research
Key moments
- Story posted: Nov 22, 2025 at 4:20 PM EST
- First comment: Nov 22, 2025 at 4:41 PM EST (21m after posting)
- Peak activity: 18 comments in Hour 1, the hottest window of the conversation
- Latest activity: Nov 23, 2025 at 2:45 PM EST
Still, it's a first effort. I do think AI can really help with world creation, which I think is one of the biggest barriers to the metaverse. When you see how much time and money it costs to create even a small island world like GTA's...
Then, let's say people are allowed to participate in a metaverse in which they have the ability to generate content with prompts. Does this mean they're only able to build things the model allows or supports? That seems very limiting for a metaverse.
Minecraft makes it easy by using big blocks, but you can't have detail that way and it's very Lego-like. VRChat requires very detailed Unity knowledge. You really need to be a developer for that.
Horizons has its own builder in world but it's kinda boring because it's limited. I think this is where AI can come in, to realise people's vision where they lack the skills to develop it themselves. As a helper tool, not the only means of generation.
But I suppose AI could in theory reach the point where it understands the story/theme and gameplay of a game while designing a world.
But when anyone can generate a huge open world, who really cares? It's the same as it is now: you've got to make something that sticks out from the crowd, something notable.
But it can be a tool for people with great imagination but not the technical skills to make it real.
Every time we talk about AI people think it will be used only as an easy mode A-Z creator. That's possible but creates boring output. I view it more as a tool to assist in the difficult and tedious parts of content creation. So the designer can focus on the experience and not tweaking the little things.
But, it looks like WorldGen has that slightly soulless art style they used for that Meta Zuckverse VR thing they tried for a while.
I have done this in the early GPT days with 'Tales of Maj'eyal' and to a lesser extent RimWorld.
It works great for games that have huge compendiums of world lore, bestiaries, etc.
You can explore, but is there a single interesting thing to find? Games have been playing with procgen forever and if we've learned anything it's that procgen systems are the least interesting part of any game which has them, they're at best just set-dressing for the content that isn't procgen.
https://www.challies.com/articles/no-mans-sky-and-10000-bowl...
I know progress happens in incremental steps, but this seems like quite the baby step from other world gen demos unless I’m missing something.
That's 95% of existing video games. How many doors actually work in a game like Cyberpunk?
On a different note, when do we mere mortals get to play with a worldgen engine? Google/Meta/Tencent have shown them off for a while, but without any real feasible way for a nobody to partake; are they that far away from actually being good?
The quality is currently not great and they are very hard to steer / work with in any meaningful way. You will see companies using the same demo scenes repeatedly because that's the one that looked cool and worked well.
There are also plenty of games with fully explorable environments; I think it's more of a scale and utility consideration. I can't think of what use I'd have for exploring an office complex in GTA other than to hear Rockstar's parodic office banter. But Morrowind had reason for it to exist in most contexts.
Other games have intrinsically explorable interiors, like NMS and Enshrouded. Elden Ring was pretty open in this regard as well, and so was Zelda. I'm sure there are many others. TES doesn't fall into this because of the way its interiors are structured: a door teleports you to a separate interior level, ostensibly to save on poly budget. Again, scale is an important consideration, in terms of both meaning and effort in context.
This doesn't seem to be doing much to build upon that; couldn't we already procedurally scatter empty shell buildings with low- to mid-quality assets at a pretty decent degree of efficiency?
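The baseline the comment alludes to needs no generative model at all. The sketch below is hypothetical (the function name and all parameters are made up, not from WorldGen): it scatters placeholder building footprints on a coarse grid, leaving some cells empty and jittering sizes and positions:

```python
import random

def scatter_buildings(width, depth, cell=20, margin=4, density=0.7, seed=42):
    """Place axis-aligned placeholder footprints on a coarse grid.

    Returns (x, z, w, d) rectangles; sizes and offsets vary per cell so
    the layout is a little less uniform than a strict grid.
    """
    rng = random.Random(seed)
    footprints = []
    for gx in range(width // cell):
        for gz in range(depth // cell):
            if rng.random() > density:
                continue  # leave the cell empty: a plaza, park, or gap
            w = rng.uniform(cell * 0.4, cell - margin)
            d = rng.uniform(cell * 0.4, cell - margin)
            x = gx * cell + rng.uniform(0, cell - w)
            z = gz * cell + rng.uniform(0, cell - d)
            footprints.append((x, z, w, d))
    return footprints

buildings = scatter_buildings(200, 200)
print(len(buildings))  # roughly density * 100 cells' worth of footprints
```

Each footprint would then be dressed with a low-poly shell asset; the fixed seed keeps the layout reproducible.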
I do think Meta has the tech to easily match other radiance field based generation methods, they publish many foundational papers in this space and have Hyperscape.
So I'd view this as an interesting orthogonal direction to explore!
These AI generated towns sure do seem to have strict building and civic codes. Everything on a grid, height limits, equal spacing between all buildings. The local historical society really has a tight grip on neighborhood character.
From the article:
> It would also be sound, with different areas connected in such a way to allow characters to roam freely without getting stuck.
Very unrealistic.
One of the interesting things about mostly-open world game environments, like GTA or Cyberpunk, is the "designed" messiness and the limits that result in dead ends. You poke at someplace and end up at a locked door (a texture that looks like a door but you can't interact with) that says there's absolutely nothing interesting beyond where you're at. No chance to get stuck in a dead end is boring; when every path leads to something interesting, there's no "exploration".
Other parts of Second Life have roadside motels. Each room has a bed, TV, bathroom, and maybe a coffee maker, all of which do something. One, with a 1950s theme, has a vibrating bed, which will make a buzzing sound if you pay it a tiny fee. Nobody uses those much.
No plot goes with all this. Unlike a game, the density of interesting events is low, closer to real life. This is the fundamental problem of virtual worlds. Realistic ones are boring.
Amusingly, Linden Lab has found a way to capitalize on this. They built a suburban housing subdivision, and people who buy a paid membership get an unfurnished house. This was so successful that there are now over 60,000 houses. There are themed areas and about a dozen house designs in each area. It's kind of banal, but seems to appeal to people for whom American suburbia is an unreachable aspiration. The American Dream, for about $10 a month.
People furnish their houses, have BBQs, and even mow their lawn. (You can buy simulated grass that needs regular mowing.)
So we have a good idea of the appeal of this.
Eve is a game about interstellar corporate fuckery where gigantic starships fling missiles and lasers at each other.
That... is not a recreation of real life.
Reminded me of this clip of Gabe Newell talking about fun, realism and reinforcement (behaviorism):
You must live in a different reality. The one I live in has fractal complexity and pretty much anywhere I look is filled with interesting ({cute..beautiful},{mildly surprising..WTF?!},{ah, that's an example of X..conundrum}) details. In fact, so far as I can tell, it's interesting details all the way down, all the way up, and all the way out in any direction I probe.
But that's the point! Daggerfall is like this too: huge areas (both cities and landscapes) with nothing interesting in them. That's what makes them feel so lived in. They're not worlds designed for the player to conquer, they're worlds that exist independent of the player, and the player is just one of a million characters in it.
The fact that I pass by 150 boring buildings in a city before I get to the one I care about both mirrors reality and makes the reward for finding the correct building all the greater!
I can see it being useful for isolated Unity developers with a concept and limited art ability. Currently they would likely be limited to pixel games.
What browser are you using? How is it even possible for a site to remove previous browser history in a tab?
An end-to-end _trained_ model that spits out a textured mesh of the same result would have been an innovation. The fact that they didn't do that suggests they're missing something fundamental for world model training.
The best thing I can say is that maybe they can use this to bootstrap a dataset for a future model.
Being kind to them and understanding the environment they work in won’t improve their lives, but it will expand our understanding of the capability of particular large companies to innovate.
10 years from now we might have games that generate entire worlds based on the unique story line that's customized for each playthrough. Maybe even endless stories.
Baldur's Gate 5 is going to be memorable!
The Elder Scrolls could use this plus Radiant AI for some neat quests once it improves.
Game studios are probably going to explore this in dungeon generators first where if things go wrong with the generation, not much is lost. Just exit and generate another.
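The "just exit and generate another" loop is simple to sketch. Everything below is a toy illustration (a random wall grid plus a connectivity check), not any studio's actual pipeline:

```python
import random
from collections import deque

def generate_dungeon(w, h, wall_chance=0.3, rng=random):
    # True = wall, False = open floor
    return [[rng.random() < wall_chance for _ in range(w)] for _ in range(h)]

def fully_connected(grid):
    """True if every open tile is reachable from the first open tile."""
    h, w = len(grid), len(grid[0])
    open_tiles = [(y, x) for y in range(h) for x in range(w) if not grid[y][x]]
    if not open_tiles:
        return False
    seen = {open_tiles[0]}
    queue = deque(seen)
    while queue:  # breadth-first flood fill over open tiles
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not grid[ny][nx] and (ny, nx) not in seen:
                seen.add((ny, nx))
                queue.append((ny, nx))
    return len(seen) == len(open_tiles)

def dungeon_with_retries(w, h, max_tries=1000, seed=0):
    """Generate until validation passes; if a layout is broken, discard it."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        grid = generate_dungeon(w, h, rng=rng)
        if fully_connected(grid):
            return grid
    raise RuntimeError("no fully connected dungeon found")
```

With an AI generator in place of `generate_dungeon`, the validation step stays the same: a bad generation costs one retry, which is why dungeons are a forgiving first target.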
I tried this VR headset from Meta the other day. It seems designed to throw young people into digital realms by shutting off every single biological sense they have.
And let's not talk about the cultural flattening that this represents. A "medieval village" from where? When? Whom?
This is just slightly refined AI slop, but slop nevertheless.
But, having things feel strongly on a grid kind of ruins the feel. It's rare for every building to be isolated like that. I am guessing they had trouble producing neighboring buildings that looked like they could logically share a common wall or alleyway.
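One simple way around the every-building-isolated look, sketched here with made-up parameters, is to merge adjacent grid cells into terraced blocks that share party walls:

```python
import random

def terrace_row(street_length, cell=8, merge_chance=0.5, seed=1):
    """Split a street frontage into runs of attached row houses.

    Adjacent cells merge with probability merge_chance, so houses share
    party walls instead of standing isolated on a uniform grid. Returns
    (x_start, x_end) frontages; each run is one attached block.
    """
    rng = random.Random(seed)
    cells = street_length // cell
    runs, start = [], 0
    for i in range(1, cells):
        if rng.random() > merge_chance:  # break the terrace at this cell edge
            runs.append((start * cell, i * cell))
            start = i
    runs.append((start * cell, cells * cell))
    return runs
```

The generator would then only need facade variation within each run, which is an easier problem than producing buildings that plausibly abut after the fact.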
This is true for the high-fidelity environments people expect from AAA games or movie virtual sets, but it's not really true for the sort of content that WorldGen is producing. The effort required to learn low-poly 3D asset creation in Blender is definitely significant, but it isn't out of many people's reach unless you have an especially low opinion of people. The Blender community turns out assets like this all the time.
Any world you can summon into existence with a few words is, by the laws of information theory, going to be generic. An interesting world requires thousands of words to describe.
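The claim can be made roughly quantitative: a prompt of N words drawn from a V-word vocabulary carries at most N·log2(V) bits, so it can only select among that many distinguishable worlds; everything else must come from the model's prior, which is shared by every user. The numbers below are illustrative, not measurements:

```python
import math

def prompt_bits(num_words, vocab_size=50_000):
    # Upper bound: each word choice contributes at most log2(vocab) bits.
    return num_words * math.log2(vocab_size)

print(round(prompt_bits(10)))     # ≈ 156 bits: at most 2**156 distinguishable worlds
print(round(prompt_bits(2_000)))  # a design-document-length description carries far more
```

2^156 sounds like a lot, but those bits only pick a point in the model's prior; two users typing "cozy medieval village" land in essentially the same place.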
I still won't even get myself a PlayStation, explicitly because I know that if I did I would lose half a year of my life to Red Dead. Who actually benefits from this technology, or is it just a cool demo?
(couldn't clean up the link at all, sorry)
[0]: https://scontent-lhr6-2.xx.fbcdn.net/v/t39.2365-6/586830145_...
DotA was effectively a simple map, yet it changed online gaming and e-sports, and I am sure players have spent millions, if not billions, of hours in a very simple-looking landscape.
Compare that to what we have today: on-demand, unique, and significantly better looking. It is amazing how relatively small these objectively impressive achievements seem next to a simple map we had 20 years ago.
One can imagine different ways to integrate this type of decomposed generation with different game engines or to parallelize it or allow lazy generation of assets. It's also very accessible to programmers like me who don't have the resources to train and host giant world models but are interested in AI world generation.
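One concrete reading of "lazy generation of assets": derive each chunk independently from (world seed, chunk coordinates), so chunks can be generated on demand, in parallel, or farmed out to a generation service. The sketch below is hypothetical, not WorldGen's API; a real system would call a generative model where the stub derives placeholder content:

```python
import hashlib
from functools import lru_cache

WORLD_SEED = b"worldgen-demo"  # illustrative constant

def chunk_seed(cx: int, cz: int) -> int:
    # Hash (world seed, coords) so every chunk gets a stable, independent seed.
    digest = hashlib.sha256(WORLD_SEED + f"{cx},{cz}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

@lru_cache(maxsize=256)
def get_chunk(cx: int, cz: int) -> dict:
    """Generate (or fetch from cache) the assets for one chunk.

    A real system would invoke the generative model here; this stub
    just derives deterministic placeholder content from the seed.
    """
    seed = chunk_seed(cx, cz)
    return {"coords": (cx, cz), "biome": ["plains", "forest", "ruins"][seed % 3]}

# Only chunks the player actually approaches are ever generated:
print(get_chunk(0, 0))
print(get_chunk(0, 0) is get_chunk(0, 0))  # cached, so the same object: True
```

Because each chunk depends only on its own seed, neighboring chunks can be generated concurrently without coordination, which is what makes the decomposed approach engine-friendly.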
I assume that something like this is going to end up in Unity, Unreal and others within a matter of months.
And people are going to say that we already have enough crappy Unity asset games in the monopoly Steam store, but I think that misses the point. It's about opening up game creation or world generation as a creative outlet or tool for more people. It's not an attempt to create more refined games.