Skyfall-GS – Synthesizing Immersive 3D Urban Scenes From Satellite Imagery
skyfall-gs.jayinnn.dev
Key topics: 3D Reconstruction, Satellite Imagery, Computer Vision
The Skyfall-GS project generates immersive 3D urban scenes from satellite imagery, sparking discussion on its potential applications and limitations.
Snapshot generated from the HN discussion
Key moments
1. Story posted: Nov 3, 2025 at 8:46 AM EST
2. First comment: Nov 3, 2025 at 10:30 AM EST (2h after posting)
3. Peak activity: 13 comments in the 4-6h window, the hottest stretch of the conversation
4. Latest activity: Nov 4, 2025 at 12:50 PM EST
ID: 45798881 · Type: story · Last synced: 11/20/2025, 3:29:00 PM
In fact, you wouldn't even need to be limited to Earth. Why not throw in Google Moon and steal a moon buggy while shooting scientific rovers and doing cool flips out of craters?
I wouldn't knock the research. The results look impressive to me.
GIS won't want generative hallucinations.
Consumer mapping apps, social applications, and games (e.g. flight sims) will want the maps to look as good as possible.
True. If you bring the viewpoint down to near street level and look horizontally, it's worse than traditional photogrammetry methods.
I've been looking for algorithms like this for representing distant regions in virtual worlds. Open Drone Map can do a good job, sometimes, but it really needs a cleanup pass.
It would be amazing if they could also take user-generated photos and videos at ground level and accurate mapping data (that has building outlines) and clean that up to something presentable.
I mean, what they do here is what Google and Apple have already been doing for years. It's time for the next step.
This is Gaussian splatting. I'm pretty confident that Google/Apple have not done that.
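For readers unfamiliar with the term, here is a minimal, purely illustrative sketch of the splatting idea in 2D: opaque-ish Gaussians composited front-to-back with the "over" operator. Real 3D Gaussian splatting projects anisotropic 3D Gaussians through a camera and sorts them per screen tile on the GPU; this toy (all names and the isotropic model are my own simplification, not Skyfall-GS's code) only shows the compositing core:

```python
import numpy as np

def splat(gaussians, H=64, W=64):
    """Render isotropic 2D Gaussians with front-to-back alpha compositing.

    gaussians: iterable of (x, y, depth, sigma, opacity, color),
    coordinates in pixels, color an RGB triple in [0, 1].
    """
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    img = np.zeros((H, W, 3))
    trans = np.ones((H, W, 1))  # accumulated transmittance per pixel
    # sort front-to-back by depth, then composite with the "over" operator
    for x, y, d, sigma, opacity, color in sorted(gaussians, key=lambda g: g[2]):
        g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        alpha = (opacity * g)[..., None]
        img += trans * alpha * np.asarray(color)
        trans *= 1.0 - alpha
    return img

# one red splat in the middle of the frame
frame = splat([(32, 32, 1.0, 4.0, 0.9, (1.0, 0.0, 0.0))])
```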
These sorts of projects always look cool, but I think the real "wow factor" would be a file upload where you can see the result on your own image. I assume there are reasons why this isn't done.
This is where we were heading with our 3D volumetric video company https://ayvri.com
We were working on blending 3D satellite imagery with your ground view (or low flying in the case of paragliders) photos and videos to create a 3D scene.
Our technology was acquired prior to us being able to fully realize the vision (and we moved on to another project).
https://www.smithsonianmag.com/air-space-magazine/flight-box...
I knew their name because, when I worked for an Airbus subsidiary, we talked with them about a solution to generate 3D environments for any/every airport.
They had some cool stuff but also some wonky stuff at the time (like highway overpasses actually being rendered as walls across the highway).
Also, was it any different in MSFS 2020?
I suspect hybrid solutions will remove the limitations of GS, with (eventually...) some smooth hand-off. Do clean-enough GS like this, then hand the output to other systems which convert it into forms more useful for your application and which adopt e.g. textures from localized photos.
It's just a bit of engineering and compute...
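The hand-off described above could start by evaluating the splat cloud as a volumetric density field and thresholding it into occupancy that a conventional mesher (marching cubes, Poisson reconstruction) can consume. The sketch below is a hypothetical illustration under strong simplifications (isotropic Gaussians, a dense voxel grid, made-up function names), not Skyfall-GS's actual pipeline:

```python
import numpy as np

def splat_density(centers, sigmas, grid_res=32, extent=1.0):
    """Evaluate a summed isotropic-Gaussian density on a cubic voxel grid.

    centers: list of (x, y, z) splat centers; sigmas: matching radii.
    Returns a (grid_res, grid_res, grid_res) density field.
    """
    lin = np.linspace(-extent, extent, grid_res)
    X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")
    pts = np.stack([X, Y, Z], axis=-1)  # (R, R, R, 3) voxel coordinates
    field = np.zeros((grid_res,) * 3)
    for c, s in zip(centers, sigmas):
        d2 = ((pts - np.asarray(c)) ** 2).sum(axis=-1)
        field += np.exp(-d2 / (2 * s ** 2))
    return field

def occupancy(field, threshold=0.5):
    """Threshold the density into occupancy; a real pipeline would run
    marching cubes here to extract a mesh, then bake textures from
    localized ground photos onto it."""
    return field >= threshold

# one splat at the origin -> occupied voxels near the grid center
occ = occupancy(splat_density([(0.0, 0.0, 0.0)], [0.2], grid_res=16))
```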