Google Antigravity
Mood
skeptical
Sentiment
neutral
Category
tech
Key topics
Antigravity
Experimental Project
The 'Google Antigravity' project is a mysterious, content-less website that has sparked curiosity and skepticism among HN users, with the discussion centered around its purpose and potential meaning.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: N/A
Peak period: 76 comments in Hour 1
Avg / period: 22.9
Based on 160 loaded comments
Key moments
- 01 Story posted: 11/18/2025, 3:47:38 PM (6h ago)
- 02 First comment: 11/18/2025, 3:47:38 PM (0s after posting)
- 03 Peak activity: 76 comments in Hour 1 (the hottest window of the conversation)
- 04 Latest activity: 11/18/2025, 10:04:05 PM (2m ago)
Antigravity enables developers to operate at a higher, task-oriented level by managing agents across workspaces, while retaining a familiar AI IDE experience at its core. Agents operate across the editor, terminal, and browser, enabling them to autonomously plan and execute complex, end-to-end tasks, elevating all aspects of software development.
via: https://www.linkedin.com/showcase/google-antigravity/about/
I've been using my current IDE for 17 years, and plan to continue using it for at least another 15
I wouldn't even be surprised if internally the AS team's financials are counted under the Play Store umbrella.
I still wouldn't trust a Google product to stick around, but these hints aren't a reliable oracle either.
It is a product launched in the hype cycle of AI. Google has plenty of other products launched during hype cycles that are gathering dust.
That's not a guaranteed signal, but it's something strong enough to be wary about.
https://www.youtube.com/watch?v=YX-OpeNZYI4
- It's VS Code
Like clockwork!
- ai therapist for your ai agents
- genetic ai agent container orchestration; only the best results survive so they have to fight for their virtual lives
- prompt engineering as a service
- social media post generator so you can post "what if ai fundamentally changes everything about how people interact with software" think pieces even faster
- vector db but actually json for some reason
(Edit: formatting)
AI Agent Orchestration Battle Bots. Old school VMs are the brute tanks just slowly ramming everybody off the field. A swarm of erratically behaving lightweight K8s Pods madly slicing and dicing everything coming on their way. Winner takes control of the host capacity.
I might need this in my life.
Presumably that hasn't changed much. If you want to do any large-scale edits of the UI you need to spin up a fork.
You mean Chromium wrapper?
Weirdly, out of all the vscode forks the best UI is probably bytedance's TRAE
> Come join us! Programming is fun again! It's a whole new world up here!
- Nano Banana => Mockup
- Antigravity/IDE => Comments/note
- Gemini => Turn to code
- Antigravity/IDE => Adjust/code
I'm not sure many engineers will welcome this "promotion".
If existing engineers don't change it doesn't matter because new engineers will take their place.
Car manufacturers made profit
Lotta people mining science fiction for cool names and then applying them to their crappy products, cheapening the source ideas.
Ah Google misconfigured their web server:
> Loading module from “https://antigravity.google/main-74LQFSAF.js” was blocked because of a disallowed MIME type (“text/html”).
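This error follows from the strict MIME checking the HTML spec mandates for ES modules: a module script is only executed if the response carries a JavaScript MIME type, and a common way to get `text/html` instead is a server whose SPA fallback answers unknown paths with `index.html`. A minimal sketch of the browser-side rule (an illustration of the spec behavior, not Google's actual configuration):

```python
# ES module scripts are blocked unless the response's Content-Type is a
# JavaScript MIME type; a server that answers a stale .js path with its
# HTML fallback page (text/html) triggers exactly the error quoted above.

JS_MIME_TYPES = {
    "text/javascript",
    "application/javascript",
    "application/x-javascript",
    "text/ecmascript",
    "application/ecmascript",
}

def module_script_allowed(content_type: str) -> bool:
    """Return True if a browser would execute a module script served with this Content-Type."""
    essence = content_type.split(";")[0].strip().lower()  # drop parameters like charset
    return essence in JS_MIME_TYPES

print(module_script_allowed("text/javascript; charset=utf-8"))  # True: executed
print(module_script_allowed("text/html"))                       # False: blocked
```

Classic scripts are more lenient, which is why this failure mode shows up specifically with `<script type="module">` bundles like `main-74LQFSAF.js`.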
But there is a 13 minute demo video.
> Model quota limit exceeded You have reached the quota limit for this model.
Would be willing to bet this is the issue. Adding html files to context for gemini models results in a ton of token use.
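The intuition is that raw HTML carries tags, attributes, and inline styles into the context alongside the visible text. A rough sketch of the effect, using the stdlib `html.parser` to extract visible text and the common ~4-characters-per-token rule of thumb (an assumption, not a real tokenizer):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only the visible text nodes from an HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def rough_tokens(s: str) -> int:
    # ~4 chars per token is a crude rule of thumb; real tokenizers differ.
    return max(1, len(s) // 4)

html_doc = '<div class="card shadow-lg p-4"><span style="color:#333">Hi there</span></div>'
extractor = TextExtractor()
extractor.feed(html_doc)
visible = " ".join(extractor.chunks)

# The markup costs several times more tokens than the text it wraps.
print(rough_tokens(html_doc), rough_tokens(visible))
```

On real pages the ratio is usually far worse than in this toy example, since production HTML is dominated by class soup, scripts, and SVG markup rather than prose.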
EDIT: why must users care?
Maybe the questioner is also in full control of the HTML creation and they don’t need a parser for all possible HTML edge cases.
It's the same problem with OpenRouter's free tiers for a long time. If something is truly $0 and widely available, people will absolutely bleed it dry.
> Fork VS Code, add a few workflow / management ideas on top.
> "Agentic development platform"
I'm Jack's depressed lack of surprise.
Please someone, make me feel something with software again.
Work with what you love, and you will never love anything again.
Unfortunately, once money came into the picture, quality, innovation, and anything resembling true progress flew out the window.
Trying to understand how this is anything net new in the space.
The software of the future, where nobody on staff knows how anything is built, no one understands why anything breaks, and cruft multiplies exponentially.
But at least we're not taken out of our flow!
And it's not like any of your criticisms don't apply to human teams. They also let cruft develop, are confused by breakages, and don't understand the code because everyone on the original team has since left for another company.
Nature does select for laziness. The laziest state that can outpace entropy in diverse ways? Ideal selection.
I do think the primary strengths of genai are more in comprehension and troubleshooting than generating code - so far. These activities play into the collaboration and communication narrative. I would not trust an AI to clean up cruft or refactor a codebase unsupervised. Even if it did an excellent job, who would really know?
I wish that were true.
In my experience, most of the time they're not doing the things you talk about -- major architectural decisions don't get documented anywhere, commit messages give no "why", and the people who the knowledge got socialized to in unrecorded conversations then left the company.
If anything, LLM's seem to be far more consistent in documenting the rationales for design decisions, leaving clear comments in code and commit messages, etc. if you ask them to.
Unfortunately, humans generally are not better at communicating about their process, in my experience. Most engineers I know enjoy writing code, and hate documenting what they're doing. Git and issue-tracking have helped somewhat, but it's still very often about the "what" and not the "why this way".
This is so far outside of common industry practices that I don't think your sentiment generalizes. Or perhaps your expectation of what should go in a single commit message is different from the rest of us...
LLMs, especially those with reasoning chains, are notoriously bad at explaining their thought process. This isn't vibes, it is empiricism: https://arxiv.org/abs/2305.04388
If you are genuinely working somewhere where the people around you are worse than LLMs at explaining and documenting their thought process, I would look elsewhere. Can't imagine that is good for one's own development (or sanity).
I'm not really interested in what some academic paper has to say -- I use LLM's daily and see first-hand the quality of the documentation and explanations they produce.
I don't think there's any question that, as a general rule, LLM's do a much better job documenting what they're doing, and making it easy for people to read their code, with copious comments explaining what the code is doing and why. Engineers, on the other hand, have lots of competing priorities -- even when they want to document more, the thing needs to be shipped yesterday.
Your initial comment made it sound like you were commenting on a genuine apples-to-apples comparison between humans and LLMs, in a controlled setting. That's the place for empiricism, and I think dismissing studies examining such situations is a mistake.
A good warning flag for why that is a mistake is the recent article that showed engineers estimated LLMs sped them up by about 24%, but when measured they were actually slower by 17%. One should always examine whether or not the specifics of a study really apply to them--there is no "end all be all" in empiricism--but when in doubt the scientific method is our primary tool for determining what is actually going on.
This is actually a cool use that's being explored more and more. I first saw it in the wiki thing from the devin people, and now google released one as well.
Doesn't this apply to people who code in high level languages?
This is more akin to manager-level view of the code (who need developers to go and look at the "deterministic" instructions); the abstraction is a lot lot more leaky than high->low level languages.
AI is just another abstraction that allows you to do 10x with 1x the effort. Either you'll learn to master it or you'll become obsolete.
I'm not saying AI is not a useful abstraction, but I am saying that it is not a trustworthy one.
:chuckles nervously:
The task was to create a header, putting the company logo in the corner and the text in the middle.
The resulting CSS was an abomination - I threw it all away and rewrote it from scratch (using my somewhat anemic CSS knowledge), ending up with like 3 selectors with like 20 lines of styles in total.
This made me think that 1: CSS and the way we do UI sucks; I still don't get why we don't have a graphical editor that can at least do the simple stuff well. 2: when these models don't wanna do what you want them to, the way you want them to, they really don't wanna.
I think AI has shown us there's a need for a new generation of simple-to-write software and libraries, where translating your intent into actual code is much simpler and the tools actually help you work instead of making you fight all the accidental complexity.
We were much closer to this reality back in the 90s when you opened up a drag and drop UI editor (like VB6, Borland Delphi, Flash), wrote some glue code and out came an .exe that you could just give to people.
Nowadays I need a shell script that configures my typescript CDK template (with its own NPM repo), that deploys the backend infra (which is bundled via node), the database schema, compiles the frontend, and puts the code into the right places, and hope to god that I don't run into all sorts of weird security errors because I didn't configure the security the way the browser/AWS/security middleware wanted to.
It's important for people to feel like "hackers"; that is the primary reason the command line sort of exploded among devs. Most devs will never admit this... they may not even realize it, but I think this is the main reason it went big.
The irony is that the very thing that makes devs feel like "hackers" is the very thing that's enabling agentic AI and making developers get all resistant because they're feeling dumber.
Antigravity: 5 syllables. I don't even want to say that out loud.
Anyway, I cannot actually get it to log in to Google no matter how many times I try the authentication, and from different browsers. Does this happen to anyone else?
Google at its finest
I'm concerned that the new role of "manager of agents" (as Google puts it) will be soul-destroying, brain-dead work, and the morale won't be good.
Console error:
> Loading module from “https://antigravity.google/main-74LQFSAF.js” was blocked because of a disallowed MIME type (“text/html”).
I don't know what I expected, tbh.