Not Hacker News! (Beta): AI companion for Hacker News
Nov 20, 2025 at 10:04 AM EST

Nano Banana Pro

meetpateltech
1267 points
679 comments

Mood: excited
Sentiment: positive
Category: startup_launch
Key topics: Artificial Intelligence, Google, Machine Learning

Discussion activity: very active discussion
First comment: 2m after posting
Peak period: 160 comments (Day 1)
Avg / period: 160

Comment distribution chart: 160 data points, based on 160 loaded comments

Key moments

  1. Story posted: Nov 20, 2025 at 10:04 AM EST (3d ago)
  2. First comment: Nov 20, 2025 at 10:06 AM EST (2m after posting)
  3. Peak activity: 160 comments in Day 1 (hottest window of the conversation)
  4. Latest activity: Nov 20, 2025 at 3:50 PM EST (3d ago)


Discussion (679 comments)
Showing 160 of 679 comments
varbhat
3d ago
2 replies
Can anyone please explain to me the invisible watermarking mentioned in the promo?
nickdonnelly
3d ago
4 replies
It's called SynthID. It's a watermark that proves an image was generated by AI.

https://deepmind.google/models/synthid/

VladVladikoff
3d ago
1 reply
Super important for Google as a search engine so they can filter out and downrank AI generated results. However I expect there are many models out there which don’t do this, that everyone could use instead. So in the end a “feature” like this makes me less likely to use their model because I don’t know how Google will end up treating my blog post if I decide to include an AI generated or AI edited image.
Filligree
3d ago
1 reply
It’s required by EU regulations. Any public generator that doesn’t do it is in violation of that unless it’s entirely inaccessible from the EU…

But of course there’s no way to enforce it on local generation.

Aloisius
3d ago
The EU didn't define any specific method of watermarking, nor does it need to be tamper-resistant. Even if they had specified one, it's easy to remove watermarks like SynthID.
raincole
3d ago
1 reply
*by Google's AI.
zamadatix
3d ago
By anybody's AI using SynthID watermarking, not just Google's AI using SynthID watermarking (it looks like partnership is not open to just anyone though, you have to apply).
jsheard
3d ago
1 reply
In theory, at least. In practice maybe not.

https://i.imgur.com/WKckRmi.png

raincole
3d ago
1 reply
?

Google doesn't claim that Gemini would call the SynthID detector at this point.

Edit: well, they actually do. I guess it is not rolled out yet.

jsheard
3d ago
From the OP:

> Today, we are putting a powerful verification tool directly in consumers’ hands: you can now upload an image into the Gemini app and simply ask if it was generated by Google AI, thanks to SynthID technology. We are starting with images, but will expand to audio and video soon.

Re-rolling a few times got it to mention trying SynthID, but as a false negative, assuming it actually did the check and isn't just bullshitting.

> No Digital Watermark Detected: I was unable to detect any digital watermarks (such as Google's SynthID) that would definitively label it as being generated by a specific AI tool.

This would be a lot simpler if they just exposed the detector directly, but apparently the future is coaxing an LLM into doing a tool call and then second guessing whether it actually ran the tool.
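
For illustration, here is roughly what asking the same question through the API could look like, as a minimal sketch assuming the google-genai Python SDK (the post only confirms the consumer Gemini app flow, and exact parameter names should be treated as assumptions):

  # Sketch: ask a Gemini model whether an image carries a SynthID watermark.
  # This is not a structured detector API; you get a free-text answer back.
  from google import genai
  from google.genai import types

  client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

  with open("suspect.png", "rb") as f:
      image_bytes = f.read()

  response = client.models.generate_content(
      model="gemini-2.5-flash",  # any current multimodal Gemini model; the choice is an assumption
      contents=[
          types.Part.from_bytes(data=image_bytes, mime_type="image/png"),
          "Was this image generated or edited with Google AI? Check for a SynthID watermark.",
      ],
  )
  print(response.text)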

airstrike
3d ago
So whoever creates AI content needs to voluntarily adopt this so that Google can sell "technology" for identifying said content?

Not sure how that makes any sense

KolmogorovComp
3d ago
Has anyone found out how to use SynthID? If I want to check whether some images are AI, how can I do that?
volkk
3d ago
1 reply
SynthID seems interesting, but in classic Google fashion, I haven't a clue how to use it and the only button that exists is to join a waitlist. Apparently it's been out since 2023? Also, does SynthID work only within the Gemini ecosystem? If so, is this the beginning of a slew of these products with no one standard way? i.e., "Have you run that image through tool1, tool2, tool3, and tool4 before deciding this image is legit?"

edit: apparently people have been able to remove these watermarks with a high success rate, so this already feels like a DOA product

dragonwriter
3d ago
> SynthID seems interesting, but in classic Google fashion, I haven't a clue how to use it and the only button that exists is to join a waitlist. Apparently it's been out since 2023? Also, does SynthID work only within the Gemini ecosystem? If so, is this the beginning of a slew of these products with no one standard way

No, it's not the beginning: multiple different watermarking standards, watermark-checking systems, and, of course, published countermeasures of varying effectiveness for most of them have been around for a while.

Razengan
3d ago
1 reply
Can Google Gemini 3 check Google Flights for live ticket prices yet?

(The Gemini 3 post has a million comments too many to ask this now)

jeffbee
3d ago
1 reply
https://gemini.google.com/share/19fed9993f06
Razengan
3d ago
Ah thanks, might have to make a throwaway account just for that.

Gemini 2 still goes "While I cannot check Google Flights directly, I can provide you with information based on current search results…" blah blah

hbn
3d ago
1 reply
I wouldn't trust any of the info in those images in the first carousel if I found them in the wild. It looks like AI image slop and I assume anyone who thinks those look good enough to share did not fact check any of the info and just prompted "make an image with a recipe for X"
matsemann
3d ago
1 reply
Yeah, the weird yellow tint, the kerning/fonts, etc. still immediately give it away.

But I wouldn't mind being easily able to make infographics like these, I'd just like to supply the textual and factual content myself.

kccqzy
3d ago
I would do the same. But the reason is that I’m terrible at drawing and digital art, so I would need some help with the graphics in an infographic anyway. I don’t really need help with writing or typesetting the text. I feel like if I were better at creating art I would not want AI involved at all.
fouronnes3
3d ago
5 replies
I guess the true endgame of AI products is naming them. We still have quite a way to go.
awillen
3d ago
1 reply
Honestly I give Google credit for realizing that they had something that people were talking about and running with it instead of just calling it gemini-image-large-with-text-pro
echelon
3d ago
They tried calling it gemini-2.5-whatever, but social media obsessed over the name "Nano Banana", which was just its codename that got teased on Twitter for a few weeks prior to launch.

After launch, Google's public branding for the product was "Gemini" until Google just decided to lean in and fully adopt the vastly more popular "Nano Banana" label.

The public named this product, not Google. Google's internal codename went viral and upstaged the official name.

Branding matters for distribution. When you install yourself into the public consciousness with a name, you'd better use the name. It's free distribution. You own human wetware market share for free. You're alive in the minds of the public.

Renaming things every human has brand recognition of, e.g. HBO -> Max, is stupid. It doesn't matter if the name sucks. ChatGPT as a name sucks. But everyone in the world knows it.

This will forever be Nano Banana unless they deprecate the product.

timenotwasted
3d ago
1 reply
We just need a new AI for that.
riskable
3d ago
1 reply
Need a name for something? Try our new Mini Skibidi model!
gorbot
3d ago
Also introducing the amazing 6-7 pro model
b33j0r
3d ago
This has always been the hardest problem in computer science besides “Assume a lightweight J2EE distribution…”
mlmonkey
3d ago
There are only 2 hard problems in computer science: cache coherency, naming things and off by 1 errors...
jedberg
3d ago
I was at a tech conference yesterday, and I asked someone if they had tried Nano Banana. They looked at me like I was crazy. These names aren't helping! (But honestly I love it, easier to remember than Gemini-2.whatever.)
guzik
3d ago
4 replies
Cool, but it's still unusable for me. Somehow all my prompts are violating the rules, huh?
Filligree
3d ago
1 reply
Can you give us an example?
guzik
3d ago
1 reply
'athlete wearing a health tracker under a fitted training top'

Failed to generate content: permission denied. Please try again.

raincole
3d ago
It's not the censorship safeguard. Permission denied means you need a paid API key to use it. It's confusing, I know.

If you triggered the safeguard it'll give you the typical "sorry, I can't..." LLM response.

mudkipdev
3d ago
1 reply
Are you asking it to recreate people?
guzik
3d ago
No, and no nudity, no reference images. Example: 'athlete wearing a health tracker under a fitted training top'
gdulli
3d ago
1 reply
In 25 years we'll reminisce on the times when we could find a human artist who wouldn't impose Google's or OpenAI's rules on their output.
guzik
3d ago
1 reply
the open-source models will catch up, 100%
raincole
3d ago
1 reply
Open models don't seem to be catching up with LLM-based image gen at this point.

ChatGPT's imagegen has been released for half a year, but there isn't anything remotely similar to it in the open-weight realm.

recursive
3d ago
Give it another 50 years. Or maybe 10. Or 5? But there's no way it won't catch up.
ASinclair
3d ago
Have some examples?
eminence32
3d ago
2 replies
> Generate better visuals with more accurate, legible text directly in the image in multiple languages

Assuming that this new model works as advertised, it's interesting to me that it took this long to get an image generation model that can reliably generate text. Why is text generation in images so hard?

Filligree
3d ago
It’s not necessarily harder than other aspects. However:

- It requires an AI that actually understands English, i.e. an LLM. Older, diffusion-only models were naturally terrible at that, because they weren’t trained on it.

- It requires the AI to make no mistakes on image rendering, and that’s a high bar. Mistakes in image generation are so common we have memes about them, and for all that hands generally work fine now, the rest of the picture is full of mistakes you can’t tell are mistakes. That’s entirely impossible with text.

Nano Banana Pro seems to somewhat reliably produce entire pictures without any mistakes at all.

tobr
3d ago
As a complete layman, it seems obvious that it should be hard? Like, text is a type of graphic that needs to be coherent both in its detail and its large structure, and there’s a very small amount of variation that we don’t immediately notice as strange or flat out incorrect. That’s not true of most types of imagery.
maliker
3d ago
1 reply
I wonder how hard it is to remove that SynthID watermark...

Looks like: "When tested on images marked with Google’s SynthID, the technique used in the example images above, Kassis says that UnMarker successfully removed 79 percent of watermarks." From https://spectrum.ieee.org/ai-watermark-remover

mudkipdev
3d ago
We know what it looks like at least https://www.reddit.com/r/nanobanana/comments/1o1tvbm/nano_ba...
willsmith72
3d ago
4 replies
> Starting to roll out in the Gemini API and Google AI Studio

> Rolling out globally in the Gemini app

wanna be any more vague? is it out or not? where? when?

koakuma-chan
3d ago
1 reply
I don't see it in AI Studio.
WawaFin
3d ago
I see it but when I use it says "Failed to count tokens, model not found: models/gemini-3-pro-image-preview. Please try again with a different model."
meetpateltech
3d ago
Currently, it’s rolling out in the Gemini app. When you use the “Create image” option, you’ll see a tooltip saying “Generating image with Nano Banana Pro.”

And in AI Studio, you need to connect a paid API key to use it:

https://aistudio.google.com/prompts/new_chat?model=gemini-3-...

> Nano Banana Pro is only available for paid-tier users. Link a paid API key to access higher rate limits, advanced features, and more.
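
For anyone trying the API route, a minimal generation call might look like the sketch below, assuming the google-genai Python SDK and the model id quoted above; the response-modalities pattern follows the SDK's documented image-output flow, but treat the details as assumptions rather than confirmed usage:

  # Sketch: generate an image with the paid-tier model id mentioned above.
  from google import genai
  from google.genai import types

  client = genai.Client(api_key="PAID_API_KEY")  # placeholder; a paid key is required per the comment

  response = client.models.generate_content(
      model="gemini-3-pro-image-preview",
      contents="A simple infographic of a zipper merge with legible labels",
      config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
  )

  for part in response.candidates[0].content.parts:
      if part.inline_data:                      # image bytes come back as inline data
          with open("out.png", "wb") as f:
              f.write(part.inline_data.data)
      elif part.text:
          print(part.text)                      # any accompanying text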

Archonical
3d ago
Phased rollouts are fairly common in the industry.
ZeroCool2u
3d ago
Already available in the Gemini web app for me. I have the normal Pro subscription.
myth_drannon
3d ago
2 replies
Adobe's stock is down 50% from last year's peak. It's humbling and scary that entire industries with millions of jobs can evaporate in a matter of a few years.
riskable
3d ago
1 reply
On the contrary, it's encouraging to know that maliciously greedy companies like Adobe are getting screwed for being so malicious and greedy :thumbsup:

I had second thoughts about this comment, but if I stopped typing in the middle of it, I would've had to pay a cancellation fee.

creata
3d ago
1 reply
Adobe, for all their faults, can hardly be said to be more malicious or greedy than Google.

Adobe, at least, makes money by selling software. Google makes money by capturing eyeballs; only incidentally does anything they do benefit the user.

s1mon
3d ago
Adobe makes money by renting software, not selling it. There are many creatives that would disagree with your ranking of who is more malicious or greedy.
cj
3d ago
There are two takes here. The first is that AI is replacing jobs by making the existing workforce more efficient.

The second is that AI is costing companies so much money that they need to cut their workforce to pay for their AI investments.

I'm inclined to think the latter represents what's happening more than the former.

theoldgreybeard
3d ago
18 replies
The interesting tidbit here is SynthID. While a good first step, it doesn't solve the problem of AI generated content NOT having any kind of watermark. So we can prove that something WITH the ID is AI generated but we can't prove that something without one ISN'T AI generated.

Like it would be nice if all photo and video generated by the big players would have some kind of standardized identifier on them - but now you're left with the bajillion other "grey market" models that won't give a damn about that.

morkalork
3d ago
2 replies
Labelling open source models as "grey market" is a heck of a presumption
bigfishrunning
3d ago
Every model is "grey market". They're all trained on data without complying with any licensing terms that may exist, be they proprietary or copyleft. Every major AI model is an instance of IP theft.
theoldgreybeard
3d ago
It's why I used "scare quotes".
slashdev
3d ago
2 replies
If there was a standardized identifier, there would be software dedicated to just removing it.

I don't see how it would defeat the cat and mouse game.

paulryanrogers
3d ago
1 reply
It doesn't have to be perfect to be helpful.

For example, it's trivial to post an advertisement without disclosure. Yet it's illegal, so large players mostly comply and harm is less likely on the whole.

slashdev
3d ago
You'd need a similar law around posting AI photos/videos without disclosure. Which maybe is where we're heading.

It still won't prevent it, but it would prevent large players from doing it.

aqme28
3d ago
6 replies
I don't think it will be easy to just remove it. It's built into the image and thus won't be the same every time.

Plus, any service good at reverse-image search (like Google) can basically apply that to determine whether they generated it.

There will always be a way to defeat anything, but I don't see why this won't work for like 90% of cases.
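
As a rough illustration of the reverse-image-search idea, here is a toy average hash (aHash); this is only a sketch of near-duplicate matching, not whatever Google actually uses internally:

  # Toy perceptual hash: shrink to 8x8 grayscale, threshold against the mean,
  # and compare fingerprints by Hamming distance.
  from PIL import Image

  def average_hash(path: str, size: int = 8) -> int:
      img = Image.open(path).convert("L").resize((size, size))
      pixels = list(img.getdata())
      avg = sum(pixels) / len(pixels)
      bits = 0
      for p in pixels:                          # 64-bit fingerprint
          bits = (bits << 1) | (1 if p > avg else 0)
      return bits

  def hamming(a: int, b: int) -> int:
      return bin(a ^ b).count("1")

  # A small Hamming distance suggests the two files are the same underlying picture.
  print(hamming(average_hash("generated.png"), average_hash("reposted.jpg")) <= 5)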

famouswaffles
3d ago
2 replies
It's an image. There's simply no way to add a watermark to an image that's both imperceptible to the user and non-trivial to remove. You'd have to pick one of those options.
aqme28
3d ago
1 reply
That is patently false.
flir
3d ago
1 reply
So, uh... do you know of an implementation that has both those properties? I'd be quite interested in that.
viraptor
3d ago
https://arxiv.org/html/2502.10465v1
fwip
3d ago
I'm not sure that's correct. I'm not an expert, but there's a lot of literature on digital watermarks that are robust to manipulation.

It may be easier if you have an oracle on your end to say "yes, this image has/does not have the watermark," which could be the case for some proposed implementations of an AI watermark. (Often the use case for digital watermarks assumes that the watermarker keeps the evaluation tool secret; this lets them find, e.g., people who leak early screenings of movies.)
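
To make the robustness discussion concrete, here is a toy spread-spectrum watermark of the kind that literature describes: embed a keyed pseudorandom pattern at low amplitude and detect it by correlation. This is purely illustrative and is not SynthID:

  import numpy as np

  def embed(img: np.ndarray, key: int, alpha: float = 2.0) -> np.ndarray:
      rng = np.random.default_rng(key)
      pattern = rng.choice([-1.0, 1.0], size=img.shape)     # keyed +/-1 pattern
      return np.clip(img + alpha * pattern, 0, 255)

  def detect(img: np.ndarray, key: int) -> float:
      rng = np.random.default_rng(key)
      pattern = rng.choice([-1.0, 1.0], size=img.shape)
      centered = img - img.mean()
      return float((centered * pattern).mean())             # ~alpha if watermarked, ~0 otherwise

  img = np.random.default_rng(0).uniform(0, 255, size=(256, 256))
  marked = embed(img, key=42)
  print(detect(marked, key=42), detect(img, key=42))        # clearly positive vs. near zero

Keeping the key (and the detector) private is exactly the "oracle" asymmetry described above.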

rcarr
3d ago
1 reply
You could probably just stick your image in another model or tool that didn't watermark and have it regenerate the image as accurately as possible.
pigpop
3d ago
Exactly, a diffusion model can denoise the watermark out of the image. If you wanted to be doubly sure you could add noise first and then denoise which should completely overwrite any encoded data. Those are trivial operations so it would be easy to create a tool or service explicitly for that purpose.
slashdev
3d ago
It would be like standardizing a captcha, you make a single target to defeat. Whether it is easy or hard is irrelevant.
VWWHFSfQ
3d ago
There will be a model trained to remove SynthIDs from graphics generated by other models.
flir
3d ago
> I don't think it will be easy to just remove it.

Always has been so far. You add noise until the signal gets swamped. In order to remain imperceptible it's a tiny signal, so it's easy to swamp.
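
A quick stress test of the toy embed/detect sketch from earlier in the thread: for that naive per-pixel scheme, i.i.d. noise mostly averages out under correlation, while resampling-style edits such as a small box blur cut the statistic by roughly the kernel area, which is the swamping dynamic this comment describes. SynthID is built to survive more than this toy does; how much more is the open question here.

  from scipy.ndimage import uniform_filter

  noisy = np.clip(marked + np.random.default_rng(1).normal(0, 8, size=marked.shape), 0, 255)
  blurred = uniform_filter(marked, size=3)

  print(detect(noisy, key=42))    # still close to alpha: correlation averages i.i.d. noise out
  print(detect(blurred, key=42))  # drops to roughly alpha / 9, near the detection noise floor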

dragonwriter
3d ago
> I don't think it will be easy to just remove it.

No, but model training technology is out in the open, so it will continue to be possible to train models and build model toolchains that just don't incorporate watermarking at all, which is what any motivated actor seeking to mislead will do; the only thing watermarking will do is train people to accept its absence as a sign of reliability, increasing the effectiveness of fakes by motivated bad actors.

echelon
3d ago
2 replies
This watermarking ceremony is useless.

We will always have local models. Eventually the Chinese will release a Nano Banana equivalent as open source.

simonw
3d ago
1 reply
Qwen-Image-Edit is pretty good already: https://simonwillison.net/2025/Aug/19/qwen-image-edit/
tezza
3d ago
Qwen won the latest models round last month…

https://generative-ai.review/2025/09/september-2025-image-ge... (non-pro Nano Banana)

dragonwriter
3d ago
> We will always have local models.

If watermarking becomes a legal mandate, it will inevitably include a prohibition on distributing (and using, and maybe even possessing) open models that do not include watermarking as a baked-in model feature. The distribution ban is the part that will have the most impact, since it is the most policeable, and most people aren't going to train their own models anyway, except, of course, the most motivated bad actors. So, for most users, it'll be much less accessible (and, at the same time, it won't solve the problem).

staplers
3d ago
1 reply

  have some kind of standardized identifier on them
Take this a step further and it'll be a personal identifying watermark (only the company can decode). Home printers already do this to some degree.
theoldgreybeard
3d ago
1 reply
yeah, personally identifying undetectable watermarks are kind of a terrifying prospect
overfeed
3d ago
It is terrifying, but inevitable. Perhaps AI companies flooding the commons with excrement wasn't the best idea, now we all have to suffer the consequences.
baby
3d ago
2 replies
It solves some problems! For example, if you want to run a camgirl website based on AI models and want to also prove that you're not exploiting real people
echelon
3d ago
1 reply
Your use case doesn't even make sense. What customers are clamoring for that feature? I doubt any paying customer in the market for (that product) cares. If the law cares, the law has tools to inquire.

All of this is trivially easy to circumvent ceremony.

Google is doing this to deflect litigation and to preserve their brand in the face of negative press.

They'll do this (1) as long as they're the market leader, (2) as long as there aren't dozens of other similar products - especially ones available as open source, (3) as long as the public is still freaked out / new to the idea anyone can make images and video of whatever, and (4) as long as the signing compute doesn't eat into the bottom line once everyone in the world has uniform access to the tech.

The idea here is that {law enforcement, lawyers, journalists} find a deep fake {illegal, porn, libelous, controversial} image and go to Google to ask who made it. That only works for so long, if at all. Once everyone can do this and the lookup hit rates (or even inquiries) are < 0.01%, it'll go away.

It's really so you can tell journalists "we did our very best" so that they shut up and stop writing bad articles about "Google causing harm" and "Google enabling the bad guys".

We're just in the awkward phase where everyone is freaking out that you can make images of Trump wearing a bikini, Tim Cook saying he hates Apple and loves Samsung, or the South Park kids deep faking each other into silly circumstances. In ten years, this will be normal for everyone.

Writing the sentence "Dr. Phil eats a bagel" is no different than writing the prompt "Dr. Phil eats a bagel". The former has been easy to do for centuries and required the brain to do some work to visualize. Now we have tools that previsualize and get those ideas as pixels into the brain a little faster than ASCII/UTF-8 graphemes. At the end of the day, it's the same thing.

And you'll recall that various forms of written text - and indeed, speech itself - have been illegal in various times, places, and jurisdictions throughout history. You didn't insult Caesar, you didn't blaspheme the medieval church, and you don't libel in America today.

shevy-java
3d ago
2 replies
> What customers are clamoring for that feature? If the law cares, the law has tools to inquire.

How can they distinguish real people being exploited from AI models autogenerating everything?

I mean, right now this is possible, largely because a lot of the AI videos have shortcomings. But imagine 5 years from now ...

krisoft
3d ago
> How can they distinguish real people being exploited from AI models autogenerating everything?

The people who care don't consume content that even just plausibly looks like real people being exploited. They wouldn't consume the content even if you pinky promised that the exploited-looking people are not real people. Even if you digitally signed that promise.

The people who don't care don't care.

dragonwriter
3d ago
> How can they distinguish real people being exploited from AI models autogenerating everything?

Watermarking by compliant models doesn't help this much because (1) models without watermarking exist and can continue to be developed (especially if absence of a watermark is treated as a sign of authenticity), so you cannot rely on AI fakery being watermarked, and (2) AI models can be used for video-to-video generation without changing much of the source, so you can't rely on something accurately watermarked as "AI-generated" not being based in actual exploitation.

Now, if the watermarking includes provenance information, and you require certain types of content to be watermarked not just as AI using a known watermarking system, but by a registered AI provider with regulated input data safety guardrails and/or retention requirements, and be traceable to a registered user, and...

Well, then it does something when it is present, largely by creating a new content gatekeeping cartel.

dragonwriter
3d ago
> It solves some problems! For example, if you want to run a camgirl website based on AI models and want to also prove that you're not exploiting real people

So, you exploit real people, but run your images through a realtime AI video transformation model doing either a close-to-noop transformation or something like changing the background so that it can't be used to identify the actual location if people do figure out you are exploiting real people, and then you have your real exploitation watermarked as AI fakery.

I don't think this is solving a problem, unless you mean a problem for the would-be exploiter.

akersten
3d ago
8 replies
Some days it feels like I'm the only hacker left who doesn't want government-mandated watermarking in creative tools. Were politicians 20 years ago as overreactive, they'd have demanded Photoshop leave a trace on anything it edited. The amount of moral panic is off the charts. It's still a computer, and we still shouldn't trust everything we see. The fundamentals haven't changed.
mlmonkey
3d ago
2 replies
You do know that every color copier comes with the ability to identify US currency and will refuse to copy it? And that every color printer leaves a pattern of faint yellow dots on every printout that uniquely identifies the printer?
potsandpans
3d ago
4 replies
And that's not a good thing.
fwip
3d ago
1 reply
Why not? Like, genuinely.
potsandpans
3d ago
I generally don't think it's good or just for a government to collude with manufacturers to track/trace its citizens without consent or notice. And even if notice were given, I'd still be against it.

The arguments put forward by people generally I don't find compelling -- for example, in this thread around protecting against counterfeit.

The "force" applied to address these concerns is totally out of proportion. Whenever these discussions happen, I feel like they descend into a general viewpoint, "if we could technically solve any possible crime, we should do everything in our power to solve it."

I'm against this viewpoint, and acknowledge that that means _some crime_ occurs. That's acceptable to me. I don't feel that society is correctly structured to "treat" crime appropriately, and technology has outpaced our ability to holistically address it.

Generally, I don't see (speaking for the US) the highest incarceration rate in the world to be a good thing, or being generally effective, and I don't believe that increasing that number will change outcomes.

oblio
3d ago
1 reply
It depends on how you're looking at it. For the people not getting handed counterfeit currency, it's probably a good thing.
fwip
3d ago
Also probably good for the people trying to counterfeit money with a printer, better not to end up in jail for that.
wing-_-nuts
3d ago
Nope, having a stable, trusted currency trumps whatever productive use one could have for an anonymous, currency-reproducing color printer.
mlmonkey
3d ago
I'm just responding to this by OP:

> Were politicians 20 years ago as overreactive, they'd have demanded Photoshop leave a trace on anything it edited.

sabatonfan
3d ago
1 reply
Is this something strictly with US currency notes, or is the same true for other countries' currency as well?
SaberTail
3d ago
It's most notes, and for EU and US notes (as well as some others), it's based on a certain pattern on the bills: https://en.wikipedia.org/wiki/EURion_constellation
darkwater
3d ago
3 replies
> It's still a computer, and we still shouldn't trust everything we see. The fundamentals haven't changed.

I think that by now it should be crystal clear to everyone that the sheer scale a new technology permits for $nefarious_intent matters a lot.

Knives (under a certain size) are not regulated. Guns are regulated in most countries. Atomic bombs are definitely regulated. They can all kill people if used badly, though.

When a photo was faked/composed with old tech, it was relatively easy to spot. With photoshop, it became more complicated to spot it but at the same time it wasn't easy to mass-produce altered images. Large models are changing the rules here as well.

hk__2
3d ago
2 replies
> Knives (under a certain size) are not regulated. Guns are regulated in most countries. Atomic bombs are definitely regulated

I don’t think this is a good comparison: knives are easy to produce, guns a bit harder, atomic bombs definitely harder. You should find something that is as easy to produce as a knife, but regulated.

wing-_-nuts
3d ago
1 reply
>You should find something that is as easy to produce as a knife, but regulated.

The DEA and ATF have entered the chat

withinboredom
3d ago
They can leave, plain water fits this bill.
darkwater
3d ago
The "product" to be regulated here is the LLM/model itself, not its output.

Or, if you see the altered photo as the "product", then the "product" of the knife/gun/bomb is the damage it creates to a human body.

csallen
3d ago
3 replies
I think we're overreacting. Digital fakes will proliferate, and we'll freak out bc it's new. But after a certain amount of time, we'll just get used to it and realize that the world goes on, and whatever major adverse effects actually aren't that difficult to deal with. Which is not the case with nuclear proliferation or things like that.

The story of human history is newer generations freaking out about progress and novel changes that have never been seen before, and later generations being perfectly okay with it and adapting to a new style of life.

SV_BubbleTime
3d ago
It shouldn’t be that we panic about it and regulate the hell out of it.

We could use the opportunity to deploy robust systems of verification and validation to all digital works. One that allows for proving authenticity while respecting privacy if desired. For example… it’s insane in the US we revolve around a paper social security number that we know damn well isn’t unique. Or that it’s a massive pain in the ass for most people to even check the hash of a download.

Guess which we’ll do!

sebzim4500
3d ago
I think the long term effect will be that photos and videos no longer have any evidentiary value legally or socially, absent a trusted chain of custody.
darkwater
3d ago
In general I concur but the adaptation doesn't come out of the blue or just only because people get used to it but also because countermeasures are taken, regulations are written and adjustments are made to reduce the negative impact. Also the hyperconnected society is still relatively new and I'm not sure we have adapted for it yet.
commandlinefan
3d ago
> a new technology permits for $nefarious_intent

But people with actual nefarious intent will easily be able to remove these watermarks, however they're implemented. This is copy protection and key escrow all over again - it hurts honest people and doesn't even slow down bad people.

mh-
3d ago
1 reply
Politicians absolutely were doing this 20-30 years ago. Plenty of folks here are old enough to remember debates on Slashdot around the Communications Decency Act, Child Online Protection Act, Children's Online Privacy Protection Act, Children's Internet Protection Act, et al.

https://en.wikipedia.org/wiki/Communications_Decency_Act

SV_BubbleTime
3d ago
It’s annoying how effective “for the children” is. People really just turn off their brains for it.
BeetleB
3d ago
1 reply
Easy to say until it impacts you in a bad way:

https://www.nbcnews.com/tech/tech-news/ai-generated-evidence...

> “My wife and I have been together for over 30 years, and she has my voice everywhere,” Schlegel said. “She could easily clone my voice on free or inexpensive software to create a threatening message that sounds like it’s from me and walk into any courthouse around the country with that recording.”

> “The judge will sign that restraining order. They will sign every single time,” said Schlegel, referring to the hypothetical recording. “So you lose your cat, dog, guns, house, you lose everything.”

At the moment, the only alternative is courts simply never accept photo/video/audio as evidence. I know if I were a juror I wouldn't.

At the same time, yeah, watermarks won't work. Sure, Google can add a watermark/fingerprint that is impossible to remove, but there will be tools that won't put such watermarks/fingerprints.

mkehrt
3d ago
1 reply
Testimony is evidence. I don't think most cases have any physical evidence.
BeetleB
3d ago
A lot of cases rely heavily on security camera footage.
llbbdd
3d ago
Unless they've recently changed it, Photoshop will actually refuse to open or edit images of at least US banknotes.
Der_Einzige
3d ago
HN is full of authoritarian bootlickers who can't imagine that people can exist without a paternalistic force to keep them from doing bad things.
rcruzeiro
3d ago
Try photocopying some US dollar bills.
dpark
3d ago
I suspect watermarking ends up being a net negative, as people learn to trust that lack of a watermark indicates authenticity. Propaganda won’t have the watermark.
mortenjorck
3d ago
1 reply
Reminder that even in the hypothetical world where every AI image is digitally watermarked, and all cameras have a TPM that writes a hash of every photo to the blockchain, there’s nothing to stop you from pointing that perfectly-verified camera at a screen showing your perfectly-watermarked AI image and taking a picture.

Image verification has never been easy. People have been airbrushed out of and pasted into photos for over a century; AI just makes it easier and more accessible. Expecting a “click to verify” workflow is as unreasonable as it has ever been; only media literacy and a bit of legwork can accomplish this task.

fwip
3d ago
1 reply
Competent digital watermarks usually survive the 'analog hole'. Screen-cam-resistant watermarks have been in use since at least 2020, and if memory serves, back to 2010 when I first started reading about them, but I don't recall what they were called back then.
simonw
3d ago
I just tried asking Gemini about a photo I took of my screen showing an image I edited with Nano Banana Pro... and it said "All or part of the content was generated with Google AI. SynthID detected in less than 25% of the image".

Photo-of-a-screen: https://gemini.google.com/share/ab587bdcd03e

It reported 25-50% for the image without having been through that analog hole: https://gemini.google.com/share/022e486fd6bf

losvedir
3d ago
2 replies
I'm sure Apple will roll something out in the coming years. Now that just anyone can easily AI themselves into a picture in front of the Eiffel tower, they'll want a feature that will let their users prove that they _really_ took that photo in front of the Eiffel tower (since to a lot of people sharing that you're on a Paris vacation is the point, more than the particular photo).

I bet it will be called "Real Photos" or something like that, and the pictures will be signed by the camera hardware. Then iMessage will put a special border around it or something, so that when people share the photos with other Apple users they can prove that it was a real photo taken with their phone's camera.

panarky
3d ago
1 reply
> a real photo taken with their phone's camera

How "real" are iPhone photos? They're also computationally generated, not just the light that came through the lens.

Even without any other post-processing, iPhones generate gibberish text when attempting to sharpen blurry images, delete actual textures and replace them with smooth, smeared surfaces that look like watercolor or oil paintings, and combine data from multiple frames to give dogs five legs.

wyre
3d ago
Don’t be a pedant. You know very well there is a big difference between a photo taken on an iPhone and a photo edited with Nano Banana.
pigpop
3d ago
Does anyone other than you actually care about your vacation photos?

There used to be a joke about people who did slideshows (on an actual slide projector) of their vacation photos at parties.

NoMoreNicksLeft
3d ago
I don't believe that you can do this for photography. For AI-images, if the embedded data has enough information (model identification and random seed), one can prove that it was AI by recreating it on the fly and comparing. How do you prove that a photographic image was created by a CCD? If your AI-generated image were good enough to pass, then hacking hardware (or stealing some crypto key to sign it) would "prove" that it was a real photograph.

Hell, it might even be possible for some arbitrary photographs to come up with an AI prompt that produces them or something similar enough to be indistinguishable to the human eye, opening up the possibility of "proving" something is fake even when it was actually real.

What you want just can't work, not even from a theoretical or practical standpoint, let alone the other concerns mentioned in this thread.

markdog12
3d ago
I asked Gemini "dynamic view" how SynthID works: https://gemini.google.com/share/62fb0eb38e6b
xnx
3d ago
SynthID has been in use for over 2 years.
lazide
3d ago
It solves a real problem - if you have something sketchy, the big players can repudiate it, the authorities can more formally define the black market, and we can have a ‘war on deepfakes’ to further enable the authorities in their attempts to control the narratives.
swatcoder
3d ago
The incentive for commercial providers to apply watermarks is so that they can safely route and classify generated content when it gets piped back in as training or reference data from the wild. That it's something that some users want is mostly secondary, although it is something they can earn some social credit for by advertising.

You're right that there will be generated content without these watermarks, but you can bet that all the commercial providers burning $$$$ on state-of-the-art models will gradually coalesce around some means of widespread, by-default/non-optional watermarking for content they let the public generate, so that they can all avoid drowning in their own filth.

vunderba
3d ago
Regardless of how you feel about this kind of steganography, it seems clear that outside of a courtroom, deepfakes still have the potential to do massive damage.

Unless the watermark randomly replaces objects in the scene with bananas, these images/videos will still spread like wildfire on platforms like TikTok, where the average netizen's idea of due diligence is checking for a six‑fingered hand... at best.

DenisM
3d ago
It would be more productive for camera manufacturers to embed a per-device digital signature. Those who care to prove their image is genuine could publish both pre- and post-processed images for transparency.
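
A minimal sketch of that signing idea using Ed25519 from the Python cryptography package; key distribution, secure hardware, and the analog-hole problem discussed elsewhere in the thread are all out of scope:

  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
  from cryptography.exceptions import InvalidSignature

  device_key = Ed25519PrivateKey.generate()     # would live in the camera's secure element
  public_key = device_key.public_key()          # published by the manufacturer per device

  with open("capture.jpg", "rb") as f:
      photo = f.read()

  signature = device_key.sign(photo)            # shipped as metadata alongside the file

  try:
      public_key.verify(signature, photo)       # raises if the bytes were altered
      print("signature valid for this exact file")
  except InvalidSignature:
      print("file modified after capture, or signed by a different device")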
domoritz
3d ago
I don't understand why there isn't an obvious, visible watermark at all. Yes, one could remove it but let's assume 95% of people don't bother removing the visible watermark. It would really help with seeing instantly when an image was AI generated.
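
For what it's worth, the visible label is a few lines with Pillow (a sketch; file names are placeholders), and as the comment concedes, anyone motivated can crop or inpaint it away:

  from PIL import Image, ImageDraw

  img = Image.open("generated.png").convert("RGBA")
  overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
  draw = ImageDraw.Draw(overlay)
  draw.text((10, img.height - 30), "AI-generated", fill=(255, 255, 255, 140))  # translucent stamp
  Image.alpha_composite(img, overlay).convert("RGB").save("generated_labeled.jpg")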
gigel82
3d ago
We need to be super careful with how legislation around this is passed and implemented. As it currently stands, I can totally see this as a backdoor to surveillance and government overreach.

If social media platforms are required by law to categorize content as AI-generated, this means they need to check with the public "AI generation" providers. And since there is no agreed-upon (public) standard for imperceptible watermark hashing, that means the content (image, video, audio) in its entirety needs to be uploaded to the various providers to check if it's AI-generated.

Yes, it sounds crazy, but that's the plan; imagine every image you post on Facebook/X/Reddit/Whatsapp/whatever gets uploaded to Google / Microsoft / OpenAI / UnnamedGovernmentEntity / etc. to "check if it's AI". That's what the current law in Korea and the upcoming laws in California and EU (for August 2026) require :(

zaidf
3d ago
This is what C2PA is trying to do: https://c2pa.org/
dangoodmanUT
3d ago
5 replies
I've had nano banana pro for a few weeks now, and it's the most impressive AI model I've ever seen

The inline verification of images following the prompt is awesome, and you can do some _amazing_ stuff with it.

It's probably not as fun anymore though (in the early access program, it doesn't have censoring!)

echelon
3d ago
3 replies
LLMs might be a dead end, but we're going to have amazing images, video, and 3D.

To me the AI revolution is making visual media (and music) catch up with the text-based revolution we've had since the dawn of computing.

Computers accelerated typing and text almost immediately, but we've had really crude tools for images, video, and 3D despite graphics and image processing algorithms.

AI really pushes the envelope here.

I think images/media alone could save AI from "the bubble" as these tools enable everyone to make incredible content if you put the work into it.

Everyone now has the ingredients of Pixar and a music production studio in their hands. You just need to learn the tools and put the hours in and you can make chart-topping songs and Hollywood grade VFX. The models won't get you there by themselves, but using them in conjunction with other tools and understanding as to what makes good art - that can and will do it.

Screw ChatGPT, Claude, Gemini, and the rest. This is the exciting part of AI.

dangoodmanUT
3d ago
1 reply
I wouldn’t call LLMs a dead end, they’re so useful as-is
echelon
3d ago
1 reply
LLMs are useful, but they've hit a wall on the path to automating our jobs. Benchmark scores are just getting better at test taking. I don't see them replacing software engineers without overcoming obstacles.

AI for images, video, music - these tools can already make movies, games, and music today with just a little bit of effort by domain experts. They're 10,000x time and cost savers. The models and tools are continuing to get better on an obvious trend line.

atonse
3d ago
I'm literally a software engineer, and a business owner. I don't think about this in binary terms (replacement or not), but just like CMS's replaced the jobs of people that write HTML by hand to build websites, I think whole classes of software development will get democratized.

For example, I'm currently vibe coding an app that will be specific to our company, that helps me run all the aspects of our business and integrates with our systems (so it'll integrate with quickbooks for invoicing, etc), and help us track whether we have the right insurance across multiple contracts, will remind me about contract deadlines coming up, etc.

It's going to combine the information that's currently in about 10 different slightly out of sync spreadsheets, about 2 dozen google docs/drive files, and multiple external systems (Gusto, Quickbooks, email, etc).

Even though I could build all this manually (as a software developer), I'd never take the time to do it, because it takes away from client work. But now I can actually do it because the pace is 100x faster, and in the background while I'm doing client work.

Sevii
3d ago
How can LLMs be a dead end? The last improvement in LLMs came out this week.
dyauspitr
3d ago
Doesn’t seem like a dead end at all. Once we can apply LLMs to the physical world and its outputs control robot movements it’s essentially game over for 90% of the things humans do, AGI or not.
refulgentis
3d ago
1 reply
"Inline verification of images following the prompt is awesome, and you can do some _amazing_ stuff with it." - could you elaborate on this? sounds fascinating but I couldn't grok it via the blog post (like, it this synthid?)
dangoodmanUT
3d ago
It uses Gemini 3 inline with the reasoning to make sure it followed the instructions before giving you the output image
vunderba
3d ago
I'd be curious about how well the inline verification works - an easy example is to have it generate a 9-pointed star, a classic example that many SOTA models have difficulties with.

In the past, I've deliberately stuck a Vision-language model in a REPL with a loop running against generative models to try to have it verify/try again because of this exact issue.

EDIT: Just tested it in Gemini - it either didn't use a VLM to actually look at the finished image or the VLM itself failed.

Output:

  I have finished cross-referencing the image against the user's specific requests. The primary focus was on confirming that the number of points on the star precisely matched the requested nine. I observed a clear visual representation of a gold-colored star with the exact point count that the user specified, confirming a complete and precise match.

Result:

  Bog standard star with *TEN POINTS*.
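
The external loop described above (generate, then have a separate vision pass grade the result and retry) might look roughly like this; it again assumes the google-genai SDK, the model ids are placeholders, and Nano Banana Pro's built-in verification is presumably more tightly integrated than this:

  from google import genai
  from google.genai import types

  client = genai.Client(api_key="PAID_API_KEY")
  prompt = "A gold nine-pointed star on a plain dark background"

  for attempt in range(3):
      gen = client.models.generate_content(
          model="gemini-3-pro-image-preview",
          contents=prompt,
          config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
      )
      image_part = next(p for p in gen.candidates[0].content.parts if p.inline_data)

      check = client.models.generate_content(
          model="gemini-2.5-flash",     # separate vision pass acting as the judge
          contents=[
              types.Part.from_bytes(data=image_part.inline_data.data, mime_type="image/png"),
              "Count the points on the star. Answer with just the number.",
          ],
      )
      if check.text.strip() == "9":
          break
      prompt += " (previous attempt did not have exactly nine points; fix that)"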
bn-l
3d ago
How did you get early access?!
spaceman_2020
3d ago
Genuinely believe that images are 99.5% solved now, and unless you’re extremely keen-eyed, you won’t be able to tell AI images from real images anymore.
ZeroCool2u
3d ago
3 replies
I tried the Studio Ghibli prompt on a photo of me and my wife in Japan and it was... not good. It looked more like a hand-drawn sketch made with colored pencils, but none of the colors were correct. Everything was a weird shade of yellow/brown.

This has been an oddly difficult benchmark for Gemini's NB models. Google's image models have always been pretty bad at the Studio Ghibli prompt, but I'm shocked at how poorly it still performs at this task.

jeffbee
3d ago
1 reply
I wonder ... do you think they might not be chasing that particular metric?
ZeroCool2u
3d ago
Sure! But it's weird how far off it is in terms of capability.
xnx
3d ago
1 reply
You might try it again with style transfer: 1 image of style to apply to 1 target image
ZeroCool2u
3d ago
This is a good idea, will give it a try!
skocznymroczny
3d ago
Could be they are specifically training against it. There was some controversy about "Studio Ghibli style". Similar to how, in the early days of Stable Diffusion, "Greg Rutkowski style" was a very popular prompt to get a specific look. These days, modern Stable Diffusion-based models like SD 3 or FLUX have mostly removed references to specific artists from their datasets.
Shalomboy
3d ago
1 reply
The SynthID check for fishy photos is a step in the right direction, but without tighter integration into everyday tooling it's not going to move the needle much. Like when I hold the power button on my Pixel 9, it would be great if it could identify synthetic images on the screen before I think to ask about it. For what it's worth, it would be great if the power button shortcut on Pixel did a lot more things.
Deathmax
3d ago
You sort of can on Android, but it's a few steps:

1. Trigger Circle to Search with long holding the home button/bar

2. Select the image

3. Navigate to About this image on the Google search top bar all the way to the right - check if it says "Made by Google AI" - which means it detected the SynthID watermark.

scottlamb
3d ago
The rollout doesn't seem to have reached my userid yet. How successful are people at getting these things to actually produce useful images? I was trying recently with the (non-Pro) Nano Banana to see what the fuss was about. As a test case, I tried to get it to make a diagram of a zipper merge (in driving), using numbered arrows to indicate what the first, second, third, etc. cars should do.

I had trouble reliably getting it to...

* produce just two lanes of traffic

* have all the cars facing the same way—sometimes even within one lane they'd be facing in opposite directions.

* contain the construction within the blocked-off area. I think similarly it wouldn't understand which side was supposed to be blocked off. It'd also put the lane closure sign in lanes that were supposed to be open.

* have the cars be in proportion to the lane and road instead of two side-by-side within a lane.

* have the arrows go in the correct direction instead of veering into the shoulder or U-turning back into oncoming traffic

* use each number once, much less on the correct car

This is consistent with my understanding of how LLMs work, but I don't understand how you can "visualize real-time information like weather or sports" accurately with these failings.

Below is one of the prompts I tried to go from scratch to an image:

> You are an illustrator for a drivers' education handbook. You are an expert on US road signage and traffic laws. We need to prepare a diagram of a "zipper merge". It should clearly show what drivers are expected to do, without distracting elements.

> First, draw two lanes representing a single direction of travel from the bottom to the top of the image (not an entire two-way road), with a dotted white line dividing them. Make sure there's enough space for the several car-lengths approaching a construction site. Include only the illustration; no title or legend.

> Add the construction in the right lane only near the top (far side). It should have the correct signage for lane closure and merging to the left as drivers approach a demolished section. The left lane should be clear. The sign should be in the closed lane or right shoulder.

> Add cars in the unclosed sections of the road. Each car should be almost as wide as its lane.

> Add numbered arrows #1–#5 indicating the next cars to pass to the left of the "lane closed" sign. They should be in the direction the cars will move: from the bottom of the illustration to the top. One car should proceed straight in the left lane, then one should merge from the right to the left (indicate this with a curved arrow), another should proceed straight in the left, another should merge, and so on.

I did have a bit better luck starting from a simple image and adding an element to it with each prompt. But on the other hand, when I did that it wouldn't do as well at keeping space for things. And sometimes it just didn't make any changes to the image at all. A lot of dead ends.

I also tried sketching myself and having it change the illustration style. But it didn't do it completely. It turned some of my boxes into cars but not necessarily all of them. It drew a "proper" lane divider over my thin dotted line but still kept the original line, etc.

saretup
3d ago
Interesting they didn’t post any benchmark results - lmarena/artificial analysis etc. I would’ve thought they’d be testing it behind the scenes the same way they did with Gemini 3.
wnevets
3d ago
does it handle transparency yet?
jpadkins
3d ago
really missed an opportunity to name it micro banana (or milli banana). Personally I can't wait for mega banana next year.
meetpateltech
3d ago
Developer Blog: https://blog.google/technology/developers/gemini-3-pro-image...

DeepMind Page: https://deepmind.google/models/gemini-image/pro/

Model Card: https://storage.googleapis.com/deepmind-media/Model-Cards/Ge...

SynthID in Gemini: https://blog.google/technology/ai/ai-image-verification-gemi...

519 more comments available on Hacker News

View full discussion on Hacker News
ID: 45993296 · Type: story · Last synced: 11/24/2025, 7:00:33 AM

