
Nano Banana can be prompt engineered for nuanced AI image generation

869 points
233 comments

Mood

excited

Sentiment

positive

Category

tech

Key topics

AI image generation

prompt engineering

Nano Banana

Debate intensity: 80/100

Nano Banana (Gemini 2.5 Flash Image) can be prompt engineered for nuanced AI image generation, allowing for more specific and detailed outputs.

Snapshot generated from the HN discussion

Discussion Activity

Very active discussion

First comment

38m

Peak period

151 comments (Day 1)

Avg / period

80

Comment distribution: 160 data points

Based on 160 loaded comments

Key moments

  1. Story posted: 11/13/2025, 5:39:13 PM (5d ago)
  2. First comment: 11/13/2025, 6:16:56 PM (38m after posting)
  3. Peak activity: 151 comments in Day 1 (the hottest window of the conversation)
  4. Latest activity: 11/15/2025, 12:26:08 AM (4d ago)


Discussion (233 comments)
Showing 160 of 233 comments
doctorpangloss
5d ago
2 replies
lots of words

okay, look at imagen 4 ultra:

https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%...

In this link, Imagen is instructed to render the verbatim prompt “the result of 4+5”, and it shows that text; when not so instructed, it renders “4+5=9”

Is Imagen thinking?

Let's compare to gemini 2.5 flash image (nano banana):

look carefully at the system prompt here: https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%...

Gemini is instructed to reply in images first, and if it thinks, to think using the image thinking tags. It seemingly cannot be prompted to show the verbatim text “the result of 4+5” without showing the answer “4+5=9”. Of course it can show whatever exact text you want; the question is, does it prompt-rewrite (no) or do something else (yes)?

compare to ideogram, with prompt rewriting: https://ideogram.ai/g/GRuZRTY7TmilGUHnks-Mjg/0

without prompt rewriting: https://ideogram.ai/g/yKV3EwULRKOu6LDCsSvZUg/2

We can do the same exercises with Flux Kontext for editing versus Flash-2.5, if you think that editing is somehow unique in this regard.

Is prompt rewriting "thinking"? My point is, this article can't answer that question without dElViNg into the nuances of what multi-modal models really are.

gryfft
5d ago
Can you provide screenshots or links that don't require a login?
PunchTornado
5d ago
Sorry, but I don't understand your post. Those links don't work.
dostick
5d ago
3 replies
Use Google AI Studio to submit requests. To remove the watermark, open the browser developer tools, right-click the request for the “watermark_4” image, and select the option to block it. From the next generation on, there will be no watermark!
billynomates
4d ago
2 replies
That sounds dangerous honestly. Watermarks should be mandatory for AI generated images.
dieortin
4d ago
This only applies to the visible watermark in the corner, which you could crop anyway. If I’m not mistaken, all images generated by Google models have an invisible watermark: https://deepmind.google/models/synthid/
dymk
4d ago
How would you enforce that when it’s actually important? Any “bad actor” could just open Photoshop and remove it, or run a delobotomized model which doesn’t watermark.
dreis_sw
4d ago
So the watermark is being added to the image on the client-side? That's pretty bad
wormpilled
4d ago
Can't believe that worked, thanks!
squigz
5d ago
4 replies
I'm getting annoyed by "prompt engineered" being used as a verb. Does this mean I'm finally old and bitter?

(Do we say we software engineered something?)

vpShane
5d ago
1 reply
You're definitely old and bitter, welcome to it.

You CREATED something, and I like to think that creating things that I love and enjoy and that others can love and enjoy makes creating things worth it.

squigz
5d ago
1 reply
Don't get me wrong, I have nothing against using AI as an expression of creativity :)
malcolmxxx
5d ago
Create? So I have created all that code I'm running on my site? Yes, it's bad, I know, but thank you very much! Such a creative guy I was!
officeplant
5d ago
2 replies
Not really since "prompt engineering" can be tossed in the same pile as "vibe coding." Just people coping with not developing the actual skills to produce the desired products.
bongodongobob
5d ago
1 reply
Couldn't care less. I don't need to know how to do literally everything. AI fills in my gaps and I'm a ton more productive.
squigz
5d ago
I wouldn't bother trying to convince people who are upset that others have figured out a way to use LLMs. It's not logical.
koakuma-chan
5d ago
Try getting a small model to do what you want quickly, with high accuracy, high quality, etc., while using few tokens per request. You'll find out that prompt engineering is real and matters.
pavlov
5d ago
I think it’s meant to be engineering in the same sense as “social engineering”.
antegamisou
5d ago
No, it means you can still discern what is BS.
miladyincontrol
5d ago
4 replies
There's lots these models can do, but I despise when people suggest they can do edits with "only the necessary aspects changed".

No, that simply is not true. If you actually compare the before and after, you can see it still regenerates all the details of the "unchanged" aspects. Texture, lighting, sharpness, even scale: it's all different, even if varyingly similar to the original.

Sure, they're cute for casual edits, but it really pains me when people suggest these things are suitable replacements for actual photo editing. Especially when it comes to people, or details outside their training data, there's a lot of nuance that can be lost as it regenerates them, no matter how you prompt things.

Even if you

miohtama
5d ago
1 reply
Could you just mask out the area you wish to change in more advanced tools, or is there something in the model itself which would prevent this?
lunarboy
5d ago
That's probably where things are headed, and there are already products trying this (Photoshop, for one). Just like how code-gen AI tools don't replace the entire file on every prompt iteration.
StevenWaterman
5d ago
That is true for gpt-image-1 but not Nano Banana, which can do masked image changes.
BoredPositron
5d ago
Nano Banana has really low spatial scaling and doesn't alter details the way other models do.
minimaxir
5d ago
Nano Banana is different and much better at edits without changing texture/lighting/sharpness/color balance, and I am someone who is extremely picky about that. That's why I added the note that Gemini 2.5 Flash is aware of segmentation masks; my hunch is that's why this is the case.
mkagenius
5d ago
1 reply
> Nano Banana is still bad at rendering text perfectly/without typos as most image generation models.

I figured out that if you write the text in Google Docs and share the screenshot with Banana, it will not make any spelling mistakes.

So something like "can you write my name on this Wimbledon trophy, both images are attached. Use them" will work.

minimaxir
5d ago
1 reply
Google's example documentation for Nano Banana does demo that pipeline: https://ai.google.dev/gemini-api/docs/image-generation#pytho...

That's on my list of blog-post-worthy things to test, namely rendering text to an image directly in Python and passing both input images to the model for compositing.
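
For reference, here's a minimal sketch of that pipeline, assuming the google-genai SDK and Pillow; the model name, file names, and exact prompt are placeholder assumptions, not details from the post:

  # Sketch: render exact text deterministically with Pillow, then hand both
  # images to the model for compositing. Assumes `pip install google-genai pillow`
  # and GEMINI_API_KEY in the environment; "trophy.png" is a stand-in input.
  from PIL import Image, ImageDraw
  from google import genai

  text_img = Image.new("RGB", (512, 128), "white")
  ImageDraw.Draw(text_img).text((16, 48), "JOHN DOE", fill="black")  # no typos possible

  client = genai.Client()  # reads GEMINI_API_KEY from the environment
  response = client.models.generate_content(
      model="gemini-2.5-flash-image",  # a.k.a. Nano Banana
      contents=[Image.open("trophy.png"), text_img,
                "Engrave the text from the second image onto the trophy in the first image."],
  )
  for part in response.candidates[0].content.parts:
      if part.inline_data:  # the returned composited image
          open("out.png", "wb").write(part.inline_data.data)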

mkagenius
4d ago
Yeah, close.

But it is still generating it with a prompt:

> Logo: "A simple, modern logo with the letters 'G' and 'A' in a white circle.

My idea was to do it manually so that there are no probabilities involved.

Though your idea of using Python is the same.

ml-anon
5d ago
7 replies
"prompt engineered"...i.e. by typing in what you want to see.
harpiaharpyja
5d ago
1 reply
Not all models can actually do that if your prompt is particular
pksebben
5d ago
3 replies
Most designers can't, either. Defining a spec is a skill.

It's actually fairly difficult to put into words a vision specific enough that it becomes understandable outside of your own head. This goes for pretty much anything, too.

Razengan
5d ago
1 reply
Yep, knowing how and what to ask is a skill.

For anything, even back in the "classical" search days.

pksebben
5d ago
at least then, we had hard overrides that were actually hard.

"This got searched verbatim, every time"

W*ldcards were handy

and so on...

Now, you get a 'system prompt' which is a vague promise that no really this bit of text is special you can totally trust us (which inevitably dies, crushed under the weight of an extended context window).

Unfortunately(?), I think this bug/feature has gotta be there. It's the price for the enormous flexibility. Frankly, I'd not be mad if we had less control - my guess is that in not too many years we're going to look back on RLHF and grimace at our draconian methods. Yeah, if you're only trying to build a "get the thing I intend done" machine I guess it's useful, but I think the real power in these models is in their propensity to expose you to new ideas and provide a tireless foil for all the half-baked concepts that would otherwise not get room to grow.

deathanatos
5d ago
2 replies
… sure … but also no. For example, say I have an image with 3 people in it; there is a speech bubble above the person on the right that reads "I'A'T AY RO HERT YOU THE SAP!"¹

I give it,

  Reposition the text bubble to be coming from the middle character.

  DO NOT modify the poses or features of the actual characters. 
Now sure, specs are hard. Gemini removed the text bubble entirely. Whatever, let's just try again:

  Place a speech bubble on the image. The "tail" of the bubble should make it appear that the middle (red-headed) girl is talking. The speech bubble should read "Hide the vodka." Use a Comic Sans like font. DO NOT place the bubble on the right.

  DO NOT modify the characters in the image.
There's only one red-head in the image; she's the middle character. We get a speech bubble, correctly positioned, but with a sans-serif, Arial-ish font, not Comic Sans. It reads "Hide the vokda" (sic). The facial expression of the middle character has changed.

Yes, specs are hard. Defining a spec is hard. But Gemini struggles to follow the specification given. Whole sessions are like this, an absolute struggle to get basic directions followed.

You can even see here that I and the author have started to learn the SHOUT AT IT rule. I suppose I should try more bulleted lists. Someone might learn, through experimentation, "okay, the AI has these hidden idiosyncrasies that I can abuse to get what I want", but … that's not a good thing; that's just an undocumented API with a terrible UX.

(¹because that is what the AI on a previous step generated. No, that's not what was asked for. I am astounded TFA generated an NYT logo for this reason.)

minimaxir
5d ago
The NYT logo being rendered well makes sense because it's a logo, not a textual concept.
pksebben
4d ago
You're right, of course. These models have deficiencies in their understanding related to the sophistication of the text encoder and its relationship to the underlying tokenizer.

Which is exactly why the current discourse is about 'who does it best' (IMO, the flux series is top dog here. No one else currently strikes the proper balance between following style / composition / text rendering quite as well). That said, even flux is pretty tricky to prompt - it's really, really easy to step on your own toes here - for example, by giving conflicting(ish) prompts "The scene is shot from a high angle. We see the bottom of a passenger jet".

Talking to designers has the same problem. "I want a nice, clean logo of a distressed dog head. It should be sharp with a gritty feel." For the person defining the spec, they actually do have a vision that fits each criterion in some way, but it's unclear which parts apply to what.

simonw
5d ago
1 reply
... and then iterating on that prompt many times, based on your accumulated knowledge of how best to prompt that particular model.
minimaxir
5d ago
1 reply
Case in point: the final image in this post (the IP bonanza) took 28 iterations of the prompt text to get something maximally interesting, which is why that one is very particular about the constraints it invokes, such as specifying "distinct" characters and specifying that they are present from "left to right", because the model kept exploiting that ambiguity.
chankstein38
5d ago
1 reply
Hey! To the author: thank you for this post! Quick question: any idea roughly how much this experimentation cost you? I'm having trouble parsing their image generation pricing; I may just not be finding the right table. I'm trying to understand: if I do around 50 iterations at the quality in the post, how much is that going to cost me?
minimaxir
5d ago
All generations in the post are $0.04/image (Nano Banana doesn't have a way to increase the resolution, yet), so you can do the math and assume you can generate about 24 images per dollar: unlike other models, Nano Banana does charge for input tokens, but the cost is negligible.

Discounting the testing around the character JSON, which became extremely expensive due to extreme iteration/my own stupidity, I'd wager it took about $5 total including iteration.
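
To make the arithmetic for the 50-iteration question above concrete, a back-of-envelope sketch using only the $0.04/image figure from this comment:

  price_per_image = 0.04  # USD per Nano Banana generation, per this comment
  iterations = 50         # the workload asked about upthread
  print(f"${iterations * price_per_image:.2f}")  # -> $2.00, plus negligible input-token cost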

w_for_wumbo
5d ago
2 replies
Yes, that is a serious skill. How many of the woes we see are because people don't know what they want, or are unable to describe it in such a way that others understand it? I believe "prompt engineer" properly conveys how complex communication can be when interacting with a multitude of perspectives, world views, assumptions, presumptions, etc. I believe it works well to counter the over-confidence people have from not paying attention to the gaps between what is said and what is meant.
CobrastanJorji
5d ago
1 reply
Yes, obviously a role involving complex communication while interacting with a multitude of perspectives, world views, assumptions, presumptions, etc needs to be called "engineer."

That is why I always call technical writers "documentation engineers," why I call diplomats "international engineers," why I call managers "team engineers," and why I call historians "hindsight engineers."

w_for_wumbo
5d ago
3 replies
I believe you're joking here, but I do think it'd be useful to have some engineering background in each of these domains. The number of miscommunications that happen in any domain, due to oversight, presumptions, and assumptions, is vast. At the very least the terminology will shape how we engage with it, so having an aspirational title like prompt engineer may influence the level of rigor we apply to it.
croon
5d ago
I think what you're describing is more commonly filed under epistemology, within philosophy, and I agree that it would be a useful background in each of those domains, but for some reason in the last few decades we have downgraded the humanities as less useful.

So Prompt Philosopher/Communicator?

ml-anon
4d ago
it’s really unclear whether this is satire.
drw85
4d ago
I don't think that's the right direction to go in.

Despite needing deep knowledge of how a plane's inner workings function, a pilot is still a pilot and not an aircraft engineer.

Just because you know how human psychology works when it comes to making purchase decisions, and you are good at applying that to sell things, you're not a sales engineer.

Giving something a fake name to make it seem more complicated or aspirational than it actually is makes you a bullshit engineer, in my opinion.

thisOtterBeGood
5d ago
It IS a skill. And most often it is disregarded by those who have not yet conquered it ...
mensetmanusman
5d ago
We understand now that we interface with LLMs using natural and unnatural language as the user interface.

This is a very different fuzzy interface compared to programming languages.

There will be techniques better or worse at interfacing.

This is what the term prompt engineering is alluding to since we don’t have the full suite of language to describe this yet.

yieldcrv
5d ago
right? 15 months ago in image models you had to designate rendering specifications and know the art of negative prompting

now you can really use natural language, and people want to debate you about how poor they are at articulating shared concepts, amazing

it's like the people are regressing and the AI is improving

jazzyjackson
5d ago
Used to be called Google Fu
darepublic
5d ago
"amenable to highly specific and granular instruction"
sebzim4500
5d ago
1 reply
It's really cool how good of a job it did rendering a page given its HTML code. I was not expecting it to do nearly as well.
kridsdale1
5d ago
Same. This must come from training on sites that show HTML next to screenshots of the pages.
leviathant
5d ago
1 reply
I was kind of surprised by this line:

>Nano Banana is terrible at style transfer even with prompt engineering shenanigans

My context: I'm kind of fixated on visualizing my neighborhood as it would have appeared in the 18th century. I've been doing it in Sketchup, and then in Twinmotion, but neither of those produce "photorealistic" images... Twinmotion can get pretty close with a lot of work, but that's easier with modern architecture than it is with the more hand-made, brick-by-brick structures I'm modeling out.

As different AI image generators have emerged, I've tried them all in an effort to add the proverbial rough edges to snapshots of the models I've created, and it was not until Nano Banana that I ever saw anything even remotely workable.

Nano Banana manages to maintain the geometry of the scene, while applying new styles to it. Sometimes I do this with my Twinmotion renders, but what's really been cool to see is how well it takes a drawing, or engraving, or watercolor - and with as simple a prompt as "make this into a photo" it generates phenomenal results.

Similarly to the Paladin/Starbucks/Pirate example in the link though, I find that sometimes I need to misdirect a little bit, because if I'm peppering the prompt with details about the 18th century, I sometimes get a painterly image back. Instead, I'll tell it I want it to look like a photograph of a well preserved historic neighborhood, or a scene from a period film set in the 18th century.

As fantastic as the results can be, I'm not abandoning my manual modeling of these buildings and scenes. However, Nano Banana's interpretation of contemporary illustrations has helped me reshape how I think about some of the assumptions I made in my own models.

echelon
5d ago
1 reply
You can't take a highly artistic image and supply it as a style reference. Nano Banana can't generalize to anything not in its training.
leviathant
5d ago
Fair enough! I suppose I've avoided that kind of "style transfer" for a variety of reasons, it hadn't even occurred to me that people were still interested in that. And I don't say that to open up debate on the topic, just explaining away my own ignorance/misinterpretation. Thanks
simonw
5d ago
4 replies
I like the Python library that accompanies this: https://github.com/minimaxir/gemimg

I added a CLI to it (using Gemini CLI) and submitted a PR, you can run that like so:

  GEMINI_API_KEY="..." \
  uv run --with https://github.com/minimaxir/gemimg/archive/d6b9d5bbefa1e2ffc3b09086bc0a3ad70ca4ef22.zip \
    python -m gemimg "a racoon holding a hand written sign that says I love trash"
Result in this comment: https://github.com/minimaxir/gemimg/pull/7#issuecomment-3529...
echelon
5d ago
3 replies
The author went to great lengths about open source early on. I wonder if they'll cover the QwenEdit ecosystem.

I'm exceptionally excited about Chinese editing models. They're getting closer and closer to NanoBanana in terms of robustness, and they're open source. This means you can supply masks and kernels and do advanced image operations, integrate them into visual UIs, etc.

You can even fine tune them and create LoRAs that will do the style transferring tasks that Nano Banana falls flat on.

I don't like how closed the frontier US models are, and I hope the Chinese kick our asses.

That said, I love how easy it'll be to distill Nano Banana into a new model. You can pluck training data right out of it: ((any image, any instruction) -> completion) tuples.

minimaxir
5d ago
2 replies
I've been keeping an eye on Qwen-Edit/Wan 2.2 shenanigans and they are interesting; however, actually running those types of models is too cumbersome, and in the end it's unclear if it's actually worth it over the $0.04/image for Nano Banana.
CamperBob2
5d ago
1 reply
I was skeptical about the notion of running similar models locally as well, but the person who did this (https://old.reddit.com/r/StableDiffusion/comments/1osi1q0/wa... ) swears that they generated it locally, just letting a single 5090 crunch away for a week.

If that's true, it seems worth getting past the 'cumbersome' aspects. This tech may not put Hollywood out of business, but it's clear that the process of filmmaking won't be recognizable in 10 years if amateurs can really do this in their basements today.

rcarr
5d ago
Neural Viz has been putting out some extremely high-quality content recently; these seem to be the closest I've seen to approaching Hollywood level:

https://www.youtube.com/watch?v=5bYA2Rv2CQ8

https://www.youtube.com/watch?v=rfTnW8pl3DE

braebo
5d ago
1 reply
Takes a couple mouse clicks in ComfyUI
echelon
5d ago
1 reply
On that subject - ComfyUI is not the future of image gen. It's an experimental rope bridge.

Adobe's conference last week points to the future of image gen. Visual tools where you mold images like clay. Hands on.

Comfy appeals to the 0.01% that like toolkits like TouchDesigner, Nannou, and ShaderToy.

mh-
5d ago
1 reply
Got a link handy to a video of what you're referring to from Adobe's conference? Gave it a quick google but there's a lot of content. Thanks!
echelon
4d ago
1 reply
They demoed a ton of new features in various stages of completion. Some of them are already production-grade and are being launched soon.

https://www.youtube.com/watch?v=YqAAFX1XXY8 - dynamic 3D scene relighting is insane, check out the 3:45 mark.

https://www.youtube.com/watch?v=BLxFn_BFB5c - molding photos like clay in 3D is absolutely wild at the 3:58 mark.

I don't have links to everything. They presented a deluge of really smart editing tools and gave their vision for the future of media creation.

Tangible, moldable, visual, fast, and easy.

mh-
4d ago
Thank you! Will take a look. That's really exciting.
msp26
5d ago
1 reply
> I don't like how closed the frontier US models are, and I hope the Chinese kick our asses.

For imagegen, agreed. But for textgen, Kimi K2 thinking is by far the best chat model at the moment from my experience so far. Not even "one of the best", the best.

It has frontier-level capability, and the model was made very tastefully: it's significantly less sycophantic and more willing to disagree in a productive, reasonable way rather than immediately shutting you out. It's also way funnier at shitposting.

I'll keep using Claude a lot for multimodality and artifacts, but much of my usage has shifted to K2. Claude's sycophancy in particular is tiresome. I don't use ChatGPT/Gemini because they hide the raw thinking tokens, which is really cringe.

astrange
5d ago
1 reply
Claude Sonnet 4.5 doesn't even feel sycophantic in the 4o way; it feels like it has BPD. It switches from desperately agreeing with you to moralizing lectures, and then has a breakdown if you point out it's wrong about anything.

Also, yesterday I asked it a question and after the answer it complained about its poorly written system prompt to me.

They're really torturing their poor models over there.

dontlikeyoueith
5d ago
It rubs the data on its skin or else it gets the prompt again!
vunderba
5d ago
1 reply
The Qwen-Edit images on my GenAI Image Editing Showdown site were all generated from a ComfyUI workflow on my machine; it's shockingly good for an open-weight model. It was also the only model that scored a passing grade on the Van Halen M&M test (even compared against Nano Banana).

https://genai-showdown.specr.net/image-editing

irthomasthomas
4d ago
Ha, I created a Van Halen M&M test for text prompts. I would include an instruction demanding that the response contain <yellow_m&m> and <red_m&m> but never <brown_m&m>. Then I would fail any LLM that did not include any M&Ms, or that wrote anything about the <brown_m&m> in the final output.
ctippett
5d ago
1 reply
Any reason for not also adding a project.scripts entry for pyproject.toml? That way the CLI (great idea btw) could be installed as a tool by uv.
simonw
5d ago
I decided to avoid that purely to keep changes made to the package as minimal as possible: adding a project.scripts entry means installing it adds a new command alias. My approach changes nothing other than making "python -m gemimg" do something useful.

I agree that a project.scripts would be good but that's a decision for the maintainer to take on separately!

sorcercode
5d ago
2 replies
@simonw: slight tangent but super curious how you managed to generate the preview of that gemini-cli terminal session gist - https://gistpreview.github.io/?17290c1024b0ef7df06e9faa4cb37...

is this just a manual copy/paste into a gist with some HTML/CSS styling, or do you have a custom tool à la amp-code that does this more easily?

simonw
5d ago
1 reply
I used this tool: https://tools.simonwillison.net/terminal-to-html

I made a video about building that here: https://simonwillison.net/2025/Oct/23/claude-code-for-web-vi...

It works much better with Claude Code and Codex CLI because they don't mess around with scrolling in the same way as Gemini CLI does.

sorcercode
5d ago
very cool. frequently, i want to share my prompt + session output; this will make that super easy! thanks again for sharing!
ilyakaminsky
5d ago
I use Gemini CLI on a daily basis. It used to crash often and I'd lose the chat history. I found this tool called ai-cli-log [1] and it does something similar out of the box. I don't run Gemini CLI without it.

[1] https://github.com/alingse/ai-cli-log

minimaxir
5d ago
I just merged the PR and pushed 0.3.1 to PyPI. I also added README documentation and allowed for a `gemimg` entrypoint to the CLI via project.scripts as noted elsewhere in the thread.
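
For anyone wanting the same setup in their own package, a minimal sketch of what such a pyproject.toml entry typically looks like; the `gemimg.cli:main` target is an assumed module:function path, not confirmed from the repo:

  [project.scripts]
  # Assumed entry point; installing the package then exposes a `gemimg` command.
  gemimg = "gemimg.cli:main"
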
peetle
5d ago
2 replies
In my own experience, nano banana still has the tendency to:

- make massive, seemingly random edits to images

- adjust image scale

- make very fine-grained but pervasive detail changes, obvious in an image diff

For instance, I have found that nano-banana will sporadically add a (convincing) fireplace to a room or a new garage behind a house. This happens even with explicit "ALL CAPS" instructions not to do so. It happens sporadically, even when the temperature is set to zero, and makes it impossible to build a reliable app.

Has anyone had a better experience?

andblac
5d ago
1 reply
The "ALL CAPS" part of your comment got me thinking. I imagine most LLMs understand the subtle meanings of upper-case text depending on context. But, as I understand it, ALL CAPS text will tokenize differently than lowercase text. Is that right? In that case, won't upper case be harder for most models to understand and follow, since it's less common in datasets?
minimaxir
5d ago
1 reply
There's more than enough ALL CAPS text in the corpus of the entire internet, and enough semantic context associated with it for it to be read as the imperative voice.
miohtama
5d ago
1 reply
Shouldn't all caps be normalized to the same tokens as lowercase? There are no separate tokens for all caps and lowercase in Llama, or at least there weren't in the past.
minimaxir
5d ago
Looking at the tokenizer for the older Llama 2 model, it does have capital letters in its vocabulary: https://huggingface.co/meta-llama/Llama-2-7b-hf
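
One way to see this for yourself: a sketch using the openly downloadable GPT-2 tokenizer (Llama 2's is gated), so the exact splits are illustrative rather than a claim about Llama's vocabulary:

  # BPE vocabularies contain distinct upper- and lower-case entries,
  # so the same sentence tokenizes differently in ALL CAPS.
  from transformers import AutoTokenizer  # pip install transformers

  tok = AutoTokenizer.from_pretrained("gpt2")
  print(tok.tokenize("do not modify the characters"))   # common lowercase tokens
  print(tok.tokenize("DO NOT MODIFY THE CHARACTERS"))   # different, often more, tokens
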
symisc_devel
5d ago
I work on the PixLab prompt based photo editor (https://editor.pixlab.io), and it follows exactly what you type with explicit CAPS.
mFixman
5d ago
5 replies
The author overlooked an interesting error in the second skull pancake image: the strawberry is in the right eye socket (to the left of the image), and the blackberry is in the left eye socket (to the right of the image)!

This looks like it's caused by 99% of the relative directions in image descriptions being given from the viewer's point of view, and by 99% of the ones that aren't referring to a human rather than a skull-shaped pancake.

martin-adams
5d ago
1 reply
I picked up on that also. I feel that a lot of humans would also get confused about whether you mean the eye on the left, or the subject's left eye.
Closi
5d ago
3 replies
To be honest, this is the sort of thing Nano Banana is weak at, in my experience. It's absolutely amazing, but it doesn't understand left/right/up/down/shrink this/move this/rotate this, etc.

See the link below, which reruns the same prompts as the article and demonstrates that this is a model weakness and not just a language ambiguity:

https://gemini.google.com/share/a024d11786fc

ffsm8
5d ago
Mmh, in my experience you need to discard the session and rewrite the failing prompt instead of continuing and correcting on failures. Once errors occur, you've basically introduced a poison pill which will continuously make things go haywire. Spelling out what the model did wrong is the most destructive thing you can do, at least in my experience.
basch
5d ago
to the point where you can say "raise the left arm" and then "raise the right arm" and get the same image with the same arm raised.
astrange
5d ago
Almost no image/video models can do "upside-down" either.
jonas21
5d ago
1 reply
I am a human, and I would have done the same thing as Nano Banana. If the user had wanted a strawberry in the skull's left eye, they should've said, "Put a strawberry in its left eye socket."
kjeksfjes
5d ago
1 reply
Exactly what I was thinking too. I'm a designer, and I'm used to receiving feedback and instructions. "The left eye socket" would to me refer to what I currently see in front of me, while "its left eye socket" instantly shifts the perspective from me to the subject.
bear141
5d ago
1 reply
I find this interesting. I've always described things from the user's point of view. Like the left side of a car: regardless of who is looking at it from what direction, it's the driver's side. To me, this would include a body.
Jolter
4d ago
Spend some time at sea, learn why a ship has no right or left side.
minimaxir
5d ago
1 reply
I admit I missed this, which is particularly embarrassing because I point out this exact problem with the character JSON later in the post.

For some offline character JSON prompts I ended up adding an additional "any mentions of left and right are from the character's perspective, NOT the camera's perspective" to the prompt, which did seem to improve success.

frumiousirc
4d ago
1 reply
The lack of proper indentation (which you noted) in the Python fib() examples was even more apparent. The fact that both AIs you tested failed in the same way is interesting. I've not played with image generation; is this type of failure endemic?
minimaxir
4d ago
My hunch in that case is that the composition of the image implied left-justified text which overwrote the indentation rule.
zulban
4d ago
Extroverts tend to expect directions from the perspective of the skull. Introverts tend to expect their own perspective for directions. It's a psychology thing, not an error.
sib
5d ago
Came to make exactly the same comment. It was funny that the author specifically said that Nano Banana got all five edit prompts correct, rather than noting this discrepancy, which could be argued either way (although I think the "right eye" of a skull should be interpreted with respect to the skull's POV.)
satvikpendem
5d ago
1 reply
For images of people generated from scratch, Nano Banana always adds a background blur; it can't seem to create more realistic or candid images such as those taken with a point-and-shoot or smartphone. Has anyone solved this sort of issue? It seems to work all right if you give it an existing image to edit, however. I saw some other threads online about it, but I didn't see anyone come up with solutions.
kridsdale1
5d ago
2 replies
Maybe try including “f/16” or “f/22” as those are likely to be in the training set for long depth of field photos.
astrange
5d ago
1 reply
Those are rarely in the captions for the image. They'd have to extract the EXIF from photos and include it in recaptioning, which they should be doing, but I doubt they thought about it.
efskap
5d ago
Photo sites like Flickr do extract EXIF data and show it next to the image, but who knows if the scraping picked them up.

Looks like specific f-stops don't actually make a difference for stable diffusion at least: https://old.reddit.com/r/StableDiffusion/comments/1adgcf3/co...

satvikpendem
5d ago
I tried that but they don't seem to make much difference for whatever reason, you still can't get a crisp shot such as this [0] where the foreground and background details are all preserved (linked shot was taken with an iPhone which doesn't seem to do shallow depth of field unless you use their portrait mode).

[0] https://www.lux.camera/content/images/size/w1600/2024/09/IMG...

Genego
5d ago
1 reply
I have been generating a few dozen images per day for storyboarding purposes. The more I try to perfect it, the easier it becomes to control these outputs and even keep the entire visual story, as well as its characters, consistent over a few dozen different scenes, while even controlling the time of day throughout the story. I am currently working with 7-layer prompts to control for environment, camera, subject, composition, light, colors, and overall quality (it might be overkill, but it's also experimenting).

I also created a small editing suite for myself where I can draw bounding boxes on images when they aren't perfect, and have them fixed: either just with a prompt, or by feeding them to Claude as an image and then having it write the prompt to fix the issue for me (as a workflow on the API). It's been quite a lot of fun to figure out what works. I am incredibly impressed by where this is all going.

Once you do have good storyboards, you can easily do start-to-end GenAI video generation (hopping from scene to scene) to bring them to life and build your own small visual animated universes.
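
As an illustration of what layered prompting like this might look like in code, here's a hypothetical sketch; the layer names come from the comment above, but the template structure is an assumption, not the commenter's actual tooling:

  # Hypothetical 7-layer prompt template for keeping storyboard scenes consistent.
  LAYERS = ["environment", "camera", "subject", "composition", "light", "colors", "quality"]

  def build_prompt(scene: dict) -> str:
      # Join per-layer descriptions into one generation prompt, in a fixed order,
      # so shared layers (e.g. time of day under "light") stay identical across scenes.
      return " ".join(f"{layer}: {scene[layer]}." for layer in LAYERS if layer in scene)

  prompt = build_prompt({
      "environment": "a rain-slicked neon alley",
      "camera": "35mm lens, low angle",
      "light": "dusk, sodium-vapor glow",
  })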

taylorhughes
5d ago
3 replies
We use nano banana extensively to build video storyboards, which we then turn into full motion video with a combination of img2vid models. It sounds like we're doing similar things, trying to keep images/characters/setting/style consistent across ~dozens of images (~minutes of video). You might like the product depending on what you're doing with the outputs! https://hypernatural.ai
roywiggins
5d ago
5 replies
Your "Dracula" character is possibly the least vampiric Dracula I've ever seen tbh
observationist
5d ago
1 reply
I agree. Bruhcula? Something like that. He's a vampire, but also models and does stunts for Baywatch - too much color and vitality. Joan of Arc is way more pale.

Maybe a little mode collapse away from pale ugliness, not quite getting to the hints of unnatural and corpse-like features of a vampire - interesting what the limitations are. You'd probably have to spend quite a lot of time zeroing in, but Google's image models are supposed to have allowed smooth traversal of those feature spaces generally.

ineedasername
5d ago
Flux Kontext does pretty well also, for modifications. Though I've otherwise found the Flux models at times stubbornly locked into certain compositions, requiring a ControlNet to break, where other models have been more pliable, though with other trade-offs.
Conscat
5d ago
3 replies
That looks exactly like the photos on a Spirit Halloween costume.
HaZeust
5d ago
People pay consulting firms good money to be told their ideal customer so plainly!
Teelo
5d ago
I'm in tears. Clicked to check out Dracula and sure enough, it's a spot-on Spirit Halloween/Dollar Tree Dracula.
flir
5d ago
The Sherlock Holmes is heavily influenced by Cucumber Patch.
beepbooptheory
5d ago
1 reply
Having a Statue of Liberty character available is for some reason so funny to me.
somenameforme
5d ago
1 reply
Makes a lot of sense for some short kid's skit teaching them about the branches of government or whatever. One could also get more creative with the Statue of Liberty and Joan of Arc.
happymellon
5d ago
> Create me a video of Joan of Arc fighting the Statue of Liberty in the style of Shadow of the Colossus.

I see where you are coming from...

qmmmur
5d ago
4 replies
If anything, the ubiquity of AI has just revealed how many people have zero taste. It also highlights the important role these human-centred jobs played in keeping those people from contributing to the surface of any artistic endeavour in "culture".
prox
5d ago
2 replies
There is a reason people (used to) study art and train for years. Easy art is often no art, because you need that effort and investment, and the learned artistic context, to understand and appreciate it.

Which is not to say don't be creative; I applaud all creativity, but I also urge being very critical of what you are doing.

bestthrowaway
5d ago
1 reply
I've been playing around with T2I/I2V generation to make some NSFW stuff of video-game characters using ComfyUI.

It's pretty easy to get something decent. It's really hard to get something good. I share my creations with some close friends and some are like "that's hot!" but are too fixated on breasts to realize that the lighting or shadow is off. Other friends do call out the bad lighting.

You may be like "it's just porn, why care about consistent lighting?" and the answer for me is that I'm doing all this to learn how everything works. How to fine tune weights, prompts, using IP Adapter, etc. Once I have a firm understanding of this stuff, then I will probably be able to make stuff that's actually useful to society. Unlike that coke commercial.

sam345
4d ago
1 reply
You can do better than porn, which isn't very useful to society.
CamperBob2
4d ago
1 reply
As opposed to what you're doing at the moment, living your best life here on social media.
abustamam
4d ago
I think it's a fair comment though. Porn isn't really useful to society (one could argue that it's actually detrimental to society but that's a separate topic).

But what I understood from parent comment is that they just do it for fun, not necessarily to be a boon to society. And then if it comes with new skills that actually can benefit society, then that's a win.

Granted, the commenter COULD play around with SFW stuff but if they're just doing it for fun then that's still not benefiting society either, so either way it's a wash. We all have fun in our own ways.

abustamam
5d ago
2 replies
Reminds me of that AI Coke commercial. I personally didn't notice how shitty it was until I read about it online. (I actually didn't even see the commercial until I read about it online.)

But it's impressive that this billion-dollar company didn't have one single person say "hey, it's shitty, make it better."

scotty79
5d ago
1 reply
Everything's shitty in its own way. Modern (or even golden-age-era) movies with top production values are the equivalent of Egyptian wall paintings. They have a specific style, a specific way of showing things. Over the years, movie artists figured out in what specific way movies should be shitty, and audiences were taught that as a canon.

AI is shitty in its own new, unique ways. And people don't like new. They want the old, polished shittiness they are used to.

abustamam
5d ago
While I agree that all art is kinda shitty in its own way (IMDB has sections dedicated to breaks in continuity and stuff like that), experienced filmmakers would be good at hiding the shittiness (maybe with a really clever action sequence or something).

It's only a matter of time before we get experienced AI filmmakers. I think we already have them, actually. It's clear that Coke does not employ them though.

astrange
4d ago
It's an intentional new-media ad, so I think they're embracing the flaws rather than trying to hide them.

Also, since it's new media, nobody knows how to budget time or money to fix the flaws. It could be infinitely expensive.

scotty79
5d ago
3 replies
So in the end it turns out that art was never so much about creativity as about gatekeeping. And "everyone can make art" was just a fake facade, because not really.
DrewADesign
4d ago
2 replies
Of course everyone can make art. Toddlers make art. The hard truth is that acquiring good technical art skills, be they visual, musical, literary, or anything else, is like getting stronger: many people who want to do it are too lazy or undisciplined to put in the daily work required. You might be starting too late (maybe post-middle-age) or not have the time to become an exceptional artist, but most art that people like wasn't made by exceptional artists; there are a lot more strong people than professional athletes or Olympians. You don't even need a gym membership or weights, and there's limitless free information about how to do it online. Nobody is stopping anyone from doing it. Just as many, if not most, gym memberships are paid for but unused after, like, the first month, many people try drawing for a little while, get frustrated that it's so difficult to learn, and then give up. The gatekeeping argument is an asinine excuse people make to blame others for their own lack of discipline.
scotty79
4d ago
2 replies
> Of course everyone can make art. Toddlers make art.

That's my entire point. Artists were fine with everybody making "art" as long as everybody except them (with their hard-fought skill and dedication) achieved toddler-level output quality. As soon as everybody could truly get even close to the level of actual art, not toddler art, suddenly there's a horrible problem with all the amateur artists using the tools available to them to make their "toddler" art.

cvwright
4d ago
1 reply
Well, but then they spent 100 years telling us that the toddler stuff was the good stuff, just as long as it was created by a "real artist".
DrewADesign
4d ago
Making value statements about art is pretty much exclusively the realm of art critics and art historians. They're no more representative of artists than general historians are representative of politicians and soldiers.
DrewADesign
4d ago
1 reply
Most artists don’t give a flying fuck about what you do on your own. Seriously! They really don’t. What they care about is having their work ripped off so for-profit companies can kill the market for their hard-won skills with munged-up derivatives.

Folks in tech generally have very limited exposure to the art world: fan art communities online, Reddit subs, YouTubers, etc. That's more representative of internet culture than of the art world, no more representative of artists than X politics is of voters. People have real grievances here, and you are not a victim of the world's artists. Most artists also don't care about online art communities or what you think about them. Not even a little bit.

scotty79
4d ago
1 reply
> you are not a victim of the world’s artists

I will be if they manage to slow down development of AI even by a smidgen.

> Most artists also don’t care about online art communities or what you think about them. Not even a little bit.

Fully agree. They care about whether there's going to be anyone willing to buy their stuff from them. And not-toddler art is a real competition for them. So they are super against everybody making it.

DrewADesign
4d ago
Well drat, you’ve exposed all of us, from art directors to VFX artists to fine art painters to singer-songwriters to graphic designers to game designers to symphony cellists as a monolithic glob of petty, transactional rakes. Fortunately, everyone is an artist now, so you can make your own output to feed to models and leave our work out of it entirely! It clearly has no value so nobody should be mad about going without it. Problem solved!
KineticLensman
4d ago
2 replies
Classic gatekeeping quote: "Everyone has a book in them, but in most cases that's where it should stay"
DrewADesign
4d ago
Hitchens was, first and foremost, a critic. Most of the so-called gatekeeping that people accuse artists of is actually born from art criticism: a completely different group of people, rarely as popular among artists as they are among people who like to feel cool about looking at art.
CamperBob2
4d ago
I prefer Stephen King's version: something like "Everybody has four crappy books in them. Get them done and out of the way as soon as possible."
Cthulhu_
5d ago
Everyone can make art, but whether it's considered good is another matter.
vasco
5d ago
Everyone can, don't worry, art people are snobs even with their own. Now they can just complain about the plebes doing it wrong ALSO.
friendzis
5d ago
The ubiquity of AI has just revealed that there are tons of grifters willing to release the sloppiest thing ever if they thought it could make some money. They would refrain from that if they had at least a glimmer of taste.
Libidinalecon
4d ago
It is really no different than music. Millions of people play guitar but most are not worth listening to or deserving of an audience.

Imagine if you gave everyone a free guitar and people just started posting their electric guitar noodlings on social media after playing for 5 minutes.

It is not a judgement on the guitar. If anything it is a judgement on social media, and on the stupidity of the social media users who get worked up about someone creating "slop" after playing guitar for 5 minutes.

What did you expect them to sound like, Steve Vai?

tincholio
5d ago
He looks like Dracula on LinkedIn
nolroz
5d ago
1 reply
The website lets you type in an entire prompt, then tells you to log in, then dumps your prompt and leaves you with nothing. Lame.
scotty79
5d ago
1 reply
I noticed ChatGPT and others do exactly the same once you run out of anonymous usage. Insanely annoying.
jdhjn6hhh
5d ago
4 replies
HN does that too. You've typed out a long response; oh, sorry, you're posting too fast. Please slow down.

It's intentionally hostile and inconsiderate.

insane_dreamer
4d ago
At least on HN you can go back in your browser and restore the page before submission, with your post still in the box.

But it would be _much_ better if, when you hit reply, it gave you the "posting too fast" message before you spend the time writing it up.

idiotsecant
4d ago
That's because you're in the bad user doghouse.
ta12653421
4d ago
Rule #1 when typing longer texts into web forms/textboxes: ALWAYS do a CTRL+A, CTRL+C before you click submit.
setr
5d ago
You don’t lose the message though, so it’s infinitely less annoying
Genego
5d ago
Yes we are definitely doing the same! For now I’m just familiarizing myself in this space technically and conceptually. https://edwin.genego.io/blog
layer8
5d ago
> It’s one of the best results I’ve seen for this particular test, and it’s one that doesn’t have obvious signs of “AI slop” aside from the ridiculous premise.

It’s pretty good, but one conspicuous thing is that most of the blueberries are pointing upwards.

BoredPositron
5d ago
The kicker for Nano Banana is not prompt adherence, which is a really nice-to-have, but the fact that it works either in pixel space or with really low spatial scaling. It's the only model that doesn't kill your details through a VAE encode/decode.
pfortuny
5d ago
Well, I just asked it for a 13-sided irregular polygon (is it that hard?)…

https://imgur.com/a/llN7V0W

sejje
5d ago
>> "The image style is definitely closer to Vanity Fair (the photographer is reflected in his breastplate!)"

I didn't expect that. I would have definitely counted that as a "probably real" tally mark if grading an image.

ainiriand
5d ago
The blueberry and strawberry are not actually where they were prompted to be.
roywiggins
5d ago
Another thing it can't do is remove reflections in windows; the result is nearly a no-op.
comex
5d ago
I tried asking for a shot from a live-action remake of My Neighbor Totoro. This is a task I’ve been curious about for a while. Like Sonic, Totoro is the kind of stylized cartoon character that can’t be rendered photorealistically without a great deal of subjective interpretation, which (like in Sonic’s case) is famously easy to get wrong even for humans. Unlike Sonic, Totoro hasn’t had an actual live-action remake, so the model would have to come up with a design itself. I was wondering what it might produce – something good? something horrifying? Unfortunately, neither; it just produced a digital-art style image, despite being asked for a photorealistic one, and kept doing so even when I copied some of the keyword-stuffing from the post. At least it tried. I can’t test this with ChatGPT because it trips the copyright filter.
jdc0589
5d ago
I don't feel like I should search for "nano banana" on my work laptop
insane_dreamer
5d ago
I haven't paid much attention to image generation models (not my area of interest), but these examples are shockingly good.

73 more comments available on Hacker News

ID: 45917875 · Type: story · Last synced: 11/16/2025, 9:42:57 PM
