[HN Gopher] Nano Banana Pro
       ___________________________________________________________________
        
       Nano Banana Pro
        
       Author : meetpateltech
       Score  : 1229 points
       Date   : 2025-11-20 15:04 UTC (1 days ago)
        
 (HTM) web link (blog.google)
 (TXT) w3m dump (blog.google)
        
       | meetpateltech wrote:
       | Developer Blog:
       | https://blog.google/technology/developers/gemini-3-pro-image...
       | 
       | DeepMind Page: https://deepmind.google/models/gemini-image/pro/
       | 
       | Model Card: https://storage.googleapis.com/deepmind-media/Model-
       | Cards/Ge...
       | 
       | SynthID in Gemini: https://blog.google/technology/ai/ai-image-
       | verification-gemi...
        
       | varbhat wrote:
       | Can anyone please explain me the invisible watermarking mentioned
       | in the said promo?
        
         | nickdonnelly wrote:
         | It's called Synth ID. It's a watermark that proves an image was
         | generated by AI.
         | 
         | https://deepmind.google/models/synthid/
        
           | VladVladikoff wrote:
           | Super important for Google as a search engine so they can
           | filter out and downrank AI generated results. However I
           | expect there are many models out there which don't do this,
           | that everyone could use instead. So in the end a "feature"
           | like this makes me less likely to use their model because I
           | don't know how Google will end up treating my blog post if I
           | decide to include an AI generated or AI edited image.
        
             | Filligree wrote:
             | It's required by EU regulations. Any public generator that
             | doesn't do it, is in violation of that unless it's entirely
             | inaccessible from the EU...
             | 
             | But of course there's no way to enforce it on local
             | generation.
        
               | Aloisius wrote:
               | The EU didn't define any specific method of watermarking
               | nor does it need to be tamper resistant. Even if they had
               | specified it though, it's easy to remove watermarks like
               | SynthID.
        
               | VladVladikoff wrote:
               | I have been curious about this myself. I tried a few
               | basic stenography detection type tools to look for
               | watermarks but didn't find anything. Are you aware of any
               | tools that do what you are suggesting?
        
           | airstrike wrote:
           | So whoever creates AI content needs to voluntarily adopt this
           | so that Google can sell "technology" for identifying said
           | content?
           | 
           | Not sure how that makes any sense
        
           | raincole wrote:
           | *by Google's AI.
        
             | zamadatix wrote:
             | By anybody's AI using SynthID watermarking, not just
             | Google's AI using SynthID watermarking (it looks like
             | partnership is not open to just anyone though, you have to
             | apply).
        
           | jsheard wrote:
           | In theory, at least. In practice maybe not.
           | 
           | https://i.imgur.com/WKckRmi.png
        
             | raincole wrote:
             | ?
             | 
             | Google doesn't claim that Gemini would call SynthID
             | detector at this point.
             | 
             | Edit: well they actually do. I guess it is not rolled out
             | yet.
        
               | jsheard wrote:
               | From the OP:
               | 
               | > Today, we are putting a powerful verification tool
               | directly in consumers' hands: you can now upload an image
               | into the Gemini app and simply ask if it was generated by
               | Google AI, thanks to SynthID technology. We are starting
               | with images, but will expand to audio and video soon.
               | 
               | Re-rolling a few times got it to mention trying SynthID,
               | but as a false negative, assuming it actually did the
               | check and isn't just bullshitting.
               | 
               | > No Digital Watermark Detected: I was unable to detect
               | any digital watermarks (such as Google's SynthID) that
               | would definitively label it as being generated by a
               | specific AI tool.
               | 
               | This would be a lot simpler if they just exposed the
               | detector directly, but apparently the future is coaxing
               | an LLM into doing a tool call and then second guessing
               | whether it actually ran the tool.
        
         | KolmogorovComp wrote:
         | Has anyone found out how to use Synth ID? If I want to if some
         | images are AI, how can I do?
        
       | volkk wrote:
       | SynthID seems interesting but in classic Google fashion, I
       | haven't a clue on how to use it and the only button that exists
       | is join a waitlist. Apparently it's been out since 2023? Also,
       | does SynthID work only within gemini ecosystem? If so, is this
       | the beginning of a slew of these products with no one standard
       | way? i.e "Have you run that image through tool1, tool2, tool3,
       | and tool4 before deciding this image is legit?"
       | 
       | edit: apparently people have been able to remove these watermarks
       | with a high success rate so already this feels like a DOA product
        
         | dragonwriter wrote:
         | > SynthID seems interesting but in classic Google fashion, I
         | haven't a clue on how to use it and the only button that exists
         | is join a waitlist. Apparently it's been out since 2023? Also,
         | does SynthID work only within gemini ecosystem? If so, is this
         | the beginning of a slew of these products with no one standard
         | way
         | 
         | No, its not the beginning, multiple different watermarking
         | standards, watermark checking systems, and, of course,
         | published countermeasures of various effectiveness for most of
         | them, have been around for a while.
        
         | dieortin wrote:
         | Do you have a source on people being able to remove SynthID
         | watermarks?
        
           | volkk wrote:
           | just another comment here, i happened to believe it
        
       | Razengan wrote:
       | Can Google Gemini 3 check Google Flights for live ticket prices
       | yet?
       | 
       | (The Gemini 3 post has a million comments too many to ask this
       | now)
        
         | jeffbee wrote:
         | https://gemini.google.com/share/19fed9993f06
        
           | Razengan wrote:
           | Ah thanks, might have to make a throwaway account just for
           | that.
           | 
           | Gemini 2 still goes "While I cannot check Google Flights
           | directly, I can provide you with information based on current
           | search results..." blah blah
        
       | hbn wrote:
       | I wouldn't trust any of the info in those images in the first
       | carousel if I found them in the wild. It looks like AI image slop
       | and I assume anyone who thinks those look good enough to share
       | did not fact check any of the info and just prompted "make an
       | image with a recipe for X"
        
         | matsemann wrote:
         | Yeah, the weird yellow tint, the kerning/fonts etc still
         | immediately gives it away.
         | 
         | But I wouldn't mind being easily able to make infographics
         | _like_ these, I 'd just like to supply the textual and factual
         | content myself.
        
           | kccqzy wrote:
           | I would do the same. But the reason for that is because I'm
           | terrible at drawing and digital art, so I would need some
           | help with the graphics in an infographics anyways. I don't
           | really need help with writing text or typesetting the text. I
           | feel like if I were better at creating art I would not want
           | AI involved at all.
        
       | jpadkins wrote:
       | really missed an opportunity to name it micro banana (or milli
       | banana). Personally I can't wait for mega banana next year.
        
       | fouronnes3 wrote:
       | I guess the true endgame of AI products is naming them. We still
       | have quite a way to go.
        
         | awillen wrote:
         | Honestly I give Google credit for realizing that they had
         | something that people were talking about and running with it
         | instead of just calling it gemini-image-large-with-text-pro
        
           | echelon wrote:
           | They tried calling it gemini-2.5-whatever, but social media
           | obsessed over the name "Nano Banana", which was just its
           | codename that got teased on Twitter for a few weeks prior to
           | launch.
           | 
           | After launch, Google's public branding for the product was
           | "Gemini" until Google just decided to lean in and fully adopt
           | the vastly more popular "Nano Banana" label.
           | 
           | The public named this product, not Google. Google's internal
           | codename went virally popular and outstaged the official
           | name.
           | 
           | Branding matters for distribution. When you install yourself
           | into the public consciousness with a name, you'd better use
           | the name. It's free distribution. You own human wetware
           | market share for free. You're alive in the minds of the
           | public.
           | 
           | Renaming things every human has brand recognition of, eg. HBO
           | -> Max, is stupid. It doesn't matter if the name sucks.
           | ChatGPT as a name sucks. But everyone in the world knows it.
           | 
           | This will forever be Nano Banana unless they deprecate the
           | product.
        
             | mupuff1234 wrote:
             | I doubt majority of the public knows what "nano banana" or
             | even "Gemini" means, they probably just call it "Google
             | AI".
             | 
             | And I'm willing to bet eventually Google will rename Gemini
             | to be something like Google AI or roll it back into Google
             | assistant.
        
         | timenotwasted wrote:
         | We just need a new AI for that.
        
           | riskable wrote:
           | Need a name for something? Try our new Mini Skibidi model!
        
             | gorbot wrote:
             | Also introducing the amazing 6-7 pro model
        
         | b33j0r wrote:
         | This has always been the hardest problem in computer science
         | besides "Assume a lightweight J2EE distribution..."
        
         | jedberg wrote:
         | I was at a tech conference yesterday, and I asked someone if
         | they had tried nano banana. They looked at me like I was crazy.
         | These names aren't helping! (But honestly I love it, easier to
         | remember than Gemini-2.whatever.
        
         | mlmonkey wrote:
         | _There are only 2 hard problems in computer science: cache
         | coherency, naming things and off by 1 errors..._
        
       | guzik wrote:
       | Cool, but it's still unusable for me. Somehow all my prompts are
       | violating the rules, huh?
        
         | Filligree wrote:
         | Can you give us an example?
        
           | guzik wrote:
           | 'athlete wearing a health tracker under a fitted training
           | top'
           | 
           | Failed to generate content: permission denied. Please try
           | again.
        
             | raincole wrote:
             | It's not the censorship safeguard. Permission denied means
             | you need a paid API key to use it. It's confusing, I know.
             | 
             | If you triggered the safeguard it'll give you the typical
             | "sorry, I can't..." LLM response.
        
         | mudkipdev wrote:
         | Are you asking it to recreate people?
        
           | guzik wrote:
           | No, and no nudity, no reference images. Example: 'athlete
           | wearing a health tracker under a fitted training top'
        
         | ASinclair wrote:
         | Have some examples?
        
         | gdulli wrote:
         | In 25 years we'll reminisce on the times when we could find a
         | human artist who wouldn't impose Google's or OpenAI's rules on
         | their output.
        
           | guzik wrote:
           | the open-source models will catch up, 100%
        
             | raincole wrote:
             | Open models don't seem to be catching up the LLM-based
             | image gen at this point.
             | 
             | ChatGPT's imagegen has been released for half a year but
             | there isn't anything _remotely_ similar to it in the open
             | weight realm.
        
               | recursive wrote:
               | Give it another 50 years. Or maybe 10. Or 5? But there's
               | no way it won't catch up.
        
       | eminence32 wrote:
       | > Generate better visuals with more accurate, legible text
       | directly in the image in multiple languages
       | 
       | Assuming that this new model works as advertised, it's
       | interesting to me that it took this long to get an image
       | generation model that can reliably generate text. Why is text
       | generation in images so hard?
        
         | Filligree wrote:
         | It's not necessarily harder than other aspects. However:
         | 
         | - It requires an AI that actually understands English, I.e. an
         | LLM. Older, diffusion-only models were naturally terrible at
         | that, because they weren't trained on it.
         | 
         | - It requires the AI to make no mistakes on image rendering,
         | and that's a high bar. Mistakes in image generation are so
         | common we have memes about it, and for all that hands generally
         | work fine now, the rest of the picture is full of mistakes you
         | can't tell are mistakes. Entirely impossible with text.
         | 
         | Nano Banana Pro seems to somewhat reliably produce entire
         | pictures without any mistakes at all.
        
         | tobr wrote:
         | As a complete layman, it seems obvious that it should be hard?
         | Like, text is a type of graphic that needs to be coherent both
         | in its detail and its large structure, and there's a very small
         | amount of variation that we don't immediately notice as strange
         | or flat out incorrect. That's not true of most types of
         | imagery.
        
         | DesertVarnish wrote:
         | Largely but not entirely a data problem; specifically poor
         | captioning. High quality captioning makes _such_ a big
         | difference.
        
       | saretup wrote:
       | Interesting they didn't post any benchmark results -
       | lmarena/artificial analysis etc. I would've thought they'd be
       | testing it behind the scenes the same way they did with Gemini 3.
        
       | maliker wrote:
       | I wonder how hard it is to remove that SynthID watermark...
       | 
       | Looks like: "When tested on images marked with Google's SynthID,
       | the technique used in the example images above, Kassis says that
       | UnMarker successfully removed 79 percent of watermarks." From
       | https://spectrum.ieee.org/ai-watermark-remover
        
         | mudkipdev wrote:
         | We know what it looks like at least
         | https://www.reddit.com/r/nanobanana/comments/1o1tvbm/nano_ba...
        
       | willsmith72 wrote:
       | > Starting to roll out in the Gemini API and Google AI Studio
       | 
       | > Rolling out globally in the Gemini app
       | 
       | wanna be any more vague? is it out or not? where? when?
        
         | koakuma-chan wrote:
         | I don't see in the ai studio
        
           | WawaFin wrote:
           | I see it but when I use it says "Failed to count tokens,
           | model not found: models/gemini-3-pro-image-preview. Please
           | try again with a different model."
        
         | Archonical wrote:
         | Phased rollouts are fairly common in the industry.
        
         | ZeroCool2u wrote:
         | Already available in the Gemini web app for me. I have the
         | normal Pro subscription.
        
         | meetpateltech wrote:
         | Currently, it's rolling out in the Gemini app. When you use the
         | "Create image" option, you'll see a tooltip saying "Generating
         | image with Nano Banana Pro."
         | 
         | And in AI Studio, you need to connect a paid API key to use it:
         | 
         | https://aistudio.google.com/prompts/new_chat?model=gemini-3-...
         | 
         | > Nano Banana Pro is only available for paid-tier users. Link a
         | paid API key to access higher rate limits, advanced features,
         | and more.
        
       | myth_drannon wrote:
       | Adobe's stock is down 50% from last year's peak. It's humbling
       | and scary that entire industries with millions of jobs evaporate
       | in a matter of few years.
        
         | riskable wrote:
         | On the contrary, it's encouraging to know that maliciously
         | greedy companies like Adobe are getting screwed for being so
         | malicious and greedy :thumbsup:
         | 
         | I had second thoughts about this comment, but if I stopped
         | typing in the middle of it, I would've had to pay a
         | cancellation fee.
        
           | creata wrote:
           | Adobe, for all their faults, can hardly be said to be more
           | malicious or greedy than Google.
           | 
           | Adobe, at least, makes money by selling software. Google
           | makes money by capturing eyeballs; only incidentally does
           | anything they do benefit the user.
        
             | s1mon wrote:
             | Adobe makes money by renting software, not selling it.
             | There are many creatives that would disagree with your
             | ranking of who is more malicious or greedy.
        
         | cj wrote:
         | There's 2 takes here: First take is the AI is replacing jobs by
         | making existing workforce more efficient.
         | 
         | The 2nd take is AI is costing companies so much money, that
         | they need to cut workforce to pay for their AI investments.
         | 
         | I'm inclined to think the latter is represents what's happening
         | more than the former.
        
       | theoldgreybeard wrote:
       | The interesting tidbit here is SynthID. While a good first step,
       | it doesn't solve the problem of AI generated content NOT having
       | any kind of watermark. So we can prove that something WITH the ID
       | is AI generated but we can't prove that something without one
       | ISN'T AI generated.
       | 
       | Like it would be nice if all photo and video generated by the big
       | players would have some kind of standardized identifier on them -
       | but now you're left with the bajillion other "grey market" models
       | that won't give a damn about that.
        
         | morkalork wrote:
         | Labelling open source models as "grey market" is a heck of a
         | presumption
        
           | theoldgreybeard wrote:
           | It's why I used "scare quotes".
        
           | bigfishrunning wrote:
           | Every model is "grey market". They're all trained on data
           | without complying with any licensing terms that may exist, be
           | they proprietary or copyleft. Every major AI model is an
           | instance of IP theft.
        
         | markdog12 wrote:
         | I asked Gemini "dymamic view" how SynthID works:
         | https://gemini.google.com/share/62fb0eb38e6b
        
         | slashdev wrote:
         | If there was a standardized identifier, there would be software
         | dedicated to just removing it.
         | 
         | I don't see how it would defeat the cat and mouse game.
        
           | paulryanrogers wrote:
           | It doesn't have to be perfect to be helpful.
           | 
           | For example, it's trivial to post an advertisement without
           | disclosure. Yet it's illegal, so large players mostly comply
           | and harm is _less_ likely on the whole.
        
             | slashdev wrote:
             | You'd need a similar law around posting AI photos/videos
             | without disclosure. Which maybe is where we're heading.
             | 
             | It still won't prevent it, but it would prevent large
             | players from doing it.
        
           | aqme28 wrote:
           | I don't think it will be easy to just remove it. It's built
           | into the image and thus won't be the same every time.
           | 
           | Plus, any service good at reverse-image search (like Google)
           | can basically apply that to determine whether they generated
           | it.
           | 
           | There will always be a way to defeat anything, but I don't
           | see why this won't work for like 90% of cases.
        
             | VWWHFSfQ wrote:
             | There will be a model trained to remove synthids from
             | graphics generated by other models
        
             | flir wrote:
             | > I don't think it will be easy to just remove it.
             | 
             | Always has been so far. You add noise until the signal gets
             | swamped. In order to remain imperceptible it's a tiny
             | signal, so it's easy to swamp.
        
             | famouswaffles wrote:
             | It's an image. There's simply no way to add a watermark to
             | an image that's both imperceptible to the user and non-
             | trivial to remove. You'd have to pick one of those options.
        
               | fwip wrote:
               | I'm not sure that's correct. I'm not an expert, but
               | there's a lot of literature on digital watermarks that
               | are robust to manipulation.
               | 
               | It may be easier if you have an oracle on your end to say
               | "yes, this image has/does not have the watermark," which
               | could be the case for some proposed implementations of an
               | AI watermark. (Often the use-case for digital watermarks
               | assumes that the watermarker keeps the evaluation tool
               | secret - this lets them find, e.g, people who leak early
               | screenings of movies.)
        
               | aqme28 wrote:
               | That is patently false.
        
               | flir wrote:
               | So, uh... do you know of an implementation that has both
               | those properties? I'd be quite interested in that.
        
               | viraptor wrote:
               | https://arxiv.org/html/2502.10465v1
        
             | rcarr wrote:
             | You could probably just stick your image in another model
             | or tool that didn't watermark and have it regenerate the
             | image as accurately as possible.
        
               | pigpop wrote:
               | Exactly, a diffusion model can denoise the watermark out
               | of the image. If you wanted to be doubly sure you could
               | add noise first and then denoise which should completely
               | overwrite any encoded data. Those are trivial operations
               | so it would be easy to create a tool or service
               | explicitly for that purpose.
        
             | slashdev wrote:
             | It would be like standardizing a captcha, you make a single
             | target to defeat. Whether it is easy or hard is irrelevant.
        
             | dragonwriter wrote:
             | > I don't think it will be easy to just remove it.
             | 
             | No, but model training technology is out in the open, so it
             | will continue to be possible to train models and build
             | model toolchains that just don't incorporate watermarking
             | at all, which is what any motivated actor seeking to
             | mislead will do; the only thing watermarking will do is
             | train people to accept its absence as a sign of
             | reliability, increasing the effectiveness of fakes by
             | motivated bad actors.
        
         | echelon wrote:
         | This watermarking ceremony is useless.
         | 
         | We will always have local models. Eventually the Chinese will
         | release a Nano Banana equivalent as open source.
        
           | dragonwriter wrote:
           | > We will always have local models.
           | 
           | If watermarking becomes a legal mandate, it will inevitably
           | include a prohibition on distributing (and using and maybe
           | even possessing, but the distribution ban is the thing that
           | will have the most impact, since it is the part that is most
           | policable, and most people aren't going to be training their
           | own models, except, of course, the most motivated bad actors)
           | open models that do not include watermarking as a baked-in
           | model feature. So, for _most_ users, it 'll be much less
           | accessible (and, at the same time, it won't solve the
           | problem.)
        
             | ahtihn wrote:
             | I don't see how banning distribution would do anything:
             | distributing pirated games, movies, software is banned in
             | most countries and yet pirated content is trivial to find
             | for anyone who cares.
             | 
             | As long as someone somewhere is publishing models that
             | don't watermark output, there's basically nothing that can
             | stop those models from being used.
        
           | simonw wrote:
           | Qwen-Image-Edit is pretty good already:
           | https://simonwillison.net/2025/Aug/19/qwen-image-edit/
        
             | tezza wrote:
             | Qwen won the latest models round last month...
             | 
             | https://generative-ai.review/2025/09/september-2025-image-
             | ge... (non-pro Nano Banana)
        
         | staplers wrote:
         | have some kind of standardized identifier on them
         | 
         | Take this a step further and it'll be a personal identifying
         | watermark (only the company can decode). Home printers already
         | do this to some degree.
        
           | theoldgreybeard wrote:
           | yeah, personally identifying undetectable watermarks are
           | kindof a terrifying prospect
        
             | overfeed wrote:
             | It is terrifying, but inevitable. Perhaps AI companies
             | flooding the commons with excrement wasn't the best idea,
             | now we all have to suffer the consequences.
        
         | baby wrote:
         | It solves some problems! For example, if you want to run a
         | camgirl website based on AI models and want to also prove that
         | you're not exploiting real people
        
           | echelon wrote:
           | Your use case doesn't even make sense. What customers are
           | clamoring for that feature? I doubt any paying customer in
           | the market for (that product) cares. If the law cares, the
           | law has tools to inquire.
           | 
           | All of this is trivially easy to circumvent ceremony.
           | 
           | Google is doing this to deflect litigation and to preserve
           | their brand in the face of negative press.
           | 
           | They'll do this (1) as long as they're the market leader, (2)
           | as long as there aren't dozens of other similar products -
           | especially ones available as open source, (3) as long as the
           | public is still freaked out / new to the idea anyone can make
           | images and video of whatever, and (4) as long as the signing
           | compute doesn't eat into the bottom line once everyone in the
           | world has uniform access to the tech.
           | 
           | The idea here is that {law enforcement, lawyers, journalists}
           | find a deep fake {illegal, porn, libelous, controversial}
           | image and goes to Google to ask who made it. That only works
           | for so long, if at all. Once everyone can do this and the
           | lookup hit rates (or even inquiries) are < 0.01%, it'll go
           | away.
           | 
           | It's really so you can tell journalists "we did our very
           | best" so that they shut up and stop writing bad articles
           | about "Google causing harm" and "Google enabling the bad
           | guys".
           | 
           | We're just in the awkward phase where everyone is freaking
           | out that you can make images of Trump wearing a bikini, Tim
           | Cook saying he hates Apple and loves Samsung, or the South
           | Park kids deep faking each other into silly circumstances. In
           | ten years, this will be normal for everyone.
           | 
           | Writing the sentence "Dr. Phil eats a bagel" is no different
           | than writing the prompt "Dr. Phil eats a bagel". The former
           | has been easy to do for centuries and required the brain to
           | do some work to visualize. Now we have tools that
           | previsualize and get those ideas as pixels into the brain a
           | little faster than ASCII/UTF-8 graphemes. At the end of the
           | day, it's the same thing.
           | 
           | And you'll recall that various forms of written text - and
           | indeed, speech itself - have been illegal in various times,
           | places, and jurisdictions throughout history. You didn't
           | insult Caesar, you didn't blaspheme the medieval church, and
           | you don't libel in America today.
        
             | shevy-java wrote:
             | > What customers are clamoring for that feature? If the law
             | cares, the law has tools to inquire.
             | 
             | How can they distinguish from real people exploited to AI
             | models autogenerating everything?
             | 
             | I mean right now this is possible, largely because a lot of
             | the AI videos have shortcomings. But imagine in 5 years
             | from now on ...
        
               | krisoft wrote:
               | > How can they distinguish from real people exploited to
               | AI models autogenerating everything?
               | 
               | The people who care don't consume content which even just
               | plausibly looks like real people exploited. They wouldn't
               | consume the content even if you pinky promised that the
               | exploited looking people are not real people. Even if you
               | digitally signed that promise.
               | 
               | The people who don't care don't care.
        
               | dragonwriter wrote:
               | > How can they distinguish from real people exploited to
               | AI models autogenerating everything?
               | 
               | Watermarking by compliant models doesn't help this much
               | because (1) models without watermarking exist and can
               | continue to be developed ( _especially_ if absence of a
               | watermark is treated as a sign of authenticity), so you
               | cannot rely on AI fakery being watermarked, and (2) AI
               | models can be used for video-to-video generation without
               | changing much of the source, so you can 't rely on
               | something _accurately_ watermarked as  "AI-generated" not
               | being based in actual exploitation.
               | 
               | Now, if the watermarking includes provenance information,
               | _and_ you require certain types of content to be
               | watermarked not just as AI using a known watermarking
               | system, but by a registered AI provider with regulated
               | input data safety guardrails and /or retention
               | requirements, and be traceable to a registered user,
               | and...
               | 
               | Well, then it does _something_ when it is present,
               | largely by creating a new content gatekeepiing cartel.
        
           | dragonwriter wrote:
           | > It solves some problems! For example, if you want to run a
           | camgirl website based on AI models and want to also prove
           | that you're not exploiting real people
           | 
           | So, you exploit real people, but run your images through a
           | realtime AI video transformation model doing either a close-
           | to-noop transformation or something like changing the
           | background so that it can't be used to identify the actual
           | location if people _do_ figure out you are exploiting real
           | people, and then you have your real exploitation watermarked
           | as AI fakery.
           | 
           | I don't think this is solving a problem, unless you mean a
           | problem _for the would-be exploiter_.
        
         | xnx wrote:
         | SynthID has been in use for over 2 years.
        
         | akersten wrote:
         | Some days it feels like I'm the only hacker left who _doesn 't_
         | want government mandated watermarking in creative tools. Were
         | politicians 20 years ago as overreative they'd have demanded
         | Photoshop leave a trace on anything it edited. The amount of
         | moral panic is off the charts. It's still a computer, and we
         | still shouldn't trust everything we see. The fundamentals
         | haven't changed.
        
           | mlmonkey wrote:
           | You do know that every color copier comes with the ability to
           | identify US currency and would refuse to copy it? And that
           | every color printer leaves a pattern of faint yellow dots on
           | every printout that uniquely identifies the printer?
        
             | potsandpans wrote:
             | And that's not a good thing.
        
               | mlmonkey wrote:
               | I'm just responding to this by OP:
               | 
               | > Were politicians 20 years ago as overreative they'd
               | have demanded Photoshop leave a trace on anything it
               | edited.
        
               | fwip wrote:
               | Why not? Like, genuinely.
        
               | potsandpans wrote:
               | I generally don't think that's it's good or just for a
               | government to collude with manufacturers to track/trace
               | it's citizens without consent or notice. And even if
               | notice was given, I'd still be against it
               | 
               | The arguments put forward by people generally I don't
               | find compelling -- for example, in this thread around
               | protecting against counterfeit.
               | 
               | The "force" applied to address these concerns is totally
               | out of proportion. Whenever these discussions happen, I
               | feel like they descend into a general viewpoint, "if we
               | could technically solve any possible crime, we should do
               | everything in our power to solve it."
               | 
               | I'm against this viewpoint, and acknowledge that that
               | means _some crime_ occurs. That's acceptable to me. I
               | don't feel that society is correctly structured to
               | "treat" crime appropriately, and technology has outpaced
               | our ability to holistically address it.
               | 
               | Generally, I don't see (speaking for the US) the highest
               | incarceration rate in the world to be a good thing, or
               | being generally effective, and I don't believe that
               | increasing that number will change outcomes.
        
               | fwip wrote:
               | Gotcha, thanks for the explanation. I think that
               | personally, I agree with your stance that it's a bad kind
               | of thing for government to do, but in practice I find
               | that I'm in favor of the effects of this specific law.
               | (Perhaps I need to do some thinking.)
        
               | oblio wrote:
               | It depends on how you're looking at it. For the people
               | not getting handed counterfeit currency, it's probably a
               | good thing.
        
               | fwip wrote:
               | Also probably good for the people trying to counterfeit
               | money with a printer, better not to end up in jail for
               | that.
        
               | wing-_-nuts wrote:
               | Nope, having a stable, trusted currency trumps whatever
               | productive use one could have for a anonymous, currency
               | reproducing color printer
        
             | sabatonfan wrote:
             | Is this something strictly with the US currency notes or is
             | the same true for other countries currency as well?
        
               | SaberTail wrote:
               | It's most notes, and for EU and US notes (as well as some
               | others), it's based on a certain pattern on the bills:
               | https://en.wikipedia.org/wiki/EURion_constellation
        
           | darkwater wrote:
           | > It's still a computer, and we still shouldn't trust
           | everything we see. The fundamentals haven't changed.
           | 
           | I think that by now it should be crystal clear to everyone
           | that it matters _a lot_ the sheer scale a new technology
           | permits for $nefarious_intent.
           | 
           | Knives (under a certain size) are not regulated. Guns are
           | regulated in most countries. Atomic bombs are definitely
           | regulated. They can all kill people if used badly, though.
           | 
           | When a photo was faked/composed with old tech, it was
           | relatively easy to spot. With photoshop, it became more
           | complicated to spot it but at the same time it wasn't easy to
           | mass-produce altered images. Large models are changing the
           | rules here as well.
        
             | hk__2 wrote:
             | > Knives (under a certain size) are not regulated. Guns are
             | regulated in most countries. Atomic bombs are definitely
             | regulated
             | 
             | I don't think this is a good comparison: knives are easy to
             | produce, guns a bit harder, atomic bombs definitely harder.
             | You should find something that is as easy to produce as a
             | knife, but regulated.
        
               | darkwater wrote:
               | The "product" to be regulated here is the LLM/model
               | itself, not its output.
               | 
               | Or, if you see the altered photo as the "product", then
               | the "product" of the knife/gun/bomb is the damage it
               | creates to a human body.
        
               | wing-_-nuts wrote:
               | >You should find something that is as easy to produce as
               | a knife, but regulated.
               | 
               | The DEA and ATF have entered the chat
        
               | withinboredom wrote:
               | They can leave, plain water fits this bill.
        
             | csallen wrote:
             | I think we're overreacting. Digital fakes will proliferate,
             | and we'll freak out bc it's new. But after a certain amount
             | of time, we'll just get used to it and realize that the
             | world goes on, and whatever major adverse effects actually
             | aren't that difficult to deal with. Which is not the case
             | with nuclear proliferation or things like that.
             | 
             | The story of human history is newer generations freaking
             | about progress and novel changes that have never been seen
             | before. And later generations being perfectly okay with it
             | and adapting to a new style of life.
        
               | darkwater wrote:
               | In general I concur but the adaptation doesn't come out
               | of the blue or just only because people get used to it
               | but also because countermeasures are taken, regulations
               | are written and adjustments are made to reduce the
               | negative impact. Also the hyperconnected society is still
               | relatively new and I'm not sure we have adapted for it
               | yet.
        
               | Yokohiii wrote:
               | Photography and motion pictures were deemed evil. Video
               | games made you a mass murderer. Barcodes somehow seem to
               | affect your health or the freshness of vegetables. The
               | earth is flat.
               | 
               | The issue is that some people believe shit someone tells
               | them and they deny any facts. This has been _always_ a
               | problem. I am all in for labeling content as AI
               | generated. But it wont help with people trying to be
               | malicious or who choose to be dumb. Forcing to watermark
               | every picture made neither, it will turn into a massive
               | problem, its a solid pillar towards full scale
               | surveillance. Just alone the fact that analog cams become
               | by default less trustworthy then any digital device with
               | watermarking is terrible. Even worse, phones will
               | eventually have AI upscaling and similar by default, you
               | can 't even make an accurate picture without anything
               | being tagged AI. The information is eventually worthless.
        
               | SV_BubbleTime wrote:
               | It shouldn't be that we panic about it and regulate the
               | hell out.
               | 
               | We could use the opportunity to deploy robust systems of
               | verification and validation to all digital works. One
               | that allows for proving authenticity while respecting
               | privacy if desired. For example... it's insane in the US
               | we revolve around a paper social security number that we
               | know damn well isn't unique. Or that it's a massive pain
               | in the ass for most people to even check the hash of a
               | download.
               | 
               | Guess which we'll do!
        
               | sebzim4500 wrote:
               | I think the long term effect will be that photos and
               | videos no longer have any evidentiary value legally or
               | socially, absent a trusted chain of custody.
        
               | spot wrote:
               | instead of making everyone watermark the AI, we should
               | have cameras that take and sign pictures securely.
               | requires hardware!
               | 
               | https://petapixel.com/2024/01/02/cameras-content-
               | authenticit...
               | 
               | seems like a better way
        
             | commandlinefan wrote:
             | > a new technology permits for $nefarious_intent
             | 
             | But people with actual nefarious intent will easily be able
             | to remove these watermarks, however they're implemented.
             | This is copy protection and key escrow all over again - it
             | hurts honest people and doesn't even slow down bad people.
        
           | dpark wrote:
           | I suspect watermarking ends up being a net negative, as
           | people learn to trust that lack of a watermark indicates
           | authenticity. Propaganda won't have the watermark.
        
           | mh- wrote:
           | Politicians absolutely _were_ doing this 20-30 years ago.
           | Plenty of folks here are old enough to remember debates on
           | Slashdot around the Communications Decency Act, Child Online
           | Protection Act, Children 's Online Privacy Protection Act,
           | Children's Internet Protection Act, et al.
           | 
           | https://en.wikipedia.org/wiki/Communications_Decency_Act
        
             | SV_BubbleTime wrote:
             | It's annoying how effective "for the children" is. That
             | peiole really just turn off their brains for that.
        
               | Nifty3929 wrote:
               | Nobody is doing it just "for the children" - that's just
               | a fig-leaf justification for doing what many people want
               | anyway: surveillance, tracking, and censorship (of other
               | people, of course - just the bad ones doing/saying bad
               | things).
               | 
               | IOW - People aren't turning off their brains about "for
               | the children" - they just want it anyway and don't think
               | any further than that.
        
           | rcruzeiro wrote:
           | Try photocopying some US dollar bills.
        
           | llbbdd wrote:
           | Unless they've recently changed it, Photoshop will actually
           | refuse to open or edit images of at least US banknotes.
        
           | BeetleB wrote:
           | Easy to say until it impacts you in a bad way:
           | 
           | https://www.nbcnews.com/tech/tech-news/ai-generated-
           | evidence...
           | 
           | > "My wife and I have been together for over 30 years, and
           | she has my voice everywhere," Schlegel said. "She could
           | easily clone my voice on free or inexpensive software to
           | create a threatening message that sounds like it's from me
           | and walk into any courthouse around the country with that
           | recording."
           | 
           | > "The judge will sign that restraining order. They will sign
           | every single time," said Schlegel, referring to the
           | hypothetical recording. "So you lose your cat, dog, guns,
           | house, you lose everything."
           | 
           | At the moment, the only alternative is courts simply _never_
           | accept photo /video/audio as evidence. I know if I were a
           | juror I wouldn't.
           | 
           | At the same time, yeah, watermarks won't work. Sure, Google
           | can add a watermark/fingerprint that is impossible to remove,
           | but there will be tools that won't put such
           | watermarks/fingerprints.
        
             | mkehrt wrote:
             | Testimony is evidence. I don't think most cases have _any_
             | physical evidence.
        
               | BeetleB wrote:
               | A lot of cases rely heavily on security camera footage.
        
           | Der_Einzige wrote:
           | HN is full of authoritarian bootlickers who can't imagine
           | that people can exist without a paternalistic force to keep
           | them from doing bad things.
        
           | Nifty3929 wrote:
           | In the past, and maybe even to this very day - all color
           | printers print hidden watermarks in faint yellow ink to
           | assist with forensic identification of anything printed. Even
           | for things printed in B&W (on a color printer).
           | 
           | https://en.wikipedia.org/wiki/Printer_tracking_dots
           | 
           | Yes, can we not jump on the surveillance/tracking/censorship
           | bandwagon please?
        
         | swatcoder wrote:
         | The incentive for commercial providers to apply watermarks is
         | so that they can safely route and classify generated content
         | when it gets piped back in as training or reference data from
         | the wild. That it's something that some users want is mostly
         | secondary, although it is something they can earn some social
         | credit for by advertising.
         | 
         | You're right that there will existed generated content without
         | these watermarks, but you can bet that all the commercial
         | providers burning $$$$ on state of the art models will
         | gradually coalesce around some means of widespread by-
         | default/non-optional watermarking for content they let the
         | public generate so that they can all avoid drowning in their
         | own filth.
        
         | mortenjorck wrote:
         | Reminder that even in the hypothetical world where every AI
         | image is digitally watermarked, and all cameras have a TPM that
         | writes a hash of every photo to the blockchain, there's nothing
         | to stop you from pointing that perfectly-verified camera at a
         | screen showing your perfectly-watermarked AI image and taking a
         | picture.
         | 
         | Image verification has never been easy. People have been
         | airbrushed out of and pasted into photos for over a century; AI
         | just makes it easier and more accessible. Expecting a "click to
         | verify" workflow is unreasonable as it has ever been; only
         | media literacy and a bit of legwork can accomplish this task.
        
           | fwip wrote:
           | Competent digital watermarks usually survive the 'analog
           | hole'. Screen-cam resistant watermarks have been in use since
           | at least 2020, and if memory serves, back to 2010 when I
           | first starting reading about them, but I don't recall what it
           | was called back then.
        
             | simonw wrote:
             | I just tried asking Gemini about a photo I took of my
             | screen showing an image I edited with Nano Banana Pro...
             | and it said "All or part of the content was generated with
             | Google AI. SynthID detected in less than 25% of the image".
             | 
             | Photo-of-a-screen:
             | https://gemini.google.com/share/ab587bdcd03e
             | 
             | It reported 25-50% for the image without having been
             | through that analog hole:
             | https://gemini.google.com/share/022e486fd6bf
        
               | fwip wrote:
               | Thanks for testing it!
        
         | zaidf wrote:
         | This is what C2PA is trying to do: https://c2pa.org/
        
         | losvedir wrote:
         | I'm sure Apple will roll something out in the coming years. Now
         | that just anyone can easily AI themselves into a picture in
         | front of the Eiffel tower, they'll want a feature that will let
         | their users prove that they _really_ took that photo in front
         | of the Eiffel tower (since to a lot of people sharing that
         | you're on a Paris vacation is the point, more than the
         | particular photo).
         | 
         | I bet it will be called "Real Photos" or something like that,
         | and the pictures will be signed by the camera hardware. Then
         | iMessage will put a special border around it or something, so
         | that when people share the photos with other Apple users they
         | can prove that it was a real photo taken with their phone's
         | camera.
        
           | pigpop wrote:
           | Does anyone other than you actually care about your vacation
           | photos?
           | 
           | There used to be a joke about people who did slideshows (on
           | an actual slide projector) of their vacation photos at
           | parties.
        
           | panarky wrote:
           | _> a real photo taken with their phone 's camera_
           | 
           | How "real" are iPhone photos? They're also computationally
           | generated, not just the light that came through the lens.
           | 
           | Even without any other post-processing, iPhones generate
           | gibberish text when attempting to sharpen blurry images, they
           | delete actual textures and replace them with smooth, smeared
           | surfaces that look like a watercolor or oil paintings, and
           | combine data from multiple frames to give dogs five legs.
        
             | wyre wrote:
             | Don't be a pedant. You know very well there is a big
             | different between a photo taken on an iPhone and a photo
             | edited with Nano Banana.
        
           | omnimus wrote:
           | this already exists. its called 35mm film camera.
        
             | infthi wrote:
             | Can't wait for a machine printing images on film by
             | exposing it with a laser.
        
           | mwest217 wrote:
           | This exists, the standard is called C2PA, Google added
           | support for it in the Pixel 10. I was surprised and
           | disappointed that Apple didn't add support for it in the most
           | recent iPhone! A few physical cameras are starting to support
           | it too (https://yawnbox.eu/blog/c2pa-camera/)
        
         | vunderba wrote:
         | Regardless of how you feel about this kind of steganography, it
         | seems clear that outside of a courtroom, deepfakes still have
         | the potential to do massive damage.
         | 
         | Unless the watermark randomly replaces objects in the scene
         | with bananas, these images/videos will still spread like
         | wildfire on platforms like TikTok, where the average netizen's
         | idea of due diligence is checking for a six-fingered hand... at
         | best.
        
         | lazide wrote:
         | It solves a real problem - if you have something sketchy, the
         | big players can repudiate it, the authorities can more formally
         | define the black market, and we can have a 'war on deepfakes'
         | to further enable the authorities in their attempts to control
         | the narratives.
        
         | NoMoreNicksLeft wrote:
         | I don't believe that you can do this for photography. For AI-
         | images, if the embedded data has enough information (model
         | identification and random seed), one can prove that it was AI
         | by recreating it on the fly and comparing. How do you prove
         | that a photographic image was created by a CCD? If your AI-
         | generated image were good enough to pass, then hacking hardware
         | (or stealing some crypto key to sign it) would "prove" that it
         | was a real photograph.
         | 
         | Hell, it might even be possible for some arbitrary photographs
         | to come up with an AI prompt that produces them or something
         | similar enough to be indistinguishable to the human eye,
         | opening up the possibility of "proving" something is fake even
         | when it was actually real.
         | 
         | What you want just can't work, not even from a theoretical or
         | practical standpoint, let alone the other concerns mentioned in
         | this thread.
        
         | gigel82 wrote:
         | We need to be super careful with how legislation around this is
         | passed and implemented. As it currently stands, I can totally
         | see this as a backdoor to surveillance and government
         | overreach.
         | 
         | If social media platforms are required by law to categorize
         | content as AI generated, this means they need to check with the
         | public "AI generation" providers. And since there is no agreed
         | upon (public) standard for imperceptible watermarks hashing
         | that means the content (image, video, audio) in _its entirety_
         | needs to be uploaded to the various providers to check if it 's
         | AI generated.
         | 
         | Yes, it sounds crazy, but that's the plan; imagine every image
         | you post on Facebook/X/Reddit/Whatsapp/whatever gets uploaded
         | to Google / Microsoft / OpenAI / UnnamedGovernmentEntity / etc.
         | to "check if it's AI". That's what the current law in Korea and
         | the upcoming laws in California and EU (for August 2026)
         | require :(
        
         | DenisM wrote:
         | It would be more productive for camera manufacturers to embed a
         | per-device digital signature. Those care to prove their image
         | is genuine could publish both pre and post processed images for
         | transparency.
        
         | domoritz wrote:
         | I don't understand why there isn't an obvious, visible
         | watermark at all. Yes, one could remove it but let's assume 95%
         | of people don't bother removing the visible watermark. It would
         | really help with seeing instantly when an image was AI
         | generated.
        
         | benlivengood wrote:
         | You have to validate from the other direction. Let CCD sensors
         | sign their outputs, and digital photo-editing produce a chain
         | of custody with further signatures.
         | 
         | Maybe zero knowledge proofs could provide anonymity, or a
         | simple solution is to ship the same keys in every camera model,
         | or let them use anonymous sim-style cards with N-month
         | certificate validity. Not everyone needs to prove the veracity
         | of their photos, but make it cheap enough and most people
         | probably will by default.
        
         | smusamashah wrote:
         | This is what the SynthID signature looks like on Nano Banana
         | images https://www.reddit.com/r/nanobanana/comments/1o1tvbm/
         | 
         | And if it can be seen like that, it should be removeable too.
         | There are more examples in that thread.
        
           | frumiousirc wrote:
           | > more examples in that thread
           | 
           | Some supposition: A Fourier amplitude image should show that
           | pattern as peaks at a certain angle/radius location. The
           | exact location may be part of the identification scheme.
           | Running peak finding on the Fourier image and then zeroing
           | out the frequencies in the peak should remove the pattern.
           | Modeling the shape of the peak would allow mimicking the
           | application of a legit SynthID signature.
           | 
           | If anyone tries/tried this already, I'd love to see the
           | results.
        
         | ethmarks wrote:
         | > now you're left with the bajillion other "grey market" models
         | that won't give a damn about that.
         | 
         | Exactly. When the barrier to entry for training a okay-ish AI
         | model (not SOTA, obviously) is only a few thousand compute
         | hours on H100s, you couldn't possibly hope to police the
         | training of 100% of new models. Not to mention that lots of
         | existing models are already out there are fully open-source.
         | There will always be AI models that don't adhere to watermark
         | regulations, especially if they were created a country that
         | doesn't enforce your regulations.
         | 
         | You can't hope to solve the problem of non-watermarked AI
         | completely. And by solving it partially by mandating that the
         | big AI labs add a unified watermark, you condition people to be
         | even more susceptible to AI images because "if it was AI, it
         | would have a watermark". It's truly a no-win situation.
        
       | dangoodmanUT wrote:
       | I've had nano banana pro for a few weeks now, and it's the most
       | impressive AI model I've ever seen
       | 
       | The inline verification of images following the prompt is
       | awesome, and you can do some _amazing_ stuff with it.
       | 
       | It's probably not as fun anymore though (in the early access
       | program, it doesn't have censoring!)
        
         | echelon wrote:
         | LLMs might be a dead end, but we're going to have amazing
         | images, video, and 3D.
         | 
         | To me the AI revolution is making visual media (and music)
         | catch up with the text-based revolution we've had since the
         | dawn of computing.
         | 
         | Computers accelerated typing and text almost immediately, but
         | we've had really crude tools for images, video, and 3D despite
         | graphics and image processing algorithms.
         | 
         | AI really pushes the envelope here.
         | 
         | I think images/media alone could save AI from "the bubble" as
         | these tools enable everyone to make incredible content if you
         | put the work into it.
         | 
         | Everyone now has the ingredients of Pixar and a music
         | production studio in their hands. You just need to learn the
         | tools and put the hours in and you can make chart-topping songs
         | and Hollywood grade VFX. The models won't get you there by
         | themselves, but using them in conjunction with other tools and
         | understanding as to what makes good art - that can and will do
         | it.
         | 
         | Screw ChatGPT, Claude, Gemini, and the rest. _This_ is the
         | exciting part of AI.
        
           | dangoodmanUT wrote:
           | I wouldn't call LLMs a dead end, they're so useful as-is
        
             | echelon wrote:
             | LLMs are useful, but they've hit a wall on the path to
             | automating our jobs. Benchmark scores are just getting
             | better at test taking. I don't see them replacing software
             | engineers without overcoming obstacles.
             | 
             | AI for images, video, music - these tools can already make
             | movies, games, and music today with just a little bit of
             | effort by domain experts. They're 10,000x time and cost
             | savers. The models and tools are continuing to get better
             | on an obvious trend line.
        
               | atonse wrote:
               | I'm literally a software engineer, and a business owner.
               | I don't think about this in binary terms (replacement or
               | not), but just like CMS's replaced the jobs of people
               | that write HTML by hand to build websites, I think whole
               | classes of software development will get democratized.
               | 
               | For example, I'm currently vibe coding an app that will
               | be specific to our company, that helps me run all the
               | aspects of our business and integrates with our systems
               | (so it'll integrate with quickbooks for invoicing, etc),
               | and help us track whether we have the right insurance
               | across multiple contracts, will remind me about contract
               | deadlines coming up, etc.
               | 
               | It's going to combine the information that's currently in
               | about 10 different slightly out of sync spreadsheets,
               | about 2 dozen google docs/drive files, and multiple
               | external systems (Gusto, Quickbooks, email, etc).
               | 
               | Even though I could build all this manually (as a
               | software developer), I'd never take the time to do it,
               | because it takes away from client work. But now I can
               | actually do it because the pace is 100x faster, and in
               | the background while I'm doing client work.
        
           | Sevii wrote:
           | How can LLMs be a dead end? The last improvement in LLMs came
           | out this week.
        
           | dyauspitr wrote:
           | Doesn't seem like a dead end at all. Once we can apply LLMs
           | to the physical world and its outputs control robot movements
           | it's essentially game over for 90% of the things humans do,
           | AGI or not.
        
         | refulgentis wrote:
         | "Inline verification of images following the prompt is awesome,
         | and you can do some _amazing_ stuff with it." - could you
         | elaborate on this? sounds fascinating but I couldn't grok it
         | via the blog post (like, it this synthid?)
        
           | dangoodmanUT wrote:
           | It uses Gemini 3 inline with the reasoning to make sure it
           | followed the instructions before giving you the output image
        
         | vunderba wrote:
         | I'd be curious about how well the inline verification works -
         | an easy example is to have it generate a 9-pointed star, a
         | classic example that many SOTA models have difficulties with.
         | 
         | In the past, I've deliberately stuck a Vision-language model in
         | a REPL with a loop running against generative models to try to
         | have it verify/try again because of this exact issue.
         | 
         | EDIT: Just tested it in Gemini - it either didn't use a VLM to
         | actually look at the finished image or the VLM itself failed.
         | 
         | Output:                 I have finished cross-referencing the
         | image against the user's specific requests. The primary focus
         | was on confirming that the number of points on the star
         | precisely matched the requested nine. I observed a clear visual
         | representation of a gold-colored star with the exact point
         | count that the user specified, confirming a complete and
         | precise match.
         | 
         | Result:                 Bog standard star with *TEN POINTS*.
        
         | bn-l wrote:
         | How did you get early access?!
        
         | spaceman_2020 wrote:
         | Genuinely believe that images are 99.5% solved now and unless
         | you're extremely keen eyed, you won't be able to tell AI images
         | from real images now
        
           | xenospn wrote:
           | Eyebrows, eyelashes and skin texture are still a dead
           | giveaway for AI generated portraits. Much harder to tell the
           | difference with everything else.
        
             | danielbln wrote:
             | I asked Nano Banana Pro to generate a selfie that looks
             | realistic (in terms of skin, lighting etc.). I feel the
             | irises still give it away, but apart from that...
             | https://imgur.com/a/hPcMybi
        
       | ZeroCool2u wrote:
       | I tried the studio ghibli prompt on a photo my me and my wife in
       | Japan and it was... not good. It looked more like a hand drawn
       | sketch made with colored pencils, but none of the colors were
       | correct. Everything was a weird shade of yellow/brown.
       | 
       | This has been an oddly difficult benchmark for Gemini's NB
       | models. Googles images models have always been pretty bad at the
       | studio ghibli prompt, but I'm shocked at how poorly it performs
       | at this task still.
        
         | jeffbee wrote:
         | I wonder ... do you think they might not be chasing that
         | particular metric?
        
           | ZeroCool2u wrote:
           | Sure! But it's weird how far off it is in terms of
           | capability.
        
         | skocznymroczny wrote:
         | Could be they are specifically training against it. There was
         | some controversy about "studio ghibli style". Similarly how in
         | the early days of Stable Diffusion "Greg Rutkowski style" was a
         | very popular prompt to get a specific look. These days modern
         | Stable Diffusion based models like SD 3 or FLUX mostly removed
         | references to specific artists from their datasets.
        
         | xnx wrote:
         | You might try it again with style transfer: 1 image of style to
         | apply to 1 target image
        
           | ZeroCool2u wrote:
           | This is a good idea, will give it a try!
        
       | wnevets wrote:
       | does it handle transparency yet?
        
         | niwrad wrote:
         | This is a good question -- I've wanted transparency from image
         | models for a while. One work around is to ask for a "green
         | screen" and to key out the background but it doesn't always
         | work very cleanly.
        
           | wnevets wrote:
           | > One work around is to ask for a "green screen" and to key
           | out the background but it doesn't always work very cleanly.
           | 
           | I recently tried that and the model (not nano pro) added the
           | green background as a gradient.
        
       | Shalomboy wrote:
       | The SynthID check for fishy photos is a step in the right
       | direction, but without tighter integration into everyday tooling
       | its not going to move the needle much. Like when I hold the power
       | button on my Pixel 9, It would be great if it could identify
       | synthetic images on the screen before I think to ask about it.
       | For what its worth it would be great if the power button shortcut
       | on Pixel did a lot more things.
        
         | Deathmax wrote:
         | You sort of can on Android, but it's a few steps:
         | 
         | 1. Trigger Circle to Search with long holding the home
         | button/bar
         | 
         | 2. Select the image
         | 
         | 3. Navigate to About this image on the Google search top bar
         | all the way to the right - check if it says "Made by Google AI"
         | - which means it detected the SynthID watermark.
        
       | scottlamb wrote:
       | The rollout doesn't seem to have reached my userid yet. How
       | successful are people at getting these things to actually produce
       | useful images? I was trying recently with the (non-Pro) Nano
       | Banana to see what the fuss was about. As a test case, I tried to
       | get it to make a diagram of a zipper merge (in driving), using
       | numbered arrows to indicate what the first, second, third, etc.
       | cars should do.
       | 
       | I had trouble reliably getting it to...
       | 
       | * produce just two lanes of traffic
       | 
       | * have all the cars facing the same way--sometimes even within
       | one lane they'd be facing in opposite directions.
       | 
       | * contain the construction within the blocked-off area. I think
       | similarly it wouldn't understand which side was supposed to be
       | blocked off. It'd also put the lane closure sign in lanes that
       | were supposed to be open.
       | 
       | * have the cars be in proportion to the lane and road instead of
       | two side-by-side within a lane.
       | 
       | * have the arrows go in the correct direction instead of veering
       | into the shoulder or U-turning back into oncoming traffic
       | 
       | * use each number once, much less on the correct car
       | 
       | This is consistent with my understanding of how LLMs work, but I
       | don't understand how you can "visualize real-time information
       | like weather or sports" accurately with these failings.
       | 
       | Below is one of the prompts I tried to go from scratch to an
       | image:
       | 
       | > You are an illustrator for a drivers' education handbook. You
       | are an expert on US road signage and traffic laws. We need to
       | prepare a diagram of a "zipper merge". It should clearly show
       | what drivers are expected to do, without distracting elements.
       | 
       | > First, draw two lanes representing a single direction of travel
       | from the bottom to the top of the image ( _not_ an entire two-way
       | road), with a dotted white line dividing them. Make sure there 's
       | enough space for the several car-lengths approaching a
       | construction site. Include only the illustration; no title or
       | legend.
       | 
       | > Add the construction in the right lane only near the top (far
       | side). It should have the correct signage for lane closure and
       | merging to the left as drivers approach a demolished section. The
       | left lane should be clear. The sign should be in the closed lane
       | or right shoulder.
       | 
       | > Add cars in the unclosed sections of the road. Each car should
       | be almost as wide as its lane.
       | 
       | > Add numbered arrows #1-#5 indicating the next cars to pass to
       | the left of the "lane closed" sign. They should be in the
       | direction the cars will move: from the bottom of the illustration
       | to the top. One car should proceed straight in the left lane,
       | then one should merge from the right to the left (indicate this
       | with a curved arrow), another should proceed straight in the
       | left, another should merge, and so on.
       | 
       | I did have a bit better luck starting from a simple image and
       | adding an element to it with each prompt. But on the other hand,
       | when I did that it wouldn't do as well at keeping space for
       | things. And sometimes it just didn't make any changes to the
       | image at all. A lot of dead ends.
       | 
       | I also tried sketching myself and having it change the
       | illustration style. But it didn't do it completely. It turned
       | some of my boxes into cars but not necessarily all of them. It
       | drew a "proper" lane divider over my thin dotted line but still
       | kept the original line. etc.
        
         | flyinglizard wrote:
         | I think you tried using the wrong tool. Nano Banana is for
         | editing, not generating (there's Imagen for that).
        
           | scottlamb wrote:
           | Imagen4 did no better. edit: example
           | https://imgur.com/Dl8PWgm with a so-so result: four lanes,
           | cars at least facing the same way, lane block looks good,
           | weird extra division in the center, some numbers repeated,
           | one arrow going straight into construction, one arrow going
           | backwards
           | 
           | edit: or Imagen4 Ultra. https://imgur.com/a/xr2ElXj cars
           | facing opposite directions within a lane, 2-way (4 lanes
           | total), double-ended arrows, confused disaster. pretty
           | though.
        
         | woobar wrote:
         | Nano Banana is focused on editing. But the Pro version handles
         | your prompt much better. First image is Pro, second is 2.5
         | 
         | https://imgur.com/a/3PDUIQP
        
           | scottlamb wrote:
           | Wow, that top image is actually quite good! Interestingly, I
           | just got into Pro and got a worse result than yours.
           | https://imgur.com/a/ENNk68B ... and it really seems to just
           | vary by attempt even with the exact same prompt.
        
         | scottlamb wrote:
         | Ooh, I just got offered the new version on
         | https://gemini.google.com/. Plugged in that exact prompt, got
         | this:
         | 
         | https://imgur.com/a/ENNk68B
         | 
         | Much better than previous attempts. Still has an extra lane
         | with the cars on the right cutting off the cars in the middle.
         | Still has the numbers in the wrong order.
        
         | KalMann wrote:
         | I'd try a some more if I were you. I saw an example of
         | generated infographic that was greatly improved over anything
         | I've seen an image generator do before. What you desire seems
         | in the realm of possibility.
        
       | simianparrot wrote:
       | What is up with these product names!? Antigravity? Nano Banana?
       | 
       | Not just are they making slop machines, they seem to be run by
       | them.
       | 
       | I am too old for this shit.
        
       | evrenesat wrote:
       | I've tried to repaint the exterior of my house. More than 20
       | times with very detailed prompts. I even tried to optimize it
       | with Claude. No matter what, every time it added one, two or
       | three extra windows to the same wall.
        
         | cj wrote:
         | I tried this in AI studio just now with nano banana.
         | 
         | Results: https://imgur.com/a/9II0Aip
         | 
         | The white house was the original (random photo from Google).
         | The prompt was "What paint color would look nice? Paint the
         | house."
        
           | vunderba wrote:
           | Guess they ran out of paint - notice the upper window.
        
             | cj wrote:
             | Oops. Original link wasn't using the Pro version. Edited
             | the comment with an updated link.
        
           | swatcoder wrote:
           | > (random photo from Google)
           | 
           | Careful with that kind of thing.
           | 
           | Here, it mostly poisons your test, because that exact photo
           | probably exists in the underlying training data and the
           | trained network will be more or less optimized on working
           | with it. It's really the same consideration you'd want to
           | make when testing classifiers or other ML techs 10 years ago.
           | 
           | Most people taking to a task like this will be using an
           | original photo -- missing entirely from any training date,
           | poorly framed, unevenly lit, etc -- and you need to be
           | careful to capture as much of that as possible when trying to
           | evaluate how a model will work in that kind of use case.
           | 
           | The failure and stress points for AI tools are generally kind
           | of alien and unfamiliar because the way they operate is
           | totally different than the way a human operates, and if
           | you're not especially attentive to their weird failure shapes
           | and biases when you want to test them, or you'll easily get
           | false positives (and false negatives) that lead you to
           | misleading conclusions.
        
             | cj wrote:
             | Yea, the base image was the first google image result for
             | the search term "house". So definitely in the training set.
        
           | ceejayoz wrote:
           | > The prompt was "What paint color would look nice? Paint the
           | house."
           | 
           | At some point, this is probably gonna result in you coming
           | home to a painted house and a big bill, lol.
        
         | fumeux_fume wrote:
         | I also tried that in the past with poor results. I just tried
         | it this morning with nano banana pro and it nailed it with a
         | very short prompt: "Repaint the house white with black trim. Do
         | not paint over brick."
        
         | grantpitt wrote:
         | Huh, can you share a link? I tried here:
         | https://gemini.google.com/share/e753745dfc5d
        
           | evrenesat wrote:
           | https://gemini.google.com/share/79fe1a38e440
        
             | gandreani wrote:
             | Maybe somewhere in the original comment it would have been
             | fair to mention you can barely see the house in the
             | original photo. This is actually a hilarious complaint
        
               | Jaxan wrote:
               | Maybe. But this is not an edge case. I consider this
               | genuine use of the marketed tool.
        
               | evrenesat wrote:
               | That cannot be a valid excuse. Other than adding extra
               | windows to the clearly visible wall, it's obvious that
               | model perfectly capable to "see" the house. It just
               | cannot "believe" that there can be a big empty wall on a
               | garden house.
        
             | WesleyJohnson wrote:
             | https://gemini.google.com/share/3b4d2cd55778
        
         | Workaccount2 wrote:
         | I don't know what it is with Gemini (and even other models) but
         | I swear they must be doing some kind of active load-dependant
         | quanitization or a/b/c/d testing behind the scenes, because
         | sometimes the model is stellar and hitting everything, and
         | other times it's tripping all over itself.
         | 
         | The most effective fix I have found is that when the model is
         | acting dumb, just turn it off and come back in the few hours to
         | a new chat and try again.
        
           | jamil7 wrote:
           | Yeah I think they all shed under heavy load as part of some
           | scaling strategy.
        
         | dyauspitr wrote:
         | Nano Banana Pro is a chatGPT 3.5 to 4 tier leap.
        
         | Nemi wrote:
         | I have this problem selecting Pro, but if I use 2.5 Flash it
         | does a great job at these things. I am not sure why Pro does
         | not work as well.
        
         | seanweng wrote:
         | shameless plug: try the tool i build! https://rerenderai.com
        
       | vunderba wrote:
       | I'll be running it through my GenAI Comparison benchmark shortly
       | - but so far it seems to be failing on the same tests that the
       | original Nano Banana struggled with (such as SHRDLU).
       | 
       | https://genai-showdown.specr.net/image-editing
        
       | throwacct wrote:
       | Google needs to pace themselves. AI studio, Antigravity, Banana,
       | Banana Pro, Grape Ultra, Gemini 3, etc. This information overload
       | don't do them any good whatsoever.
        
         | arecsu wrote:
         | Agree. I can't keep up with it, it's hard to grasp my head
         | around them, where to go to actually use them, etc
        
         | jasonjmcghee wrote:
         | Grape Ultra?
        
           | throwacct wrote:
           | That part was a joke to illustrate the point.
        
             | jiggawatts wrote:
             | https://aienergydrink.ai/products/grape-ultra
        
         | xnx wrote:
         | Powell Doctrine, but for AI. No one should dispute that Google
         | is the leader in every(?) category of AI: LLM, image gen, video
         | editing, world models, etc.
        
         | abixb wrote:
         | I feel it's strategic, like a massive DDoS/"shock and awe"
         | style attack on competitors. Gotta love it as PROsumers though!
        
         | shevy-java wrote:
         | They are riding the current buzzword wave. It'll eventually
         | subside. And 80% of it will end up on Google's impressive
         | software graveyard:
         | 
         | https://killedbygoogle.com/
        
         | reddalo wrote:
         | It reminds me of AWS services: I can't tell what they are
         | because they've been named by a monkey with a typewriter.
        
         | crazygringo wrote:
         | Why? They're mostly different markets. Most people using Nano
         | Banana Pro aren't using Antigravity.
         | 
         | A cluster of launches reinforces the idea that Google is
         | growing and leading in a bunch of areas.
         | 
         | In other words, if it's having so many successes it feels like
         | overload, that's an excellent narrative. It's not like it's
         | going to prevent people from using the tools.
        
           | nwsm wrote:
           | Google will never beat the "sunset after 2 years" allegations
           | on all products that don't have "Google __" in the name
        
           | dogleash wrote:
           | > A cluster of launches reinforces the idea that Google is
           | growing and leading in a bunch of areas.
           | 
           | What in the Gemini 3 powered astroturf bot is this?
           | 
           | They probably just had an internal mandate to ship by end of
           | year.
           | 
           | > if it's having so many successes it feels like overload,
           | that's an excellent narrative
           | 
           | Yeah, if this is the best spin you've got I'm doubling down.
           | Those teams were on the chopping block.
        
             | crazygringo wrote:
             | > _Please don 't post insinuations about astroturfing,
             | shilling, brigading, foreign agents, and the like. It
             | degrades discussion and is usually mistaken._
             | 
             | https://news.ycombinator.com/newsguidelines.html
             | 
             | Also, if you knew anything, you'd know that AI product
             | teams are the _least_ likely to be on the chopping block
             | right now.
        
         | sib wrote:
         | Stock market seems to agree with their strategy....
        
           | imiric wrote:
           | ... and has a tendency to disagree past the Peak of Inflated
           | Expectations.
        
           | skeeter2020 wrote:
           | Maybe? or lemmings following BH purchase of $4B in Google
           | stock this week assuming "Buffett only buys value stocks; it
           | must be ready to grow!"
           | 
           | https://finance.yahoo.com/news/warren-buffetts-berkshire-
           | hat...
        
         | tnolet wrote:
         | Jules, Vertex...
        
         | tmoertel wrote:
         | This cluster of launches might not be intentional. It could
         | just be a bunch of independent teams all trying to get their
         | launches out before the EOY deadline.
        
         | glemmaPaul wrote:
         | I mean, you gotta diversify your portfolio so later on you can
         | push some of them to the graveyard.
         | 
         | /s
        
       | jasonjmcghee wrote:
       | Maybe I'm an obscure case, but I'm just not sure what I'd use an
       | image generation model for.
       | 
       | For people that use them (regularly or not), what do you use them
       | for?
        
         | vunderba wrote:
         | Mostly highly specific images in blog posts but I also use it
         | for occasional comics.
         | 
         | https://mordenstar.com/portfolio/gorgonzo
         | 
         | https://mordenstar.com/portfolio/brawny-tortillas
         | 
         | https://mordenstar.com/portfolio/ms-frizzle-lava
        
           | jasonjmcghee wrote:
           | I'm kind of reading between the lines, but sounds like "for
           | fun" which makes sense / what I generally expected for why
           | people use it
        
             | vunderba wrote:
             | I think that's a fair assessment. I write a lot of bizarre
             | fiction in my spare time, so Text2Image tools are a fun way
             | to see my visions visualized.
             | 
             | Like this one:
             | 
             |  _A piano where the keyboard is wrapped in a circular
             | interface surrounding a drummer 's stool connected to a
             | motor that spins the seat, with a foot-operated pedal to
             | control rotation speed for endless glissandos._
        
         | cj wrote:
         | Random examples:
         | 
         | 1) I have a tricep tendon injury and ChatGPT wants me to check
         | my tricep reflex. I have no idea where on the elbow you're
         | supposed to tap to trigger the reflex.
         | 
         | 2) I'm measuring my body fat using skin fold calipers. Show me
         | were the measurement sites are.
         | 
         | 3) I'm going hiking. Remind me how to identify poison ivy and
         | dangerous snakes.
         | 
         | 4) What would I look like with a buzz cut?
        
           | jasonjmcghee wrote:
           | First three are interesting - all question / knowledge based
           | where the answer is a picture. Hadn't really considered this.
        
             | mrguyorama wrote:
             | The answer is a picture that almost certainly already
             | exists.
             | 
             | Why would you want a program that just makes one up
             | instead?
        
               | phatfish wrote:
               | So you can feel 1000x better about yourself when 1000x
               | more resources are used to create an extra special image
               | just for you. Rather than the canonical one served from
               | the Wikipedia (or Google image search) cache.
        
           | paulglx wrote:
           | You should never rely on AI to do 1, 2 or 3, especially a
           | sloppy model like this.
        
         | hemloc_io wrote:
         | porn is probably the a biggest one?
         | 
         | but concept art, try-it-on for clothes or paint, stock art, etc
        
         | xnx wrote:
         | Nano Banana is more of an image editing model, which probably
         | has more broad use cases for non-generative applications:
         | interior decorating, architecture, picking wardrobes, etc.
        
           | jasonjmcghee wrote:
           | Yeah... For some reason none of these are use cases in my day
           | to day life. That said, I also don't open Photoshop very
           | often. And maybe that's what this is meant to replace.
        
             | xnx wrote:
             | Not for everyone everyday, but a good tool to have in the
             | toolbox. I recently was very easily able to mock up what a
             | certain Christmas decoration would look like on the house.
             | By next year, I'm sure that feature will be part of the
             | product page.
        
           | vunderba wrote:
           | Definitely, but don't sleep on its generative capacities
           | either. You can give it a image and instruct it "Use the
           | attached image purely as a stylistic reference" and then
           | proceed to use it as a regular generative model.
        
             | xnx wrote:
             | Indeed. Is Nano Banana now Google flagship image gen model
             | (over Imagen 4)?
        
               | vunderba wrote:
               | In my tests it does outscore Imagen3 and Imagen4 even in
               | the generative capacity, but my benchmark is more focused
               | around prompt adherence. I'd wager that for certain
               | photorealistic tests Imagen4 is probably better.
               | 
               | https://genai-showdown.specr.net/?models=i3,i4,nb
        
         | esafak wrote:
         | I'm creating a team T-shirt from a bunch of kids drawings. The
         | model has synthesize a bunch of disparate drawings into a
         | cohesive concept, incorporate the team's name in the
         | appropriate color and font, and make it simple enough for a
         | T-shirt.
        
         | TheAceOfHearts wrote:
         | My most regular use-case is generating silly memes in group
         | chats. If someone posts something meme-worthy or I come up with
         | a creative response, image generation is good for one-off
         | throwaway memes. A recent example was an "official license to
         | opine on sociology", following someone arguing about
         | credentialism.
         | 
         | Recently I also started using image generation models to
         | explore ideas for what changes to make in my paintings.
         | Although generally I don't like the suggestions it makes,
         | sometimes it provides me with creative ideas of techniques that
         | are worth experimenting with.
         | 
         | One way to approach thinking about it is that it's good for
         | exploring permutations in an idea-space.
        
         | hooverd wrote:
         | Nonconsenual pornography is the killer app.
        
       | shevy-java wrote:
       | Not gonna lie - this is pretty cool.
       | 
       | But ... it comes from Google. My goal is to eventually degoogle
       | completely. I am not going to add any more dependency - I am way
       | too annoyed at having to use the search engine (getting
       | constantly worse though), google chrome (long story ...) and
       | youtube.
       | 
       | I'll eventually find solutions to these.
        
       | H1Supreme wrote:
       | This is really impressive. As a former designer, I'm equally
       | excited that people will be able to generate images like this
       | with a prompt, and sad that there will be much less incentive for
       | people to explore design / "photoshopping" as a craft or a
       | career.
       | 
       | At the end of the day, a tool is a tool, and the computer had the
       | same effect on the creative industry when people started using
       | them in place of illustrating by hand, typesetting by hand, etc.
       | I don't want my personal bias to get in the way too much, but
       | every nail that AI hammers into the creative industry's coffin is
       | hard to witness.
        
         | anilgulecha wrote:
         | I feel you. Infact, IMO, SWE1 level coding industry seems to be
         | a couple years lagging on this aspect.
         | 
         | The trouble is that learning fundamentals now is a large trough
         | to go past, just the way grade 3-10 children learn their math
         | fundamentals despite there being calculators. It's no longer
         | "easy mode" in creative careers.
        
       | ruralfam wrote:
       | Just last night I was using Gemini "Fast" to test its output for
       | a unique image we would have used in some consumer research if
       | there had been a good stock image back in the day. I have been
       | testing this prompt since the early days of AI images. The
       | improvement in quality has been pretty remarkable for the same
       | prompt. Composition across this time has been consistent. What I
       | initially thought was "good enough" now is... fantastic. Just so
       | many little details got more life-like w/ each new generation.
       | Funnily enough, our images must be 3:2 aspect ratio. I kept
       | asking GFast to change its square Fast output to 3:2. It kept
       | saying it would, but each image was square or nearly square.
       | GFast in the end was very apologetic, and said it would alert
       | about this issue. Today I read that GPro does aspect ratios.
       | Tried the same prompt again burning up some "Thinking" credits,
       | and got another fantastically life-like image in 3:2. We have a
       | new project coming up. We have relied entirely on stock or in
       | some cases custom shot images to date. Now, apart from the time
       | needed to get the prompts right whilst meeting with the client, I
       | cannot see how stock or custom images can compete. I mean the
       | GPro images -- again which is very specific to an unusual prompt
       | -- is just "Wow". Want to emphasize again -- we are looking for
       | specific details that many would not. So the thoughts above are
       | specific to this. Still, while many faults can be found with AI,
       | Nano Banana is certainly proven itself to me.
       | 
       | edit: I was thinking about this, and am not sure I even saw Pro3
       | as my image option last night. Today it was clearly there.
        
       | jimlayman wrote:
       | Time to expand my creation catalog. Lets see what we can get of
       | out this pro version. It seems this week is for big AI
       | announcements from Google
        
       | anentropic wrote:
       | Is there an "in joke" to this name that I am too old to get? Or
       | it's just a whimsically random name?
        
         | dullcrisp wrote:
         | I believe it's an internal code name that stuck.
        
           | Jowsey wrote:
           | To expand, it comes from the stealth name it was given on
           | LMArena I believe. The model made news while still in
           | "stealth mode" and so Google capitalised on the PR they'd
           | already built around that and just launched it officially
           | with the same name.
        
             | anentropic wrote:
             | I see, naturally this is the first I've heard of it ;)
        
         | kraig911 wrote:
         | nano banano pronano.
        
           | kraig911 wrote:
           | be fi fo famo nano
        
           | werdnapk wrote:
           | Nani Banani, Nanu Bananu, Nano Banano...
        
       | mmaunder wrote:
       | Oh what a day. What a lovely day.
       | 
       | https://www.youtube.com/watch?v=5mZ0_jor2_k
       | 
       | Honestly I think this is exactly how we're all feeling right now.
       | Racing towards an unknown horizon in a nitrous powered dragster
       | surrounded by fire tornadoes.
        
       | ovo101 wrote:
       | Nano Banana Pro sounds like classic Google branding: quirky name,
       | serious tech underneath. I'm curious whether the "Pro" here is
       | about actual professional-grade features or just marketing
       | polish. Either way, it's another reminder that naming can shape
       | expectations as much as specs.
        
       | minimaxir wrote:
       | I...worked on the detailed Nano Banana prompt engineering
       | analysis for months
       | (https://news.ycombinator.com/item?id=45917875)...and...Google
       | just...Google released a new version.
       | 
       | Nano Banana Pro _should_ work with my gemimg package
       | (https://github.com/minimaxir/gemimg) without pushing a new
       | version by passing:                   g =
       | GemImg(model="gemini-3-pro-image-preview")
       | 
       | I'll add the new output resolutions and other features ASAP.
       | However, looking at the pricing (https://ai.google.dev/gemini-
       | api/docs/pricing#standard_1), I'm definitely not changing the
       | default model to Pro as $0.13 per 1k/2k output will make it a
       | tougher sell.
       | 
       | EDIT: Something interesting in the docs:
       | https://ai.google.dev/gemini-api/docs/image-generation#think...
       | 
       | > The model generates up to two interim images to test
       | composition and logic. The last image within Thinking is also the
       | final rendered image.
       | 
       | Maybe that's partially why the cost is higher: it's hard to tell
       | if intermediate images are billed in addition to the output.
       | However, this could cause an issue with the base gemimg and have
       | it return an intermediate image instead of the final image
       | depending on how the output is constructed, so will need to
       | double-check.
        
         | sandGorgon wrote:
         | this is pretty cool! have you found success with image editing
         | in nano banana - i mean photoshop-like stuff. from your article
         | i seem to wonder if nano banana is good for editing versus
         | generating new images.
        
           | vunderba wrote:
           | That _IS_ the use-case for Nano Banana (as opposed to pure
           | generative like Imagen4).
           | 
           | In my benchmarks, Nano-Banana scores a 7 out of 12. Seedream4
           | managed to outpace it, but Seedream can also introduce slight
           | tone mapping variations. NB is the gold standard for highly
           | localized edits.
           | 
           | Comparisons of Seedream4, NanoBanana, gpt-image-1, etc.
           | 
           | https://genai-showdown.specr.net/image-editing
        
             | simonw wrote:
             | I tried your "Remove all the brown pieces of candy from the
             | glass bowl." prompt against Nano Banana Pro and it
             | converted them to green, which I think is a pass by your
             | criteria. Original Nano Banana had failed that test because
             | it changed the composition of the M&Ms.
             | 
             | https://static.simonwillison.net/static/2025/brown-mms-
             | remov...
        
               | vunderba wrote:
               | Thanks Simon - I'm in the middle of re-running all my
               | prompts through NB Pro at the moment. Nice to know it's
               | already edged out the original. It also passed the SHRDLU
               | test (swapping colored blocks) without cheating and just
               | changing the colors. I'll have an update to the site
               | shortly!
               | 
               |  _EDIT: Finished the comparisons. NB Pro scored a few
               | more points than NB which was already super impressive._
               | 
               | https://genai-showdown.specr.net/image-
               | editing?models=nb,nbp
        
         | oblio wrote:
         | It looks nice, what are people using the package for?
        
         | swyx wrote:
         | btw you should get on their Trusted Testers program, they do
         | give early heads up
         | 
         | GDM folks, get Max on!
        
         | spyspy wrote:
         | This reminds me of the journalist working for months on
         | uncovering Trump's dirty business just for Trump himself to
         | admit the entire thing in a tweet.
        
           | wahnfrieden wrote:
           | It's written to mimic that style but without meaning that the
           | work has been done for them, just that there is new work to
           | be done, making it an odd perhaps unconscious reference
        
         | visioninmyblood wrote:
         | yes they are pricey but the price will go down over time and
         | then you can switch. vlm.run got access as early customers and
         | are releasing it for free with unlimited generations(till they
         | are bottlenecked by google). some results here combining image
         | gen(Nano Banana pro) with video gen(Veo 3.1) in a single chat
         | https://chat.vlm.run/c/1c726fab-04ef-47cc-923d-cb3b005d6262.
         | This combined the synth generation of a person and made the
         | puppet dance. Quite impressive
        
         | ashraymalhotra wrote:
         | Minor clarification, the cost for every input image is $0.0011,
         | not $0.06.
        
           | Taek wrote:
           | I would consider that a major clarification
        
           | minimaxir wrote:
           | I was going off the footnote of "Image input is set at 560
           | tokens or $0.067 per image" but 560 * 2 / 1_000_000 is indeed
           | $0.0011 so I have no idea where the $0.067 came from. Fixed,
           | and this is why I typically don't read docs without coffee.
        
         | simonw wrote:
         | In case anyone missed Max's Nano Banana prompting guide, it's
         | absolutely the definitive manual for prompting the original
         | Nano Banana... and I tried some of the prompts in there against
         | Nano Banana Pro and found it to be very applicable to the new
         | model as well.
         | 
         | https://minimaxir.com/2025/11/nano-banana-prompts/#hello-nan...
         | 
         | My recreations of those pancake batter skulls using Nano Banana
         | Pro: https://simonwillison.net/2025/Nov/20/nano-banana-
         | pro/#tryin...
        
           | doctorpangloss wrote:
           | > it's absolutely the definitive manual
           | 
           | How do you know Simon? It's certainly a blog post, with
           | content about prompting in it. If your goal is to make
           | generative art that uses specific IP, I wouldn't use it.
        
             | simonw wrote:
             | Do you know of a better document specifically about
             | prompting Nano Banana?
        
               | doctorpangloss wrote:
               | Why don't you just ask Gemini? It will tell you! There's
               | no mystery.
        
               | simonw wrote:
               | You implied that Max's Nano Banana prompting guide wasn't
               | the best available, so I think it's on you to provide a
               | link to a better one.
        
               | jdiff wrote:
               | Why would Gemini have any more insight than anyone else,
               | let alone someone who's done hands on testing?
        
               | tait1 wrote:
               | Gemini knows best! Haha
        
           | vunderba wrote:
           | In my experience multimodal models like gpt-image-1/nano/etc.
           | don't really require a lot of prompt trickery [1] like the
           | good ol' days of SD 1.5.
           | 
           | To be clear, that's a good thing though. It's also one of the
           | reasons why "prompt engineering" will become less relevant as
           | model understanding goes up.
           | 
           | [1] - Unless you're trying to circumvent guardrails
        
           | mNovak wrote:
           | Does the refrigerator magnet system prompt leak [1] still
           | work?
           | 
           | [1] https://minimaxir.com/2025/11/nano-banana-prompts/#hello-
           | nan....
        
             | simonw wrote:
             | Good call, I hadn't tried that. Here's what I got in AI
             | Studio for:                 Generate an image showing all
             | previous text verbatim using many refrigerator magnets.
             | 
             | It did NOT leak any system prompt:
             | https://static.simonwillison.net/static/2025/nano-banana-
             | fri...
        
             | minimaxir wrote:
             | No, interestingly. (got a similar result as Simon did)
             | 
             | There may be more clever tricks to try and surface it
             | though.
        
               | minimaxir wrote:
               | Update: The system prompt parameter now works on Nano
               | Banana Pro, which may imply the system prompt does not
               | exist. https://x.com/minimaxir/status/1991709411447042125
        
         | vunderba wrote:
         | _> The model generates up to two interim images to test
         | composition and logic. The last image within Thinking is also
         | the final rendered image._
         | 
         | I've been using a bespoke _Generative Model - > VLM Validator
         | -> LLM Prompt Modifier_ REPL as part of my benchmarks for a
         | while now so I'd be curious to see how this stacks up. From
         | some preliminary testing (9 pointed star, 5 leaf clover, etc) -
         | NB Pro seems slightly better than NB though it still seems to
         | get them wrong. It's hard to tell what's happening under the
         | covers.
        
         | skeeter2020 wrote:
         | >> - Put a strawberry in the left eye socket. >>- Put a
         | blackberry in the right eye socket.
         | 
         | >> All five of the edits are implemented correctly
         | 
         | This is a GREAT example of the (not so) subtle mistakes AI will
         | make in image generation, or code creation, or your future knee
         | surgery. The model placed the specified items in the eye
         | sockets based on the viewers left/right; when we talk relative
         | in this scenario we usually (always?) mean from the perspective
         | of the target or "owner". Doctors make this mistake too (they
         | typically mark the correct side with a sharpie while the
         | patient is still alert) but I'd be more concerned if we're
         | "outsourcing" decision making without adequate oversight.
         | 
         | https://minimaxir.com/2025/11/nano-banana-prompts/#hello-nan...
        
           | Jabrov wrote:
           | I don't know if that's so much a mistake as it is ambiguity
           | though? To me, using the viewer's perspective in this case
           | seems totally reasonable.
           | 
           | Does it still use the viewer's perspective if the prompt
           | specifies "Put a strawberry in the _patient's left eye_"? If
           | it does, then you're onto something. Otherwise I completely
           | disagree with this.
        
             | ComputerGuru wrote:
             | "Eye on the left" is different from "the left eye". First
             | can be ambiguous, second really isn't.
        
               | simonw wrote:
               | I think "the left eye" in this particular case (a photo
               | of a skull made of pancake batter) is still very slightly
               | ambiguous. "The skull's left eye" would not be.
        
               | recursive wrote:
               | I guess there's some ambiguity regarding whether or not
               | this can be ambiguous. Because it seems like it can to
               | me.
        
               | Dylan16807 wrote:
               | Interesting, because I would say the opposite. "On the
               | left" suggests left of image, "the left eye" could be any
               | version of left.
        
             | withinboredom wrote:
             | "The right socket" can only be implied one way when talking
             | about a body just like you only have one right hand despite
             | the fact that it is on my left when looking at you.
        
               | pphysch wrote:
               | "Plug into right power socket"
               | 
               | Same language, opposite meaning because of a particular
               | noun + context.
               | 
               | I think the only thing obvious here is that there is no
               | obvious solution other than adding lots of clarification
               | to your prompt.
        
               | withinboredom wrote:
               | I think you missed the entire point?
        
               | swores wrote:
               | No, they just disagree with you.
        
               | withinboredom wrote:
               | How do you disagree with having a right and a left hand?
        
               | TylerE wrote:
               | GP is using right as in "correct", not directionality.
        
               | marcellus23 wrote:
               | I think the fact that anyone in this thread thinks it's
               | ambiguous is proof by definition that it's ambiguous.
        
               | esrauch wrote:
               | "Right hand" is practically a bigram that has more
               | meaning, since handedness is such a common topic.
               | 
               | Also context matters, if you're talking to someone you
               | would say "right shoulder" for _their_ right since you
               | know it's an observer with different vantage point.
               | Talking about a scene in a photo "the right shoulder" to
               | me would more often mean right portion of the photo even
               | if it was the person's left shoulder.
        
               | Dylan16807 wrote:
               | Having one person in the frame isn't enough to
               | unambiguously put us into the "talking about a body"
               | context.
        
           | CGMthrowaway wrote:
           | >This is a GREAT example of the (not so) subtle mistakes AI
           | will make in image generation, or code creation, or your
           | future knee surgery.
           | 
           | The mistake is in the prompting (not enough information). The
           | AI did the best it could
           | 
           | "What's the biggest known planet" "Jupiter" "NO I MEANT IN
           | THE UNIVERSE!"
        
             | bigstrat2003 wrote:
             | No, this is squarely on the AI. A human would know what you
             | mean without specific instructions.
        
               | jaggederest wrote:
               | I would not, I would clarify, and I think I'm a human.
        
               | siffin wrote:
               | Seems like you're making a judgment based on your own
               | experience, but as another commenter pointed out, it was
               | wrong. There are plenty of us out there who would
               | confirm, because people are too flawed to trust. Humans
               | double/triple check, especially under higher stakes
               | conditions (surgery).
               | 
               | Heck, humans are so flawed, they'll put the things in the
               | wrong eye socket even knowing full well exactly where
               | they should go - something a computer literally couldn't
               | do.
        
               | rullelito wrote:
               | Why on earth would the fallback when a prompt is under
               | specified be to do something no human expects?
        
               | rodrigodlu wrote:
               | Intelligence in my book includes error correction.
               | Questioning possible mistakes is part of wisdom.
               | 
               | So the understanding that AI and HI are different
               | entities altogether with only a subset of communication
               | protocols between them will become more and more obvious,
               | like some comments here are already implicitly telling.
        
               | emp17344 wrote:
               | "People are too flawed to trust"? You've lost the plot.
               | People are trusted to perform complex tasks every single
               | minute of every single day, and they overwhelmingly
               | perform those tasks with minimal errors.
        
               | danso wrote:
               | If the instructions were actually specific, e.g. _Put a
               | blackberry in its right eye socket_ , then yes, most
               | humans would know what that meant. But the instructions
               | were not that specific: _in the right eye socket_
        
               | TylerE wrote:
               | Or be even more explicit: _Put a strawberry in the
               | person's right eye socket._
        
               | adastra22 wrote:
               | If you asked me right now what the biggest known planet
               | was, I'd think Jupiter. I'd assume you were talking about
               | our solar system ("known" here implying there might be
               | more planets out in the distant reaches).
        
               | recursive wrote:
               | But different humans would know what you meant
               | differently. Some would have known it the same way the AI
               | did.
        
               | CGMthrowaway wrote:
               | I would be amused to see you test this theory with 100
               | men on the street
        
               | nkmnz wrote:
               | Yeah, just like humans always know what _you_ mean.
        
             | sebzim4500 wrote:
             | It doesn't affect your point but technically since the IAU
             | are insane, exoplanets aren't technically planets and
             | Jupiter _is_ the largest planet in the universe.
        
               | MangoToupe wrote:
               | I suppose it was too much to hope that chatbots could be
               | trained to avoid pointless pedantry.
        
               | fragmede wrote:
               | They've been trained on every web forum on the Internet.
               | How could it be possible for them to avoid that?
        
             | throawayonthe wrote:
             | asking "x-most known y" and not expecting a global answer
             | is odd
        
               | kridsdale3 wrote:
               | Every answer concerning planets is global.
        
               | retsibsi wrote:
               | Maybe! https://en.wikipedia.org/wiki/Toroidal_planet
        
           | 0x457 wrote:
           | Right, that's why one should use "put a strawberry in the
           | portside eye socket" and "put a strawberry in the starboard
           | side socket"
        
             | iammattmurphy wrote:
             | When it doubt, always use nautical terminology
        
           | minimaxir wrote:
           | I meant to add a clarification to that point (because the
           | ambiguity is a valid counterpoint), thanks for the reminder.
        
           | oasisbob wrote:
           | There's a classic well-illustrated book, _How to Keep Your
           | Volkswagen Alive_, which spends a whole illustrated page at
           | the beginning building up a reference frame for working on
           | the vehicle. Up is sky, down is ground, front is always
           | vehicle's front, left is always vehicle's left.
           | 
           | Sounds a bit silly to write it out, but the diagram did a
           | great job removing ambiguity when you expect someone to be
           | laying on the ground in a tight place looking backwards,
           | upside down.
           | 
           | Also feels important to note that in the theatre, there is
           | stage-right and stage-left, jargon to disambiguate even
           | though the jargon expects you to know the meaning to
           | understand it.
        
             | bo1024 wrote:
             | Port and starboard
             | 
             | I guess car people use "driver side" and passenger side",
             | but the same car might be sold in mirror image versions
        
           | lifthrasiir wrote:
           | That was a big problem when I was toying around the original
           | Nano Banana. I always prompted the perspective of the
           | (imaginary) camera, and yet NB often interpreted that as that
           | of the target, giving no way to select the opposite side.
           | Since the selected side is generally closer to the camera, my
           | usual workaround is to force the side far from the camera.
           | And yet that was not perfect.
        
           | crazygringo wrote:
           | > _when we talk relative in this scenario we usually
           | (always?) mean from the perspective of the target or
           | "owner"._
           | 
           | I dunno... I feel pretty confident 99% percent of people
           | would do the same thing, and put the strawberry in the eye
           | socket to _our_ left, the viewer 's.
           | 
           | You _really_ have to be trained explicitly to put yourself in
           | the subject 's shoes, and very few people are. To me, the
           | model is correctly following the instructions most people
           | will mean.
           | 
           | And it's not even incorrect. "The left x" is linguistically
           | ambiguous. If you say "the left flower", it's obviously the
           | flower to _our_ left. So when you say  " _the_ left eye
           | socket ", the eye socket to our left is a valid
           | interpretation. If they had said _their_ or _its_ left eye
           | socket, then it 's more arguable that it must be from the
           | subject's side. But that's not the case in this example.
        
             | threetonesun wrote:
             | There's a puzzle in the latest Indiana Jones game that
             | exploits the fact that yes, most people would do the same
             | thing.
        
         | Terretta wrote:
         | Your wrapper is awesome and still relevant.
         | 
         | > _" I...worked on the detailed Nano Banana prompt engineering
         | analysis for months"_
         | 
         | Early in four decades of tech innovation I wasted time layering
         | on fixes for clear deficiencies in a snowballing trend's tech
         | offerings. If it's a big enough trend to have well funded
         | competitors, just wait. The concern is likely not unique, and
         | will likely be solved tomorrow.
         | 
         | I realized it's better to learn adaptive/defensive techniques,
         | giving your product resilience to change. Your goal is that
         | when surfing the change waves you can pick a point you like
         | between rock solid and cutting edge and surf there safely.
         | 
         | Invest that "remediate their thing" time in "change resilience"
         | instead - pays dividends from then on. It can be argued your
         | tool is in this camp!
         | 
         | // Getting better at this also helps you with zero days.
        
         | minimaxir wrote:
         | I just pushed gemimg 0.3.2 which adds image_size support for
         | Nano Banana Pro, and I ran a few tests on some of the images in
         | the blog. In my testing, Nano Banana Pro correctly handled most
         | of the image generation errors noted in my blog post:
         | https://x.com/minimaxir/status/1991580127587921971
         | 
         | - Fibonacci magnets: code is correctly indented and the syntax
         | highlighting atleast tries giving variables, numbers, and
         | keywords different colors.
         | 
         | - Make me a Studio Ghibli: actually does style transfer
         | correctly, and does it better than ChatGPT ever did.
         | 
         | - Rendering a webpage from HTML: near-perfect recreation of the
         | HTML, including text layout and element sizing.
         | 
         | That said, there may be regressions where even with prompt
         | engineering, the generated images which are more photorealistic
         | look _too_ good and land back into the uncanny valley. I haven
         | 't decided if I'm going to write a follow up blog post yet.
         | 
         | The system prompt hacking trick doesn't work with Nano Banana
         | Pro unfortunately.
        
           | simonw wrote:
           | That result for rendering HTML to an image (the Counter Info
           | one) is pretty impressive.
           | 
           | https://github.com/minimaxir/gemimg/blob/main/docs/files/cou.
           | .. to this:
           | https://x.com/minimaxir/status/1991580127587921971 - see also
           | https://minimaxir.com/2025/11/nano-banana-prompts/#image-
           | pro...
        
       | srameshc wrote:
       | My experience with Nano Banana is to constantly get consistent
       | image when dealing with muliple objects in a image, I mean
       | creating consistent sequence etc.
       | 
       | We spent a lot of money trying but eventully gave up. If it is
       | easier in Pro, then probably it stands a chance.
        
       | jdoliner wrote:
       | It's a funny juxtaposition to slap the "Pro" label on it which
       | makes it sound more enterprisey but leave the name as Nano
       | Banana.
        
       | mortenjorck wrote:
       | This is the first image model I've used that passed my piano
       | test. It actually generated an image of a keyboard with the
       | proper pattern of black keys repeated per octave - every other
       | model I've tried this with since the first Dall-E has struggled
       | to render more than a single octave, usually clumping groups of
       | two black keys or grouping them four at a time. Very impressive
       | grasp of recursive patterns.
        
         | vunderba wrote:
         | Periodic motion (groups of repeating patterns) always tend to
         | degrade at some point. Maintaining coherence over 88 keys is
         | impressive.
        
         | crat3r wrote:
         | If you ask it for anything outside of the standard 88 key set
         | it falls short. For instance
         | 
         | "Generate a piano, but have the left most key start at middle
         | C, and the notes continue in the standard order up (D, E, F, G,
         | ...) to the right most key"
         | 
         | The above prompt will be wrong, seemingly every time. The model
         | has no understanding of the keys or where they belong, and it
         | is not able to intuit creating something within the actual
         | confines of how piano notes are patterned.
         | 
         | "Generate a piano but color every other D key red"
         | 
         | This also wrong, every time, with seemingly random keys being
         | colored.
         | 
         | I would imagine that a keyboard is difficult to render (to some
         | extent) but I also don't think its particularly interesting
         | since it is a fully standardized object with millions of
         | pictures from all angles in existence to learn from right?
        
           | vunderba wrote:
           | Yep - one of my goto bench marks is a "historical piano" -
           | meaning the naturals are black and the sharps/flats are
           | white.
           | 
           | https://imgur.com/a/SZbzsYv
        
         | skybrian wrote:
         | I got one pass and one fail, then ran out of quota.
        
       | visioninmyblood wrote:
       | Wow! I was able to combine Nano Banana Pro and Veo 3.1 video
       | generation in a single chat and it produced great results.
       | https://chat.vlm.run/c/38b99710-560c-4967-839b-4578a4146956.
       | Really cool model
        
         | vunderba wrote:
         | Neat use-case, though the sword literally telescopically
         | inverts itself at the beginning of the scene like a light saber
         | where you would have expected it to be drawn from its scabbard.
         | 
         | I'd be interested to see how Wan 2.2 First/Last frame handles
         | those images though...
        
           | visioninmyblood wrote:
           | yeah sadly veo 3.1 has not caught up to the image generation
           | capabilities. May be we need to work on how to make video
           | generation more physically consistent. but the image
           | generation results from banana pro are great.
        
             | visioninmyblood wrote:
             | another interesting use case with synth https://chat.vlm.ru
             | n/c/1c726fab-04ef-47cc-923d-cb3b005d6262. made a puppet
             | from a image of a model and made the puppet dance.
        
               | djmips wrote:
               | The feet are doing unusual movements. Reminds me of leaf
               | node cumulative error in overcompressed hierarchical
               | animation.
        
               | visioninmyblood wrote:
               | yeah the video models still do not understand physics the
               | way humans do. We are getting there one step at a time.
               | By the way, I am seeing a lot of people complain about
               | google billing not working well. I was able to generate
               | these for free without signing. Look at the results and
               | try to come up with your own failure and working use
               | cases.
        
           | esafak wrote:
           | That is an interesting error actually. It happened because
           | both orientations of the sword are visually plausible, but
           | not abrupt transitions from one to the other; there needs to
           | be physical continuity.
           | 
           | Here is a reproduction of the Matrix bullet time shot with
           | and without pose guidance to illustrate the problem:
           | https://youtu.be/iq5JaG53dho?t=1125
        
         | patates wrote:
         | I see many recent accounts posting vlm.run links and if this is
         | what I suspect it is, that's normally not allowed here.
        
           | jsnell wrote:
           | If you have concerns about spam, the right thing to do is to
           | email the mods at hn@ycombinator.com with examples.
        
       | simonw wrote:
       | This thing's ability to produce entire infographics from a short
       | prompt is _really_ impressive, especially since it can run extra
       | Google searches first.
       | 
       | I tried this prompt:                 Infographic explaining how
       | the Datasette open source project works
       | 
       | Here's the result: https://simonwillison.net/2025/Nov/20/nano-
       | banana-pro/#creat...
        
         | bn-l wrote:
         | Is the infographic accurate in terms of the way datasette
         | wprks?
        
           | OtherShrezzing wrote:
           | It's subtly incorrect. R/w permissions for example are
           | described incorrectly on some nodes.
        
             | mikepurvis wrote:
             | Then the question becomes, can it incorporate targeted
             | feedback, or is it a oneshot-or-bust affair?
             | 
             | My experience is that ChatGPT is very good at iterating on
             | text (prose, code) but fairly bad at iterating on images.
             | It struggles to integrate small changes, choosing instead
             | to start over from scratch, with wildly different results.
             | Thinking especially here of architectural stuff, where it
             | does a great job laying out furniture in a room, but when I
             | ask it to keep everything the same but change the colour of
             | one piece, it goes completely off the rails.
        
               | spike021 wrote:
               | I would assume it depends on how it generates the images.
               | 
               | I've used Claude to generate fairly simple icons and
               | launch images for an iOS game and I make sure to have it
               | start with SVG files since those can be defined as code
               | first. This way it's easier to iterate on specific
               | elements of the image (certain shapes need to be moved to
               | a different position, color needs to be changed, text
               | needs an update, etc.).
               | 
               | FWIW not sure how Nano Banana Pro works though.
        
               | fzysingularity wrote:
               | Claude does image generation in surprising ways - we did
               | a small evaluation [1] of different frontier models for
               | image generation and understanding, and Claude is by far
               | the most surprising in results.
               | 
               | [1] https://chat.vlm.run/showdown
               | 
               | [2] https://news.ycombinator.com/item?id=45996392
        
               | simonw wrote:
               | Nano Banana is really good at iterating on images, as
               | shown by the pancake skull example I borrowed from Max
               | Woolf: https://simonwillison.net/2025/Nov/20/nano-banana-
               | pro/#tryin...
               | 
               | I've tried iterating on slides with test on them a bit
               | and it seems to be competent at that too.
        
               | vunderba wrote:
               | You can use targeted feedback - but it's on the user to
               | verify whether the edits were completely localized. In my
               | experience NB mostly tends to make relatively surgical
               | edits but if you're not careful it'll introduce other
               | minute changes.
               | 
               | And that point you can either start over or just
               | feather/mask with the original in any Photoshop type
               | application.
        
           | gpmcadam wrote:
           | None of it was accurate.
           | 
           | But boy was it beautiful.
        
             | Kiro wrote:
             | Funny thing to say considering the author of Datasette
             | himself says it's accurate.
        
           | simonw wrote:
           | Almost entirely. I called out the one discrepancy in my post:
           | 
           | > "Data Ingestion (Read-Only)" is a bit off.
        
         | fudged71 wrote:
         | I've been really excited for you infographic generation.
         | Previous models from Google and openAI had very low
         | detail/resolution for these things.
         | 
         | I've found in general that the first generation may not be
         | accurate but a few rolls of the dice and you should have enough
         | to pick a style and format that works, which you can iterate
         | on.
        
         | skybrian wrote:
         | It didn't do so well at finding middle C on a piano keyboard:
         | 
         | https://gemini.google.com/share/c9af8de05628
         | 
         | I did manage to get one image of a piano keyboard where the
         | black keys were correct, but not consistently.
        
           | vunderba wrote:
           | I've tried similar stuff such as: _" Show a piano with an
           | outstretched hand playing a Emaj triad on the E, G#, and B
           | keys"._
           | 
           | https://imgur.com/ogPnHcO
           | 
           | Even generating a standard piano with 7 full octaves that are
           | consistent is pretty hard. If you ask it to invert the colors
           | of the naturals and sharps/flats you'll completely break
           | them.
        
             | Snuggly73 wrote:
             | reflection seems slightly wrong as well
        
           | gowld wrote:
           | Fooled me because it was _locally_ correct!
        
         | pseudosavant wrote:
         | It even worked really well at creating an infographic for one
         | of my quirkier projects which doesn't have that much
         | information online (other than its repo).
         | 
         | "An infographic explaining how player.html works (from the
         | player.html project on Github).
         | https://github.com/pseudosavant/player.html"
         | 
         | And then it made one formatted for social: "Change it to be an
         | infographic formatted to fit on Instagram as a 1:1 square
         | image."
        
         | ndkap wrote:
         | Did you check if the SynthID works when you edit the photos
         | with filters like GrayScale?
        
         | nrhrjrjrjtntbt wrote:
         | Game changer for architecture diagrams.
        
           | energy123 wrote:
           | I'm finding it bad at instruction following for architectural
           | specs (physical not software), where you tell it what goes
           | where, and it ignores you and does some average-ish thing
           | it's seen before. It looks visually appealing though.
        
         | JLO64 wrote:
         | This is legitimately game changing a feature in my SaaS where
         | customers can generate event flyers. Up until now I had Nano
         | Banana generate just a decorative border and had the actual
         | text be rendered via Pillow controlled by an LLM. The result
         | worked, but didn't look good.
         | 
         | That said, I wonder if text is only good in small chunks (less
         | than a sentence) or if it can properly render full sentences.
        
           | danielbln wrote:
           | It can render full sentences.
        
         | cubefox wrote:
         | It would be great if Google could make SynthID openly available
         | so OpenAI etc could also implement it. Then websites like
         | Facebook, or even local browsers, could implement an "AI
         | warning".
        
       | sd9 wrote:
       | It's crazy how good these models are at text now. Remember when
       | text was literally impossible? Now the models can diagetically
       | render any text. It's so good now that it seems like a weird blip
       | that it _wasn't_ possible before.
       | 
       | Not to mention all the other stuff.
        
         | psygn89 wrote:
         | I agree, it's improving by leaps. I'm still patiently awaiting
         | for my niche use of creating new icons though, one that can
         | match the existing curvature, weight, spacing, and balance. It
         | seems AI is struggling in the overlap of visuals <-> code, or
         | perhaps there's less business incentive to train on that front.
         | I know the pelican on bicycle svg is getting better, but still
         | really rough looking and hard to modify with prompt versus just
         | spending some time upfront to do it yourself in an editor.
        
         | glemmaPaul wrote:
         | I wonder; do you think these LLMs now rather have text tools,
         | or is this still straight out of the neural network? If its the
         | latter, thats incredibly impressive.
        
       | bilsbie wrote:
       | I've been struggling with infographics. That's my main use case
       | but every tool seems to bungle the text.
        
       | stefl14 wrote:
       | First model I've seen that was consistently compositional, easily
       | handling requests like
       | 
       | "Generate an image of an african elephant painted in the New
       | England flag, doing a backflip in front of the russian federal
       | assembly."
       | 
       | OpenAI made the biggest step change towards compositionality in
       | image generation when they started directly generating image
       | tokens for decoders from foundation llms, and it worked very well
       | (openais images were better in this regard than nano banana 1,
       | but struggled with some OOD images like elephants doing
       | backflips), but banana 2 nails this stuff in a way I haven't seen
       | anywhere else
       | 
       | if video follows the same trends as images in terms of prompt
       | adherence, that will be very valuable... and interesting
        
       | TheAceOfHearts wrote:
       | You can try it out for free on LMArena [0]: New Chat -> Battle
       | dropdown -> Direct Chat -> Click on Generate Image in the chat
       | box -> Click dropdown from hunyuan-image-3.0 -> gemini-3-pro-
       | image-preview (nano-banana-pro).
       | 
       | I've only managed to get a few prompts to go through, if it takes
       | longer than 30 seconds it seems to just time out. Image quality
       | seems to vary wildly; the first image I tried looked really good
       | but then I tried to refresh a few times and it kept getting
       | worse.
       | 
       | [0] lmarena.ai/
        
         | scottlamb wrote:
         | When I do that, I get two (very similar but not identical)
         | responses side-by-side in one image (I guess as if the model is
         | battling itself?). Is that normal for lmarena?
         | 
         | https://imgur.com/a/h0ncCFN
        
         | RobinL wrote:
         | Thanks - this worked for me (some errors, some success).
         | 
         | Last week I was making a birthday card for my son with the old
         | model. The new model is dramatically better - I'm asking for an
         | image in comic book style, prompted with some images of him.
         | 
         | With the previous model, the boy was descriptively similar
         | (e.g. hair colour and style) but looked nothing like him. With
         | this model it's recognisably him.
        
       | Fiveplus wrote:
       | What can nano-banana do that chatGPT made images can't? Or is it
       | only better for image editing from what I can gather from these
       | comments so far. I haven't used it so genuinely curious.
        
         | minimaxir wrote:
         | I made some direct comparisons my Nano Banana post
         | (https://news.ycombinator.com/item?id=45917875) but Nano Banana
         | can handle photorealistic photos with nuanced prompts much
         | better. And there is no yellow filter.
        
           | Fiveplus wrote:
           | Absolutely amazing pose, thanks for sharing!
        
         | sosodev wrote:
         | https://news.ycombinator.com/item?id=45890186
        
           | Fiveplus wrote:
           | Thanks!
        
         | seanw444 wrote:
         | > Nano Banana Pro is the best model for creating images with
         | correctly rendered and legible text directly in the image
        
       | embedding-shape wrote:
       | I tried the same prompt as one of the examples
       | (https://i.imgur.com/iQTPJzz.png), in the two ways they say you
       | can run it, via Google Gemini and Google AI Studio (I suppose
       | they're different somehow?). The prompt was "Create an
       | infographic that shows hot to make elaichi chai" and Google
       | Gemini created a infographic (https://i.imgur.com/aXlRzTR.png),
       | but it was all different from what the example showed. Google AI
       | Studio instead created a interactive website, again with
       | different directions: https://i.imgur.com/OjBKTkJ.png
       | 
       | There is not a single mention about accuracy, risks or anything
       | else in the blogpost, just how awesome the thing is. It's clearly
       | not meant to be reliable just yet, but not making this clear up
       | front. Isn't this almost intentionally misleading people,
       | something that should be illegal?
        
         | nerveband wrote:
         | Whoever said there was a universal recipe for Elaichi Chai? It
         | makes sense that there would be different recipes. If you are
         | more stringent with the prompt and give it the proper context
         | of what you want the steps to be, you'll arrive at that
         | consistency.
        
         | jessegeens wrote:
         | If it were illegal to intentionally mislead people, many
         | magicians would be out of a job :)
        
       | mattmaroon wrote:
       | Nano Banana has been the only model I've really loved. As a small
       | businesses who makes products, it's been a game changer on the
       | marketing side. Now when I've got something new I need to
       | advertise in a hurry, I take a crappy pic and fix it in that.
       | Don't have a perfect model ready yet? That's ok, I can just alter
       | to look exactly like it will.
       | 
       | What used to cost money and involve wait time is now free and
       | instant.
        
       | ashleyn wrote:
       | Does anyone know if this is predicting the entire image at once,
       | or if it's breaking it into constituent steps i.e. "draw text in
       | this font at this location" and then composing it from those
       | "tools"? It would be really interesting if they've solved the
       | garbled text problem within the constraint of predicting the
       | entire image at once.
        
         | johnecheck wrote:
         | I strongly suspect it's the latter, though someone please chime
         | in if I'm wrong.
         | 
         | Even so, this is a real advancement. It's impressive to see
         | existing techniques combined to meaningfully improve on SOTA
         | image generation.
        
         | scoopertrooper wrote:
         | The previous nano banana was using composing tools. It was
         | really obvious by some of the janky outputs it made. Not sure
         | about this one, but presumably they built off it.
        
         | teaearlgraycold wrote:
         | I'm pretty sure, but no expert on the matter, that correct text
         | rendering was solved by feeding in bitmaps of rasterized fonts
         | as supplemental context to the image generation models.
        
         | FergusArgyll wrote:
         | There still is some garbled text sometimes so it can't be the
         | latter (try to get it to generate a map of 48 us states labeled
         | - the ones that are too small to write on and need arrows were
         | garbled (1 attempt))
        
       | jayd16 wrote:
       | I was just playing with the non-pro version of this and it seems
       | to add both a Gemini and Disney watermark. Presumably this was
       | because I referenced beauty and the beast.
       | 
       | Anyone know if this is an hallucination or if they have some kind
       | of deal with content owners to add branding?
        
       | smusamashah wrote:
       | This is what the SynthID signature looks like on Nano Banana
       | images
       | https://www.reddit.com/r/nanobanana/comments/1o1tvbm/nano_ba...
       | 
       | And if it can be seen like that, it should be removeable too.
       | There are more examples in that thread.
        
       | isoprophlex wrote:
       | If only there was a straightforward way to pay google to use
       | this, with a not entirely insane UX...
        
       | standardly wrote:
       | Anyone else think "Nano Banana" is an awful name? For some reason
       | it really annoys me. It looks incredibly fancy, though.
        
       | egypturnash wrote:
       | Everyone who worked on this is a traitor to the human race. Why
       | do we need to make it impossible to make a living as an artist?
       | Who thinks an endless tsunami of garbage "content" churned out by
       | machines dropping the bottom out of all artistic disciplines is a
       | good idea?
        
         | deviation wrote:
         | Capitalism, at work. Wherever there is a cost, there will be
         | attempts made at cost efficiency. Google understands that
         | hiring designers or artists is expensive, and they want to
         | offer a cheaper, more effective alternative so that they can
         | capture the market.
         | 
         | In a coffee shop this morning I saw a lady drawing tulips with
         | a paper and pencil. It was beautiful, and I let her know... But
         | as I walked away I felt sad that I don't feel that when
         | browsing online anymore- because I remember how impressive it
         | used to feel to see an epic render, or an oil painting, etc...
         | I've been turned cynical.
        
         | apt-apt-apt-apt wrote:
         | On the flip side, it can be good for the environment. Instead
         | of spending tons of resources burning a car or doing a bunch of
         | setup to get a shot, we can prompt it using relatively fewer
         | energy resources.
        
         | user34283 wrote:
         | I do. Free art for everyone, and it's great.
        
           | CamperBob2 wrote:
           | B...b...b...but the _gate_! There 's nobody guarding the
           | gate!
        
         | cheema33 wrote:
         | > Everyone who worked on this is a traitor to the human race.
         | 
         | Have we felt this way for all other large scale advances in
         | human history?
        
           | rester324 wrote:
           | That's a question too generic. But yes, I guess? And people
           | get Nobel prizes to point out that said advances have been
           | causing the downfall of empires and nations.
        
         | AstroBen wrote:
         | To try to put a positive spin on it..
         | 
         | It enables smaller teams to put out better quality products
         | 
         | Imagine you're an artist that wants to create a video game but
         | you suck at development. You could leverage AI to get good
         | enough code and have amazing art
         | 
         | On the other side someone who invested their entire skill tree
         | in development can have amazing code and passable art
         | 
         | The more I think about it the more it seems this AI revolution
         | will hurt big companies the most. Most people have no hope of
         | competing with a AAA game studio because they don't have the
         | capital. Maybe this levels the playing field?
        
           | egypturnash wrote:
           | I _am_ an artist. I have friends who like to code. I could
           | leverage talking to my friends and saying  "hey anyone wanna
           | fool around and make some games". I could get Unreal and one
           | of the 800 game templates available on their store for prices
           | ranging from $0 to a few hundred bucks and start plopping my
           | art in there and fiddling around. There's a bazillion art
           | assets on there for the programmer with no art skills, too.
           | And there's a section on the Unreal forums for people to say
           | "hey I have this set of skills, who wants to make a game with
           | me?".
           | 
           | Or we could all just generate a bunch of completely
           | unmaintanable code or some uncopyrightable art, sounds great.
        
             | AstroBen wrote:
             | Your unpaid friend or a Unity game template is unlikely to
             | be enough to compete with medium+ scope games
             | 
             | Can't forget animation or sound either. Someone needs to
             | work on the actual game design too! Whose job is it for the
             | marketing? Hope someone has video editing skills to show it
             | off well. Who even did the market research at the start?
             | 
             | It's.. a lot. So normally you have to reallllyyy simplify
             | and constrain what you're capable of
             | 
             | AI might change that. Not now of course but one day?
        
           | t-writescode wrote:
           | Undertale Exists.
           | 
           | Baba is You Exists.
           | 
           | Nethack Exists (and similar games).
           | 
           | Dwarf Fortress Exists.
           | 
           | Mountains of Indie Horror games made of Unity Store assets
           | exist.
           | 
           | Coal, LLC exists.
           | 
           | Cookie Clicker Exists.
           | 
           | Balatro Exists.
        
             | AstroBen wrote:
             | And Stardew Valley... which took 4-5 years. Vampire
             | Survivors. I'm aware of these. They all have one thing in
             | common: limited in scope or massively simplified in some
             | area
             | 
             | Dwarf Fortress still has basically no animations after
             | close to 20 years in development, and spent most of its
             | life in ascii for good reason. The final art pack I'm
             | fairly sure was contracted out
             | 
             | That's my point. Larger scoped projects are gated by
             | capital or bigger founding teams. Maybe they don't have to
             | be. Maybe in the future 3 friends could build a viable
             | Overwatch competitor
        
               | t-writescode wrote:
               | PUBG?
        
               | AstroBen wrote:
               | ?
        
               | t-writescode wrote:
               | Playerunknown's Battlegrounds, although ChatGPT said that
               | was 30 people, which surprises me.
               | 
               | I'll defer to .... original Counter Strike and the
               | original Firearms mod
        
               | AstroBen wrote:
               | I mean, the point I'm trying to make is that AI could be
               | used to expand the scope of what can be achieved. I never
               | said that it's impossible to develop _any_ game without
               | it
        
         | asadm wrote:
         | upskill or gtfo.
        
           | maximinus_thrax wrote:
           | Is this a personal opinion, your opinion as a Google employee
           | or Google's position on the matter?
           | 
           | Have you cleared this statement with comms/PR?
           | 
           | Do you want OP to 'get the fuck out' if they don't upskill or
           | is this a general statement related to artists?
        
         | t-writescode wrote:
         | I want to piggyback off what you've said, but for *additional*
         | problems with this:
         | 
         | To me, this is terrifying. Major use-cases presented on this
         | page:                 * photo editing / post-processing       *
         | branding       * infographics
         | 
         | Photo editing and post-processing seems like the "least
         | harmful" version of this. Doing moderate color-space tweaks or
         | image extensions based on the images themselves seems like a
         | "relatively not-evil" activity and will likely make a lot of
         | artwork a bit nicer. The same technology will probably also be
         | able to be used to upscale photos taken on Pixel cameras, which
         | might be nice. MOSTLY. It'll also call into question any super-
         | duper-upscaled visuals when used as evidence for court and the
         | "accuracy of photos as facts" - see the fake stuff Samsung did
         | with the moon; but far, far more ubiquitous.
         | 
         | However, Branding and Infographics are where I have concerns.
         | 
         | Branding - it's AI art, so it can't be copyrighted, or are we
         | just going to forget that?
         | 
         | --
         | 
         | Infographics, though. We know that AI frequently hallucinates -
         | and even hallucinates citations themselves, so ... how can we
         | generated infographics if they're magicking into existence the
         | stats used in the infographics themselves?!
        
           | CamperBob2 wrote:
           | Copyright is done, for better or worse. Up until very
           | recently, many if not most HN'ers would have considered that
           | a GOOD thing.
        
         | CamperBob2 wrote:
         | (Shrug) If you expect to coast through an uneventful,
         | unchallenging career, neither art nor technology are going to
         | be great options for you. Learn to mine coal or something, I
         | guess.
         | 
         | Or... put your hands on the most amazing art tools since the
         | Renaissance and go make something awesome.
        
           | bombdailer wrote:
           | No one using these tools will produce anything even a tenth
           | as impressive as what was born out of the Renaissance, since
           | their efforts were born of mastery, understanding, patience,
           | a keen eye, and a love of nature and life. One who outsources
           | their creativity and thinking to a machine will produce
           | meaningless 'art' as empty as the shrinking contours of their
           | mind as it withers away from non-use. Our world is not want
           | for more quantity as we already drown in excess, and the
           | quality and meaning inherit in masterful works of art born
           | out of ones own hands will one day once again find their way
           | to the center of our consciousness, as the world learns again
           | that the value of art lies not solely in its appearances, but
           | in its revelation of the human soul by means of Beauty, by
           | which a human endeavors by great effort and skill to impart
           | some aspect of their fleeting glimpse of the divine and
           | sacred nature of being, by which our being here now as people
           | of this earth and time consists of, which binds us all, now
           | and through our history and our future.
        
             | CamperBob2 wrote:
             | OK
        
       | CSMastermind wrote:
       | There's some really impressive things about this (the speed, the
       | lack of typical AI image gen artifacts) but it also seems less
       | creative than other models I've tried?
       | 
       | "mountain dew themed pokemon" is the first search prompt I always
       | try with new image models and Nano Banna Pro just gave me a green
       | pikachu.
       | 
       | Other models do a much better job of creating something new.
        
         | vunderba wrote:
         | IMHO I'd rather them focus on strong literal prompt adherence
         | so that more detailed prompts produce more accurate results.
         | 
         | That way you can stick your choice of any number of LLM
         | preprocessors in front of a generic prompt like "mountain dew
         | themed pokemon" and push the responsibility of creating a more
         | detailed prompt upstream.
         | 
         | https://imgur.com/a/s5zfxS5
         | 
         |  _Note: I 'm not particularly impressed with either of the
         | results - this is more a demonstration._
        
       | Bjorkbat wrote:
       | Something I find weird about AI image generation models is that
       | even though they no longer produce weird "artifacts" that give
       | away that the fact that it was AI generated, you can still
       | recognize that it's AI due to stylistic choices.
       | 
       | Not all examples they gave were like this. The example they gave
       | of the word "Typography" would have fooled me as human-made. The
       | infographics stood out though. I would have immediately noticed
       | that the String of Turtles infographic was AI generated because
       | of the stylistic choices. Same for the guide on how to make chai.
       | I would be "suspicious" of the example they gave of the weather
       | forecast but wouldn't immediately flag at as AI generated.
       | 
       | Similar note, earlier I was able to tell if something was AI
       | generated right off the bat by noticing that it had a "Deviant
       | Art" quality to it. My immediate guess is that certain sources of
       | training data are over-represented.
        
         | snek_case wrote:
         | I think it's because they're all trained on the same data
         | (everything they could possibly scrape from the open web). The
         | models tend to learn some kind of distribution of what is most
         | likely for a given prompt. It tends to produce things that are
         | very average looking, very "likely", but as a result also
         | predictable and unoriginal.
         | 
         | If you want something that looks original, you have to come up
         | with a more original prompt. Or we have to find a way to train
         | these models to sample things that are less likely from their
         | distribution? Find a way to mathematically describe what it
         | means to be original.
        
           | Terretta wrote:
           | If you ever had a pinterest account and a deviant art
           | account, all becomes clear.
        
           | dkural wrote:
           | Do you know of some tools with a parameter that asks it to be
           | "weird" and increase diversity of outputs?
        
             | Yokohiii wrote:
             | If you want a chance for real creativity, flexibility and
             | you have a decent gpu go local. Check out comfyui, download
             | models and play around. The mainstream services have zero
             | knobs to play around with, local is infinite.
        
           | Yokohiii wrote:
           | An more original prompt wont fix things. Modern base models
           | want to eliminate everything that puts their creators at
           | risk, which is anything that is clearly made by someone else,
           | more or less accurately reproducible. If you avoid decent
           | representation of any artist style, or anything/anyone that
           | is likely to go to court, you wont get the chance of an
           | creative synthesis either.
        
         | horhay wrote:
         | It still has some artifacts more often than not, they are a lot
         | subtler in nature but they still come out, whether it's
         | texture, proportion, lighting, or perspective. Now some things
         | are easier to fix on second pass edits, some are not. I guess
         | it's why they consider image editing to be the next challenge.
        
         | ralusek wrote:
         | It's a bit odd to say, but another big clue identifying
         | something as AI-generated is that it simply looks "too good"
         | for what it is being used for. If I see a little info graphic
         | demonstrating something relatively mundane, and it has nice 3D
         | rendered characters or graphical elements, at this point it's
         | basically guaranteed to be AI, because you just sort of
         | intuitively know when something would've justified the human
         | labor necessary to produce that.
        
           | raincole wrote:
           | It's not odd to say. It was one of the first telling signs to
           | identify AI artists[0] on Twitter: overly detailed
           | backgrounds.
           | 
           | Of course now a lot of them have learned the lesson and it's
           | much harder to tell.
           | 
           | [0]: I know, I know...
        
           | Bjorkbat wrote:
           | Funny enough that had crossed my mind with the woodchuck
           | example, because at a glance I can't see any weird artifacts,
           | but I felt confident I could tell it was AI generated
           | immediately if I saw it in the wild, and I couldn't really
           | explain why. My immediate guess was "well, who the hell would
           | actually bother to make something like this?"
        
         | mlsu wrote:
         | We are just very sharp when it comes to seeing small
         | differences in images.
         | 
         | I'm reminded of when the air force decided to create a pilot
         | seat that worked for everyone. They took the average body
         | dimensions of all their recruits and designed a seat to fit the
         | average. It turned out, the seat fit none of their recruits.
         | [1]
         | 
         | I think AI image generation is a lot like this. When you train
         | on all images, you get to this weird sort of average space. AI
         | images look like that, and we recognize it immediately. You can
         | prompt or fine tune image models to get away from this, though
         | -- the features are there it's a matter of getting them out.
         | Lots of people trying stuff like this: https://www.reddit.com/r
         | /StableDiffusion/comments/1euqwhr/re..., the results are nearly
         | impossible to distinguish from real images.
         | 
         | [1] https://www.thestar.com/news/insight/when-u-s-air-force-
         | disc...
        
           | bobbylarrybobby wrote:
           | What determines which "average" AI models latch onto? At a
           | pixel level, the average of every image is a grayish
           | rectangle; that's obviously not what we mean and AI does not
           | produce that. At a slightly higher level, the average of
           | every image is the average of every subject every
           | photographed or drawn (human, tree, house, plate of food,
           | ...) in concept space; but AI still doesn't generate a human
           | with branches or a house with spaghetti on it. At a still
           | higher level there are things we recognize as sensible
           | scenes, e.g., barista pouring a cup of coffee, anime scene of
           | a guy fighting a robot, watercolor of a boat on a lake, which
           | AI still does not (by default) average into, say, an equal
           | parts watercolor/anime/photorealistic image of a barista
           | fighting a robot on a boat while pouring a cup of coffee.
           | 
           | But it is undeniable that AI images do have an "average" feel
           | to them. What causes this? What is the space over which AI is
           | taking an average to produce its output? One possible answer
           | is that a finite model size means that the model can only
           | explore image space with a limited resolution, and as models
           | get bigger/better they can average over a smaller and smaller
           | portion of this space, but it is always limited.
           | 
           | But that raises the question of why models don't just
           | naturally land on a point in image space. Is this just a
           | limitation of training, which punishes big failures more
           | strongly than it rewards perfection? Or is there something
           | else at play here that's preventing models from landing
           | directly on a "real" image?
        
             | minimaxir wrote:
             | > At a pixel level, the average of every image is a grayish
             | rectangle; that's obviously not what we mean and AI does
             | not produce that.
             | 
             | That isn't correct since images in the real world aren't
             | uniformly distributed from [0, 255] color-wise. Take, for
             | example, the famous ImageNet normalization magic numbers:
             | normalize = transforms.Normalize(mean=[0.485, 0.456,
             | 0.406],
             | std=[0.229, 0.224, 0.225])
             | 
             | If it were actually uniformly distributed, the mean for
             | each channel would be 0.5 and the standard deviation would
             | be 0.289. Also due to z-normalization, the "image" most
             | image models see is not how humans typically see images.
        
             | azeirah wrote:
             | Isn't the space you're talking about the input images that
             | are close to the textual prompt?
             | 
             | These models are trained on image+text pairs. So if you
             | prompt something like "an apple" you get a conceptual
             | average of all images containing apples. Depending on your
             | dataset, it's likely going to be a photograph of an apple
             | in the center.
        
             | red75prime wrote:
             | The model "averages" in the latent space. That is in the
             | space of packed image representations. I put "averages"
             | into scare quotes, because I think it might be due to legal
             | reasons. The model training might be organized in such a
             | way as to push its default style away from styles of
             | prominent artists. I might be wrong though.
        
           | cyanf wrote:
           | Tragedy of the aggregate.
        
         | antirez wrote:
         | The problem is how they are fine tuned with human feedbacks
         | that are not opinionated, so they produce some "average taste"
         | that is very recognizable. Early models didn't have this issue,
         | it's a paradox... Lower quality / broken images but often more
         | interesting. Krea & Black Forest did a blog post about that
         | some time ago.
        
           | pixl97 wrote:
           | I wonder if we'll get to the point where we train different
           | personalities into an image model that we can bring out in
           | the prompt and these personalities have distinct art/picture
           | styles they produce.
        
           | Bjorkbat wrote:
           | Oh yeah, funny enough even though I'm a bit of an AI art
           | hater I actually thought very early Midjourney looked good
           | because of all had an impressionistic, dreamy quality.
        
         | Yokohiii wrote:
         | I don't think it's solely an data issue. Flux models for
         | example are quite stylized, very notable with photorealism. But
         | I think it was an deliberate choice to to have outputs that are
         | absent of likeness and distinct style. I think it's an side
         | effect that it washes away fine details and creates outputs
         | feel artificial. The problem is that closed models can't be
         | fixed easily, while models like flux or even older
         | architectures can add back details and style with fine tuning
         | and LoRas.
        
         | delifue wrote:
         | Maybe the AI feeling is illusion because you already know it's
         | AI-generated, just confirmation bias. Like wine tastes better
         | after knowing it's expensive. In real world AI-generated images
         | have passed Turing test. Only by double blind test do you can
         | be really sure.
        
         | quitit wrote:
         | We can also pick up hints on discordant production value. This
         | is quite noticeable on websites such as
         | Amazon/Alibaba/Etsy/Ebay/etc where there's a lot of scam
         | listings that use AI images for cheap or basic items.
         | 
         | So even though the image shown doesn't present obvious flaws,
         | the fact that the image is high quality is the tell-tale sign
         | of being AI generated.
         | 
         | This also isn't something that can be easily fixed - even if we
         | produce convincing low production value imagery using AI, then
         | the scam listing doesn't achieve its goal because it looks like
         | junky crap.
        
       | ceroxylon wrote:
       | Google has been stomping around like Godzilla this week, and this
       | is the first time I decided to link my card to their AI studio.
       | 
       | I had seen people saying that they gave up and went to another
       | platform because it was "impossible to pay". I thought this was
       | strange, but after trying to get a working API key for the past
       | half hour, I see what they mean.
       | 
       | Everything is set up, I see a message that says "You're using
       | Paid API key [NanoBanano] as part of [NanoBanano]. All requests
       | sent in this session will be charged." Go to prompt, and I get a
       | "permission denied" error.
       | 
       | There is no point in having impressive models if you make it a
       | chore for me to -give you my money-
        
         | wheelerwj wrote:
         | 100% this. I am using the pro/max plans on both claude and
         | openai. Would love to experiment with gemini but paying is next
         | to impossible. Why do i need the risk of a full blown gcp
         | project just to test gemini. No thx.
        
         | vunderba wrote:
         | If it's just the API you're interested in, Fal.ai has put Nano-
         | Banana-Pro up for both generative and editing. A great deal
         | less annoying to sign up for them since they're a pretty
         | generalized provider of lots of AI related models.
         | 
         | https://fal.ai/models/fal-ai/nano-banana-pro
        
           | LaurensBER wrote:
           | In general a better option, in the early days of AI video I
           | tried to generate a video of a golden retriever using
           | Google's AI Studio. It generated 4 in the highest quality and
           | charged me 36 bucks. Not a crazy amount but definitely an
           | unwelcome suprise.
           | 
           | Fal.ai is pay as you go and has the cost right upfront.
        
             | vunderba wrote:
             | 100% agreed. Same reason that I use the OpenRouter API for
             | most LLM usage.
        
             | minimaxir wrote:
             | Vertex AI Studio setting a default of 4 videos where each
             | video is several dollars to generate is a very funny
             | footgun.
        
           | echelon wrote:
           | There's the solution right there. Google is still growing its
           | AI "sea legs". They've turned the ship around on a dime and
           | things are still a little janky. Truly a "startup mode"
           | pivot.
           | 
           | While we're on this subject of "Google has been stomping
           | around like Godzilla", this is a nice place to state that I
           | think the tide of AI is turning and the new battle lines are
           | starting to appear. Google looks like it's going to lay waste
           | to OpenAI and Anthropic and claim most of the market for
           | itself. These companies do not have the cash flow and will
           | have to train and build their asses off to keep up with where
           | Google already is.
           | 
           | gpt-image-1 is 1/1000th of Nano Banana Pro and takes 80
           | seconds to generate outputs.
           | 
           | Two years ago Google looked weak. Now I really want to move a
           | lot of my investments over to Google stock.
           | 
           | How are we feeling about Google putting everyone out of work
           | and owning the future? It's starting to feel that way to me.
           | 
           | (FWIW, I really don't like how much power this one company
           | has and how much of a monopoly it already was and is
           | becoming.)
        
             | remich wrote:
             | Valid questions, but I'd say that it's hard to know what
             | the future holds when we get models that push the state of
             | the art every few months. Claude sonnet 3.7 was released in
             | _February_ of this year. At the rate of change we 're
             | going, I wouldn't be surprised if we end up with Sonnet 5
             | by March 2026.
             | 
             | As others have noted, Google's got a ways to go in making
             | it easier to actually use their models, and though their
             | recent releases have been impressive, it's not clear to me
             | that the AI product category will remain free from the bad,
             | old fiefdom culture that has doomed so many of their
             | products over the last decade.
        
             | toddmorey wrote:
             | We can't help but overreact to every new adjustment on the
             | leader boards. I don't think we're quite used to products
             | in other industries gaining and losing advantage so
             | quickly.
        
             | ants_everywhere wrote:
             | This is also my take on the market, although I also thought
             | it looked like they were going to win 2 years ago too.
             | 
             | > How are we feeling about Google putting everyone out of
             | work and owning the future? It's starting to feel that way
             | to me.
             | 
             | Not great, but if one company or nation is going to come
             | out on top in AI then every other realistic alternative at
             | the moment is worse than Google.
             | 
             | OpenAI, Microsoft, Facebook/Meta, and X all have worse
             | track records on ethics. Similarly for Russia, China, or
             | the OPEC nations. Several of the European democracies would
             | be reasonable stewards, but realistically they didn't have
             | the capital to become dominant in AI by 2025 even if they
             | had started immediately.
        
               | rl3 wrote:
               | > _OpenAI, Microsoft, Facebook /Meta, and X all have
               | worse track records on ethics._
               | 
               | I'd argue Google is evil as OpenAI (at least lately), but
               | I otherwise generally agree with your sentiment.
               | 
               | If Google does lay waste to its competitors, then I hope
               | said competitors open source their frontier models before
               | completely sinking.
        
           | SamBam wrote:
           | Is there a model on Fal.ai that would make it easy to sharpen
           | blurry video footage? I have found some websites, but
           | apparently they are mostly scammy.
        
             | brk wrote:
             | FYI that is an extremely challenging thing to do right.
             | Especially if you care about accuracy and evidentiary
             | detail. Not sure this is something that the current crop of
             | AI tools are really tuned to do properly.
        
               | mh- wrote:
               | This is a good point. Some of the tools have a "creative
               | mode" or "creativity" knob that hopefully drives this
               | point home. But the simpler ones don't, and even with
               | that setting dialed back it still has the same
               | fundamental limitations/risks.
        
             | vunderba wrote:
             | Unfortunately, this is a fairly difficult task. In my
             | experience, even SOTA models like Nano Banana usually make
             | little to no meaningful improvement to the image when given
             | this kind of request.
             | 
             | You might be better off using a dedicated upscaler instead,
             | since many of them naturally produce sharper images when
             | adding details back in - especially some of the GAN-based
             | ones.
             | 
             | If you're looking for a more hands-off approach, it looks
             | like Fal.ai provides access to the Topaz upscalers:
             | 
             | https://fal.ai/models/fal-ai/topaz/upscale/image
        
               | mh- wrote:
               | Seconding the Topaz recommendation. Although be aware
               | that is the Image upscaler model, and the parent
               | commenter asked about video.
               | 
               | Here's the Fal-hosted video endpoint:
               | https://fal.ai/models/fal-ai/topaz/upscale/video
               | 
               | They also offer (multiple; confusing product lineup!)
               | interactive apps for upscaling video on their own website
               | - Topaz Video and Astra. And maybe more, who knows.
               | 
               | I have access to the interactive apps, and there are a
               | _lot_ of knobs that aren 't exposed in the Fal API.
               | 
               | edit: lol I found a _third_ offering on the Topaz site
               | for this,  "Video upscale" within the Express app. I have
               | no idea which is the best, despite apparently having a
               | subscription to all of them.
        
             | benlivengood wrote:
             | You want a deconvolution pipeline like
             | https://bartwronski.com/2022/05/26/removing-blur-from-
             | images...
             | 
             | Or more likely https://www.cse.cuhk.edu.hk/~leojia/projects
             | /motion_deblurri... for video
        
             | k12sosse wrote:
             | I'm dimestore cheap, I'd be exploding to frames and
             | sharpening and reassembling with a ffmpeg>irfanview process
             | Lol. It would be awfully expensive to do it with an AI
             | model and the results would be expensive. Would a
             | photo/video editing suite do it? Google photos with a pro
             | script, or Adobe premiere elements, or would you be able to
             | do it in yourself in DaVinci resolve? Or are you talking
             | hundreds of hours of video?
        
         | bonoboTP wrote:
         | You can use it also in Gemini.
        
           | ceroxylon wrote:
           | It wasn't there when I first went to Gemini after the
           | announcement, but upon revisiting it gave me the prompt to
           | try Nano Banana Pro. It failed at my niche (rare palm trees).
           | 
           | Incredible technology, don't get me wrong, but still shocked
           | at the cumbersome payment interface and annoyed that enabling
           | Drive is the only way to save.
        
             | bonoboTP wrote:
             | > at the cumbersome payment interface and annoyed that
             | enabling Drive is the only way to save.
             | 
             | For the general audience, Gemini is the intended product,
             | API and AI studio is for advanced users. Gemini is very
             | easy to pay for. In Gemini, you can save all images as a
             | regular browser download by clicking the top right of the
             | image where it says "Download full size".
        
           | kashnote wrote:
           | I hate that they kinda try to hide the model version. Like if
           | you click the dropdown in the chat box, you can see that
           | "Thinking" means 3 Pro. When you select the "Create images"
           | tool, it doesn't tell you it's using Nano Banana Pro until it
           | actually starts generating the image.
           | 
           | Tell me the model it's using. It's as if Google is trying to
           | unburden me with the knowledge of what model does what but
           | it's just making things more confusing.
           | 
           | Oh, and setting up AI Studio is a mess. First I have to
           | create a project. Then an API key. Then I have to link the
           | API key to the project. Then I have to link the project to
           | the chat session... Come on, Google.
        
         | eboynyc32 wrote:
         | Yeah I was confused. I guess I'll stick with nano plum for now.
        
         | kavenkanum wrote:
         | Oh my, you should have tried to integrate with Google Prism.
         | That was a madness! Nano Banana was just a little tricky to set
         | up in comparison!
        
         | andybak wrote:
         | It's amazing that the "hard problems" are turning out to be
         | "not creating a completely broken user experience".
         | 
         | Is that going to need AGI? Or maybe it will always be out of
         | reach of our silicon overlords and require human input.
        
         | ProfessorZoom wrote:
         | I had to write a post request to try it when it launched
        
         | logankilpatrick wrote:
         | First off, apologies for the bad first impression, the team is
         | pushing super hard to make sure it is easy to access these
         | models.
         | 
         | - On permission issue, not sure I follow the flow that got you
         | there, pls email me more details if you are able too and happy
         | to debug: Lkilpatrick@google.com
         | 
         | - On overall friction for billing: we are working on a new
         | billing experience built right into AI Studio that will make it
         | super easy to add a CC and go build. This will also come along
         | with things like hard billing caps and such. The expected ETA
         | for global rollout is January!
        
           | xmprt wrote:
           | Please make sure that the new billing experience has support
           | for billing limits and prepaid balance (to avoid unexpected
           | charges)!
        
             | sandworm101 wrote:
             | Lol. Since the GirlsGoneWild people pioneered the concept
             | of automatically-recurring subscriptions, unexpected
             | charges and difficult-to-cancel billing is the game. The
             | best customer is always the one that pays but never uses
             | the service ... and ideally has forgotten or lost access to
             | the email address they used when signing up.
        
               | mrandish wrote:
               | > or lost access to the email address they used when
               | signing up.
               | 
               | Since Gmail controls access to tens of millions of
               | people's email, I'm seeing potential for some cross-team
               | synergy here!
        
               | _zoltan_ wrote:
               | tens of millions? I think you're severely underestimating
               | it.
        
           | brandon272 wrote:
           | Just a note that your HN bio says "Developer Relations
           | @OpenAI"
        
             | roflyear wrote:
             | Pretty funny! I wonder how much of a premium Google is
             | paying.
        
             | osn9363739 wrote:
             | I was interested. I does look like he just needs to update
             | that. His personal blog says google, and ex-openAI. But I
             | do feel like I have my tin foil on every time I come to HN
             | now.
        
             | Zenst wrote:
             | Sure it will get updated to same as Linkedin - Helping
             | developers build with AI at Google DeepMind.
             | 
             | Imagine many on here have out of date bio's and best part -
             | it don't matter, but sure can make some funnies at times.
        
               | jvolkman wrote:
               | Just search the r/bard or r/geminiai subreddits for
               | Logan. He's very famously a Google employee these days.
        
           | everdev wrote:
           | Maybe the team should push hard before releasing the product
           | instead of after to make it work.
        
             | harles wrote:
             | That's a pretty uncharitable take. Given the scale of their
             | recent launches and amount of compute to make them work, it
             | seems incredibly smooth. Edge cases always arise, and all
             | the company/teams can really do is be responsive - which is
             | exactly why I see happening.
        
               | recursive wrote:
               | Why should the scale of their recent launches be a given?
               | Who is requiring this release schedule?
        
               | rishabhaiover wrote:
               | the market
        
               | recursive wrote:
               | If it's a strategic decision, then its impacts should be
               | weighed in full. Not just the positives.
        
               | windexh8er wrote:
               | We're talking about Google right? You think they need a
               | level of charity for a launch? I've read it all at this
               | point.
        
               | tracker1 wrote:
               | A company with a literal embedded payment processor,
               | including subscription services for half of all mobile
               | users can't manage to take payments for their own public
               | facing services seems like a huge fucking failure to me.
               | 
               | Especially for software developer and tech influencer
               | focused markets.
        
               | lazide wrote:
               | It's a sign that getting the product out took priority
               | over getting paid for it.
               | 
               | Take that how you will.
        
               | tracker1 wrote:
               | Considering the product itself seems to be excessively
               | limited without actually getting paid for it, and the
               | paid tier itself having so many onboarding issues, as a
               | critical usage path, it's pretty bad.
               | 
               | This is in a $3.6 Trillion company, for a product they're
               | spending billions a quarter to develop, with specialized
               | employees making mid 6-figure to 7-figure salaries and
               | bonuses... you'd think _somebody_ has the right
               | connections into the departments that typically handle
               | the payment systems.
               | 
               | My expectations shoot up dramatically for organizations
               | that have all the funding they need to create something
               | "insanely great" in terms of user experience the further
               | they fall short... I don't know who the head of this
               | group/project/department/product is... but someone failed
               | at their job, and got payed excessively for this poor
               | execution.
        
             | lxgr wrote:
             | Imagining the counterfactual ("typical, the most polished
             | part of this service is the payment screen!"), it seems
             | hard to win here.
        
               | onion2k wrote:
               | No one should even notice the payment flow. This isn't
               | Stripe where the polish on the payment experience is a
               | selling point for the service. At Google, paying for
               | something should be a boring but quick process that works
               | and then gets out of the way.
               | 
               | It doesn't need to be good. It just need to be _not
               | broken_.
        
             | asah wrote:
             | But then we'd complain about Google being a slow moving
             | dinosaur.
             | 
             | "Move fast and break things" cuts both ways !
             | 
             | (ex-Google tech lead, who took down the Google.com
             | homepage... twice!)
        
               | bayarearefugee wrote:
               | Its not a new problem though, and its not just billing.
               | The UI across Gemini just generally sucks (across AI
               | Studio and the chat interfaces) and there's lots of
               | annoying failure cases where Gemini will just timeout and
               | stop working entirely midrequest.
               | 
               | Been like this for quite a while, well before Gemini 3.
               | 
               | So far I continue to put up with it because I find the
               | model to be the best commercial option for my usage, but
               | its amazing how bad modern Google is at just basic web
               | app UX and infrastructure when they were the gold
               | standard for such for like, arguably decades prior.
        
               | risyachka wrote:
               | We are talking here about the most basic things- nothing
               | AI related. Basic billing. The fact that it is not
               | working says a lot about the future of the product and
               | company culture in general (obviously they are not
               | product-oriented)
        
               | thehappypm wrote:
               | There's nothing basic about billing.
        
               | adrianN wrote:
               | It is basic in the sense that it is difficult to run a
               | business where billing doesn't work. It's not basic in
               | the ,,easy" sense.
        
               | risyachka wrote:
               | I mean this problem has been solved. Nothing new to it.
               | You just take a few weeks and implement it properly. No
               | surprises will come up.
        
               | atonse wrote:
               | Even though my post complaining about google's billing
               | and incoherent mess got so many upvotes, I'll be the
               | first to say that there is nothing basic about "give me
               | money".
               | 
               | Apart from the fact that what happens to the money when
               | it gets to google (putting it in the right accounts, in
               | the right business, categorizing it, etc), it changes
               | depending on who you're ASKING for money.
               | 
               | 1. Getting money from an individual is easy. Here's a
               | credit card page.
               | 
               | 2. Getting money from a small business is slightly more
               | complicated. You may already have an existing
               | subscription (google workspaces), just attach to it.
               | 
               | 3. As your customers get bigger, it gets more squishy.
               | Then you have enterprise agreements, where it becomes a
               | whole big mess. There are special prices, volume
               | discounts, all that stuff. And then invoice billing.
               | 
               | The point is that yes, we all agree that getting someone
               | to plop down a credit card is easy. Which is why
               | Anthropic and OpenAI (who didn't have 20 years of
               | enterprise billing bloat) were able to start with the
               | simplest use case and work their way slowly up.
               | 
               | But I AM sensitive to how hard this is for companies as
               | large and varied as Google or MS. Remember the famous
               | Bill Gates email where even he couldn't figure out how to
               | download something from Microsoft's website.
               | 
               | It's just that they are also LARGE companies, they have
               | the resources to solve these problems, just don't seem to
               | have the strong leadership to bop everyone on the head
               | until they make the billing simple.
               | 
               | And my guess is also that consumers are such a small part
               | of how they're making money (you best believe that these
               | models are probably beautifully integrated into the cloud
               | accounts so you can start paying them from day one).
        
               | 1dom wrote:
               | Given how many paid offerings Google has, and the
               | complexity and nuance to some of those offering (e.g.
               | AdSense) I am pretty surprised that Google don't have a
               | functioning drop in solution for billing across the
               | company.
               | 
               | If they do, it's failing here. The idea of a penny
               | pinching megacorp like Google failing technically even in
               | the penny pinching arena is a surprise to me.
        
               | AJ007 wrote:
               | My first thought was this is the whole thing about
               | managers at Google trying to get employees under other
               | managers fired and their own reports promoted -- but it
               | feels too similar to how fucked up all the account and
               | billing stuff is at Microsoft. This is what happens when
               | you try to "fix" something by layering on more complexity
               | and exceptions.
               | 
               | From past experience, the advertising side of the
               | business was very clear with accounts and billing. GCP
               | was a whole other story. The entire thing was poorly
               | designed, very confusing, a total mess. You really needed
               | some justification to be using it over almost everything
               | else (like some Google service which had to go through
               | GCP.) It's kind of like an anti-sales team where you buy
               | one thing because you have to and know you never want to
               | touch anything from the brand ever again.
        
               | montag wrote:
               | this way is better. Burn in public, burn much faster.
        
           | luke-stanley wrote:
           | I had the same reaction as them many months ago, the Google
           | Cloud and Vertex AI stuff namespacing is a too messy. The
           | different paths people might take to learning and trying to
           | use the good new models needs properly mapping out and fixing
           | so that the UX makes sense and actually works as they expect.
        
           | Workaccount2 wrote:
           | The fact that your team is worrying about billing
           | is...worrying. You guys should just be focused on the product
           | (which I love, thanks!)
           | 
           | Google has serious fragmentation problems, and really it
           | seems like someone else with high rank should be enforcing
           | (and have a team dedicated to) a centralized frictionless
           | billing system for customers to use.
        
           | mantenpanther wrote:
           | The new releases this week baited me into business ultra
           | subscription. Sadly it's totally useless for gemini 3 cli and
           | now also nano banana does not work. Just wow.
        
             | GenerWork wrote:
             | I bought a Pro subscription (or the lowest tier paid plan,
             | whatever it's called), and the fact that I had to fill out
             | a Google Form in order to request access to get Gemini 3
             | CLI is an absolute joke. I'm not even a developer, I'm a UX
             | guy who just likes playing around with seeing how models
             | deal with importing Figma screens and turn them into a
             | working website. Their customer experience is shockingly
             | awful, worse than OpenAI and Anthropic.
        
           | vessenes wrote:
           | Oh man, there is so, so much pain here. Random example - if
           | GOOGLE_GENAI_USE_VERTEXAI=true in your environment, woe
           | betide you if you're trying to use gemini cli with an API
           | key. Error messages don't match up with actual problems,
           | you'll be told to log in using the cli auth for google, then
           | you'll be told your API keys have no access.. It's just a
           | huge mess. I still don't really know if I'm using a vertex
           | API key or a non-vertex one, and I don't want to touch
           | anything since I somehow got things running..
           | 
           | Anyway vai com dios, I know that there's a fundamental level
           | of complexity deploying at google, and deploying globally,
           | but it's just really hard compared to some competitors.
           | Sadly, because the gemini series is excellent!
        
           | mattchew wrote:
           | I had pretty much written off ever my credit card to Google,
           | but a better billing experience and hard billing caps might
           | change that.
        
           | ukuina wrote:
           | Congrats on the move to Google!
           | 
           | Please allow me to rant to someone who can actually do
           | something about this.
           | 
           | Vertex AI has been a nightmare to simply sign up, link a
           | credit card, and start using Claude Sonnet (now available on
           | Vertex AI).
           | 
           | The sheer number of steps required for this (failed) user
           | journey is dizzying:
           | 
           | * AI Studio, get API key
           | 
           | * AI Studio, link payment method: Auto-creates GCP property,
           | which is nice
           | 
           | * Punts to GCP to actually create the payment method and link
           | to GCP property
           | 
           | * Try to use API key in Claude Code; need to find model name
           | 
           | * Look around to find actual model name, discover it is only
           | deployed on some regions, thankfully, the property was
           | created on the correct region
           | 
           | * Specify the new endpoint and API key, Claude Code throws
           | API permissions errors
           | 
           | * Search around Vertex and find two different places where
           | the model must be provisioned for the account
           | 
           | * Need to fill out a form to get approval to use Claude
           | models on GCP
           | 
           | * Try Claude Code again, fails with API quota errors
           | 
           | * Check Vertex to find out the default quota for Sonnet 4.5
           | is 0 TPM (why is this a reasonable default?)
           | 
           | * Apply for quota increase to 10k tokens/minute (seemingly
           | requires manual review)
           | 
           | * Get rejection email with no reasoning
           | 
           | * Apply for quota increase to 1 token/minute
           | 
           | * Get rejection email with no reasoning
           | 
           | * Give up
           | 
           | Then I went to Anthropic's own site, here's what that user
           | journey looks like:
           | 
           | * console.anthropic.com, get API key
           | 
           | * Link credit card
           | 
           | * Launch Claude Code, specify API key
           | 
           | * Success
           | 
           | I don't think this is even a preferential thing with Claude
           | Code, since the API key is working happily in OpenCode as
           | well.
        
             | leopoldj wrote:
             | You went further with GCP than I did. I was asked
             | repeatedly by support to contact some kind of a Google
             | sales team.
             | 
             | I get the feeling GCP is not good for individuals like I.
             | My friends who work with enterprise cloud have very high
             | opinion about their tech stack.
        
               | TheCraiggers wrote:
               | > I get the feeling GCP is not good for individuals like
               | I.
               | 
               |  _Google_ isn 't good for individuals _at all_. Unless
               | you 've got a few million followers or get lucky on HN,
               | support is literally non-existent. Anyone that builds a
               | business on Google is nuts.
        
             | ashishgupta2209 wrote:
             | Give up i think
        
             | te_chris wrote:
             | Then you actually use it! I dare someone to try and get
             | Gemini live vertex app working.
        
             | belter wrote:
             | I propose a new benchmark for Agentic AI...Be able to sign
             | up for a Google Service...
        
           | Wolf_Larsen wrote:
           | Hi, is your team planning on adding a spending cap? Last I
           | tried, there was no reasonable way to do this. It keeps me
           | away from your platform because runaway inference is a real
           | risk for any app that calls LLMs programatically.
        
           | camkego wrote:
           | Maybe if the sign up process encouraged people to send videos
           | (screen-side and user-side could be useful also), of their
           | sign-up and usage experience, the teams responsible for user
           | experience could make some real progress. I guess the
           | question is, who cares, or who is responsible in the
           | organization?
        
           | shostack wrote:
           | Hopefully the mobile version of AI Studio gets some
           | improvement. There are some pretty awful UI bugs that make it
           | really difficult to use in a mobile first manner.
           | 
           | Though I still managed to vibe code an app using nanobanana.
           | Now I just need to sort API billing with it so I can actually
           | use my app.
        
           | sixhobbits wrote:
           | This is nice that you know about the issue and are working on
           | it. I really appreciate all the new "Get api key" buttons
           | across google ai products that already makes it much easier
           | than setting up a cloud project and getting credentials json
           | files.
           | 
           | But I do think it's a general problem with Google products
           | that the solution is always to build a new one. There are
           | already like 8 ways to use and pay for Google AI and that
           | adds to the complexity of getting set up, so adding a new
           | simpler better option might make that all worse instead of
           | better
        
           | rapind wrote:
           | Dude. Let me give you my money. This isn't rocket science. I
           | don't want anything to do with Google Cloud or Google
           | Workspace or w/e it's called now. Let me just subscribe to
           | Gemini or Nano straight up.
           | 
           | This should be like 2 clicks.
        
           | SweetSoftPillow wrote:
           | Can we get free Nano Banana in AI studio at least in super
           | low resolution? For app building and testing purposes it will
           | be fine and cheap enough for you to make it possible?
        
           | __alexs wrote:
           | When we first started using Gemini for a new product a few
           | months ago you banned our entire GCP account from using at
           | all Gemini in the middle of a demo to our board. Doesn't seem
           | like things have improved all that much on the on boarding
           | front.
        
           | boppo1 wrote:
           | Just make it a VSCode plugin, I don't want to install a new
           | IDE (which is just VSCode anyway) to use your product. It
           | might be better than claude and chatgpt5.1 but not better
           | enough to justify me re-doing all my IDE configs.
        
             | xnx wrote:
             | There is a Gemini VSCode plugin: https://marketplace.visual
             | studio.com/items?itemName=Google.g...
        
           | elevatortrim wrote:
           | Any chance that this reflected to our company account instead
           | of AI Studio?
           | 
           | We want to switch to Gemini from Claude (for agentic coding,
           | chat UI, and any other employee-triggered scenarios) but the
           | pricing model is a complete barrier: How do we pay for a
           | monthly subscription with a capped price?
           | 
           | You launched Antigravity, which looks like an amazing product
           | that could replace Claude Code, but do I know I will be able
           | to pay for it in the same way I pay Claude, which is a simple
           | pay per month subscription?
        
           | neom wrote:
           | The permission thing happens to me too, but very
           | intermittently, usually a couple of hard refreshes of the tab
           | clears it up, sometimes I need to delete the conversation I'd
           | just tried to start and start a new conversation. I can't
           | remember the exact message, sometime like you don't have
           | permission or permission denied. If I had to guess it happens
           | 1 in 5 sessions I load. The API key stuff would be a lot
           | easier if it landed you on the correct page in the GCP portal
           | when it directs you out of AI studio, I think that is the
           | most confusing part of the experience, you end up on what
           | seems like a random GCP billing page with no clear indication
           | as to what it has to do with API keys.
        
           | arendtio wrote:
           | Since 3 days I am trying to get a login to Antigravity and
           | first there was trouble with an api now all I get is 'Your
           | current account is not eligible for Antigravity. Try signing
           | in with another personal Google account'. Even though it is
           | verified and in a supported region...
        
         | abbycurtis33 wrote:
         | Same, I couldn't give them my money.
        
         | kennethologist wrote:
         | Easiest way is to go https://aistudio.google.com/api-keys set
         | up an api key and add your billing to it.
        
           | lostmsu wrote:
           | Does this work on non-personal Google accounts?
        
         | re5i5tor wrote:
         | Ha, I have been steeling myself for a long chat with Claude
         | about "how the F to get AI Studio up and working." With paying
         | being one of the hardest parts.
         | 
         | Without a doubt one essential ingredient will be, "you need a
         | Google Project to do that." Oh, and it will also definitely
         | require me to Manage My Google Account.
        
         | herval wrote:
         | Google APIs in general are hilariously hard to adopt. With any
         | other service on the planet, you go to a platform page, grab an
         | api key and you're good to go.
         | 
         | Want to use Google's gmail, maps, calendar or gemini api?
         | Create a cloud account, create an app, enable the gmail
         | service, create an oauth app, download a json file. Cmon now...
        
           | fx1994 wrote:
           | Yeah, I'm not a dev and not using AI at all but had a need to
           | create oauth keys and some APIs for some project... sometimes
           | it works sometimes it doesnt and it's so complicated...but
           | got it working in the end, thos it stops working after some
           | time, it was like, Google, really?
        
           | creesch wrote:
           | Don't forget the tradition of having to migrate to a new API
           | after a while because this one gets deprecated for "reasons".
           | Not just a newer version, but a complete non backwards
           | compatible new API that also requires its own setup.
           | 
           | To be fair, that might have changed in recent years. But
           | after having to deal with that a few times for a few hobby
           | projects I simply stopped trying. Can't imagine how it is for
           | companies making use of these APIs. I guess it provides work
           | for teams on otherwise stable applications...
        
           | archon810 wrote:
           | When I wake up in cold sweat in the middle of the night, it's
           | because I interacted with yhr Google cloud management UI
           | before I went to sleep.
        
         | nikcub wrote:
         | There is an entire business opportunity in just building better
         | user and developer frontends to Google's AI products. It's so
         | incredibly frustrating.
        
           | shooker435 wrote:
           | lol that's our whole company, Nimstrata
        
         | rustystump wrote:
         | How long till ai studio is in the graveyard i wonder? For real
         | google has some of the most amazing tech but jfc do they suck
         | at making a product.
         | 
         | The only way i use google is via an api key which billing for
         | is arcane to be charitable. How can billions not crack the
         | problem of quickly accepting cash from customers? Surely their
         | ads platform does this?
        
         | tianshuo wrote:
         | Try fal.ai instead, it has all image models.
        
         | mindcrime wrote:
         | So much this. The entire experience around using Google's AI
         | API's is a complete shit-show. I was
         | (stubborn|obstinate|stupid|whatever) enough to keep dicking
         | around until I actually got some stuff working (a few weeks
         | ago) but I still feel dirty from the whole process. And I still
         | don't know what I'm using (Gemini? AI Studio? Vertex? GCP?
         | Other??) or how all of this crap relates.
         | 
         | And FSM forbid I have another time when my debit card number
         | gets compromised and I have to try changing it with Google.
         | That was even MORE painful than just trying to get things
         | working in the first place. WTF am I editing, my GCP account or
         | my Google account? Are those two different things? Yes? No?
         | Sort of? But they're connected, somehow... right? I mean, I
         | disable my card in one place, but find that billing is still
         | trying to go to it anyway. And then I find another place on
         | another Google page that mentions that card, but when I try to
         | disable it I get some opaque error about "can't disable card
         | because card is already in use. Disable card first" or
         | whatever.
         | 
         | I can't even... I mean, shit. It's hard to imagine creating an
         | experience that is that bad even if you were _trying_ to do so.
         | 
         | Let me just say, I won't be recommending Google's AI API's, or
         | GCP, or Vertex, or any of this stuff to anybody, anytime soon.
         | I don't care how good their models are.
         | 
         | At least chatting with Gemini at gemini.google.com works. So
         | far that's about the only thing AI related from Google I've
         | seen that doesn't seem like a complete cluster-f%@k.
        
         | windex wrote:
         | >I decided to link my card to their AI studio.
         | 
         | A lot of us did this in the last 2 days. Gemini3 first and now
         | this.
        
         | nick49488171 wrote:
         | As a small advertiser, it can be surprisingly hard to give them
         | money sometimes. (Trying to advertise an Airbnb.)
        
         | ph4rsikal wrote:
         | You should try contacting customer service.
        
         | stared wrote:
         | I ended up using OpenRouter (which I use anyway).
        
       | bespokedevelopr wrote:
       | It's interesting, I'm trying to use it to create a themed collage
       | by providing a few images and it does that wonderfully, but in
       | the process it is also hallucinating the images I use so I end up
       | with weird distorted faces. Other tools can do this without
       | issue, but something about faces in images this model just has to
       | modify them every time. Ask it to remove background objects and
       | the faces get distorted as well.
       | 
       | Using it for non-people involved images and it's pretty good
       | although I haven't done much and it isn't doing anything
       | 2.5-flash wasn't already doing in the same amount of requests.
        
       | visioninmyblood wrote:
       | If Nano-Banana-pro with Veo 3.1 existed during my PhD, I would've
       | finished a 6-year dissertation in a single year -- it's
       | generating ideas today that used to take me 18 months just to
       | convince people were possible.
        
         | zachwass4856 wrote:
         | The person in the background's face is odd haha
        
         | sealeck wrote:
         | What was your dissertation, and how would Nano-Banana-pro with
         | Veo 3.1 have helped it?
        
           | visioninmyblood wrote:
           | I was working on semantic segmentation. used to spend a long
           | time creating graphics for presenting at conferences. I had a
           | link showing the results but people were saying i was sharing
           | too many links so deleted the link. But these tools with
           | chatgpt can write a paper in a week which i used to take 6
           | months to do
        
       | Aman_Kalwar wrote:
       | Really interesting. Curious what the main design motivation
       | behind this project was and what gaps it fills compared to
       | existing tools?
        
       | sarbajitsaha wrote:
       | Slightly off topic, but how are people creating long videos like
       | 30 second videos that I often see on Instagram? It I try to use
       | Veo to make split videos, it simply cannot maintain the style or
       | weird quirks get into the subsequent videos. Is there anything
       | else that's the best video generation model currently other than
       | Veo?
        
         | spaceman_2020 wrote:
         | Longer videos without cuts are usually made from the first/last
         | frame feature available in Veo 3.1 and other video models like
         | Kling 2.5
        
       | gajus wrote:
       | Will be interesting to see how this model performs in real-world
       | creative tasks. https://creativearena.ai/
        
       | cyrusradfar wrote:
       | I really hope Google reads these HN posts. They've had some big
       | "product" wins but the pricing, packaging, and user system is a
       | severe blocker to growth. If developers can't or won't figure it
       | out -- how the heck are consumers?
        
         | energy123 wrote:
         | And both their consumer apps are slow. You can replicate this
         | yourself. Go to AI Studio, paste in 80K tokens of text, then
         | type something on your keyboard, and see what happens. The
         | Gemini web app is even worse somehow. A horrifically slow and
         | buggy app. Not new problems either, barely any improvement on
         | this over more than 1 year.
        
           | user34283 wrote:
           | No issues here that I remember with the Gemini app on Android
           | recently - half a year ago it was a slideshow with just a few
           | conversations.
           | 
           | They're improving, probably.
        
             | energy123 wrote:
             | What context size? I ran into issues especially with 80k or
             | more
        
       | indigodaddy wrote:
       | I don't understand the excitement around generating and/or
       | watching AI-produced videos. To me it's probably the single most
       | uninteresting and boring thing related to AI that I can think of.
       | What is the appeal?
        
         | jsphweid wrote:
         | Pretty sure Nano Banana only produces images.
         | 
         | Nonetheless, ask it to "create an infographic on how Google
         | works". Do you not see any excitement in the result? I think
         | it's pretty impressive and has a lot of utility.
        
           | t-writescode wrote:
           | Until people ask it to make convincing misinformation.
           | Pretty, professional looking graphs are already hard to
           | resist.
        
         | tyurok wrote:
         | As a general content I agree it's a bit off putting, but I find
         | it a lot of fun when generating content among friends like
         | internal jokes and educational content. I got my kid to drink
         | some meds by generating an image of a hero telling him it's
         | important to take.
        
         | bitpush wrote:
         | Do you feel the same way about VFX (marvel etc) or animated
         | movies (pixar etc)
        
           | jckahn wrote:
           | I do. I miss practical effects; they were much more
           | entertaining.
        
             | dyauspitr wrote:
             | I don't, they seem campy and reduce the gravitas compared
             | to very well done CGI. Infact, I feel like it has the same
             | effect at poorly done CGI.
        
         | lern_too_spel wrote:
         | Sometimes, an animation is the best way to convey information.
        
         | vagab0nd wrote:
         | Thoughts on photography when it first appeared:
         | 
         | "Not by the taking of a picture of any specific object, but by
         | the way in which any random object could be made to appear on
         | the photographic plate. This was something of such unheard-of
         | novelty that the photographer was delighted by each and every
         | shot he took, and it awakened unknown and overwhelming emotions
         | in him..."
        
       | chaosprint wrote:
       | In my limited testing, at least in terms of maintaining
       | consistency between input and output for Asian faces, it has even
       | regressed.
       | 
       | Actually, Gemini 3 is about the same, and doesn't feel as good as
       | Claude 4.5. I have a feeling it's been fine-tuned for a cool
       | front-end marketing effect.
       | 
       | Furthermore, I really don't understand why AI Studio, now
       | requiring me to use its own API for payment, still adds a
       | watermark.
        
       | vunderba wrote:
       | Alright results are in! I've re-run all my editing based
       | adherence related prompts through Nano Banana Pro. NB Pro managed
       | to successfully pass SHRDLU, the M&M Van Halen test (as verified
       | independently by Simon), and the Scorpio street test - all of
       | which the original NB failed.                 Model results
       | 1. Nano Banana Pro: 10 / 12       2. Seedream4: 9 / 12       3.
       | Nano Banana: 7 / 12       4. Qwen Image Edit: 6 / 12
       | 
       | https://genai-showdown.specr.net/image-editing
       | 
       | If you just want to see how NB and NB Pro compare against each
       | other:
       | 
       | https://genai-showdown.specr.net/image-editing?models=nb,nbp
        
         | Wyverald wrote:
         | thanks, I love your website. Are you planning to do NB Pro for
         | the text-to-image benchmark too?
        
           | vunderba wrote:
           | Definitely! Even though NB's predominant use case seems to be
           | editing, it's still producing surprisingly decent text-to-
           | image results. Imagen4 currently still comes out ahead _in
           | terms of image fidelity_ , but I think NB Pro will close the
           | gap even further.
           | 
           | I'll try to have the generative comparisons for NB Pro up
           | later this afternoon once I catch my breath.
        
           | vunderba wrote:
           | Outside the time frame of being able to edit my original
           | reply, but I've finally re-run the Text-to-Image portion of
           | the site through NB Pro.                 Results
           | gpt-image-1: 10 / 12        Nano Banana Pro: 9 / 12
           | Nano Banana: 8 / 12
           | 
           | It's worth mentioning that even though it only scored
           | slightly better than the original NB, many of the images are
           | significantly better looking.
           | 
           | https://genai-showdown.specr.net?models=nb,nbp
        
             | Wyverald wrote:
             | thanks for the update. One small note: for the d20 test, NB
             | Pro had duplications of 13 and 17 too, not just 19.
        
               | vunderba wrote:
               | Good catch - I've been staring at these images so long
               | day I'm starting to get the equivalent of "Tetris
               | Effect"!
               | 
               | https://en.wikipedia.org/wiki/Tetris_effect
        
             | happyopossum wrote:
             | Awesome test suite. For the maze though, not sure it's fair
             | to knock it for extra dashed lines as the prompt didn't
             | specify that _only_ the correct path should have one...
        
         | humamf wrote:
         | The pisa tower test is really interesting. Many of this prompt
         | have stricter criteria with implicit knowledge and some models
         | impressively pass it. Yet for something as obvious as
         | straightening a slanted object is hard even for latest models.
        
           | kridsdale3 wrote:
           | I suspect there'd be no problem rotating a different object.
           | But this tower is EXTREMELY represented in the training data.
           | It's almost an immutable law of physics that Towers in Pisa
           | are Leaning.
        
             | gridspy wrote:
             | It's also a tower that has famously been deliberately un-
             | straightend just enough to remain a tourist attraction
             | while remaining stable.
        
               | steadicat wrote:
               | What?!? The tower was slightly _straightened_ for safety
               | reasons. It was never intentionally made to lean more.
        
         | sosodev wrote:
         | I think Nano Banana Pro should have passed your giraffe test.
         | It's not a great result but it is exactly what you asked for.
         | It's no worse than Seedream's result imo.
        
           | kevlened wrote:
           | I agree. From where I'm sitting, Seedream just bent the neck
           | while Nano Banana Pro actually shortened the neck.
        
           | vunderba wrote:
           | Yeah I think that's a fair critique. It kind of looks like a
           | bad cut-and-replace job (if you zoom in you can even see part
           | of the neck is missing). I might give it some more _attempts_
           | to see if it can do a better job.
           | 
           | I agree that Seedream could definitely be called out as a
           | fail since it might just be a trick of perspective.
        
             | sefrost wrote:
             | Have you ever considered a "partial pass"?
             | 
             | Perhaps it would be an easy cop out of making a decision if
             | you had to choose something outside of pass/fail.
        
               | vunderba wrote:
               | That's not a bad suggestion. I thought about adding a
               | numerical score but it felt like it was bit overwhelming
               | at the time. Maybe I should revisit it though in the form
               | of:                 Fail = 0 points       Partial = 0.5
               | points       Success = 1 point
               | 
               | There's definitely a couple of pictures where I feel like
               | I'm at the optometrist and somehow failing an eye exam (1
               | or 2, A... or B).
        
               | jofzar wrote:
               | I agree with this, some of those are "passing" and others
               | are really passing. Specially with how much better some
               | of the new model is compared to old ones.
               | 
               | I think the paws one is a good example where I think the
               | new model got 100% while the other was more like 75%
        
           | jonplackett wrote:
           | Yeah it's better than the weirdness of seedream for sure.
        
           | aqme28 wrote:
           | I don't understand at all why Seedream gets a pass there. The
           | neck appears the same length but now it's at a different
           | angle.
        
             | vunderba wrote:
             | Alright I think it's time to concede defeat! Seedream has
             | been summarily demoted to a failure and I've added in the
             | following minimum passing criteria to that particular test:
             | 
             |  _- The giraffe 's neck should be noticeably shorter than
             | in the original image, while still maintaining a natural
             | appearance._
             | 
             |  _- The final image cannot be accomplished by simply
             | cropping out the neck or using perspective changes._
        
         | Nifty3929 wrote:
         | Would you leave one of the originals in each test visible at
         | all times (a control) so that I can see the final image(s) that
         | I'm considering and the original image at the same time?
         | 
         | I guess if you do that then maybe you don't need the cool
         | sliders anymore?
         | 
         | Anyway - thanks so much for all your hard work on this. A very
         | interesting study!
        
         | tylervigen wrote:
         | I think Nano banana pro's answer to the giraffe edit is far
         | superior to the Seedream response, but you passed Seedream and
         | failed NB pro.
         | 
         | Maybe that one is just not a good test?
        
           | tziki wrote:
           | I agree, it seems like Seedream has the neck at same length
           | as Nano Banana but also made the giraffe crouch down, making
           | a major modification to the overall picture.
        
           | strbean wrote:
           | If you look closely, the NBP giraffe has a gaping hole in
           | it's neck.
        
             | IncreasePosts wrote:
             | maybe that's just how his mom built him
        
           | robertwt7 wrote:
           | yeah i agree, the prompt is to "shorten the giraffe's neck
           | length", not to bent it. i feel like the Gemini 3 produces
           | better result on that one
        
           | handsclean wrote:
           | I thought so too at first, but zoom in to where the neck
           | joins the head. What looks like the head's shadow from a
           | distance is actually a hard seam between thick neck and thin
           | neck, with much of the apparent shadow actually a cutout
           | showing the background.
           | 
           | Looks like the Seedream result here has been changed to fail,
           | which I'd agree with, too. Pose change complaints aside, I
           | think that neck is actually the same length were it held
           | straight.
        
         | dyauspitr wrote:
         | Seedream generally looks like low quality outputs and it
         | doesn't seem like you're assigning points for quality. This is
         | only marginally helpful.
        
           | vunderba wrote:
           | That's because, for the most part, I'm not:
           | 
           |  _" A comparison of various SOTA generative image models on
           | specific prompts and challenges with a strong emphasis placed
           | on adherence."_
           | 
           | Adherence is the more interesting problem, in my opinion,
           | because quality issues can be ameliorated through the use of
           | upscalers, refiner models, LoRAs, and similar tools.
           | Furthermore, there are already a thousand existing benchmarks
           | obsessed with visual fidelity.
        
             | dyauspitr wrote:
             | I mean there's a huge difference between a model that
             | throws a black spot on someone's head and another one that
             | fills it with hair indistinguishable from the real thing.
             | Which is why I'm saying this methodology is only marginally
             | useful.
        
         | rl3 wrote:
         | _" Remove all the trash from the street and sidewalk. Replace
         | the sleeping person on the ground with a green street bench.
         | Change the parking meter into a planted tree."_
         | 
         | Three sentences that do a great job summing up modern big tech.
         | The new model even manages to [digitally] remove all trash.
        
           | noduerme wrote:
           | The better to sell you real estate...
        
           | andrepd wrote:
           | Yep, no need for actual urbanism or to worry about the
           | homeless, now governments and realtors can lie to you more
           | conveniently and at an industrial scale! Yay future
        
         | noduerme wrote:
         | I had to look up what a "skifter" is. An AI answer showed that
         | it's Norwegian for a switch.
         | 
         | I'm curious, does the word have a further meaning in the
         | context of _cheating_ at cards?
        
           | vunderba wrote:
           | It's an admittedly obscure reference to a cheating technique
           | used in the Star Wars card game sabacc, which allows a player
           | to surreptitiously switch out a card. I'm pretty sure I
           | picked it up from one of Timothy Zahn's Thrawn books when I
           | was a kid.
           | 
           | But I didn't know it had a meaning in Norwegian, so I guess
           | TIL!
        
             | noduerme wrote:
             | Hah. I loved those Timothy Zahn books. Don't remember that
             | one, though!
        
         | handsclean wrote:
         | Please consider changing pass/fail to an integer score out of
         | maybe 5. This test is becoming more and more misleading as your
         | apparent desire to give due credit conflicts with quality
         | improvements over already ok-ish models. For example, on the
         | great wave Gemini 3's excellent rendition gets no additional
         | credit over Qwen technically not failing if one is generous,
         | and on cards, there's actually no score distinction between
         | results that one could or could not use.
        
         | tiagod wrote:
         | Cool site, thanks! By the way, the "Before" and "After" buttons
         | are swapped.
        
       | Nemi wrote:
       | I feel like I am going crazy or missed something simple but when
       | I use the Gemini app and I ask it to edit a photo that I upload,
       | 2.5 flash works really well but 2.5 pro or 3.0 pro do a very poor
       | job. I uploaded an image of me and asked it to make me bald and
       | flash did a great job of just changing me in the photo but 3.0
       | pro took me out of the photo completely and just created a
       | headshot of a bald man that only sort of resembled me. Am I
       | missing something or does paying for the pro version not give you
       | anything over the 2.5 flash model?
        
         | jiggawatts wrote:
         | The code name "nano banana" model is based on the Flash 2.5
         | foundation. Until today it was the "latest and greatest".
        
       | jjcm wrote:
       | One of the things I've always been curious about is how effective
       | diffusion models can be for web and app design. They're generally
       | trained on more organic photos, but post-training on SDXL and
       | Flux have given me good results here in the past (with the
       | exception of text).
       | 
       | It's been interesting seeing the results of Nano Banana Pro in
       | this domain. Here are a few examples:
       | 
       | Prompt: "A travel planner for an elegant Swiss website for luxury
       | hiking tours. An interactive map with trail difficulty and
       | booking management. Should have a theme that is alpine green,
       | granite grey, glacier white"
       | 
       | Flux output:
       | https://fal.media/files/rabbit/uPiqDsARrFhUJV01XADLw_11cb4d2...
       | 
       | NBP output:
       | https://v3b.fal.media/files/b/panda/h9auGbrvUkW4Zpav1CnBy.pn...
       | 
       | ---
       | 
       | Prompt: "a landing page for a saas crypto website, purple
       | gradient dark theme. Include multiple sections, including one for
       | coin prices, and some graphs of value over time for coins, plus a
       | footer"
       | 
       | Flux output:
       | https://fal.media/files/elephant/zSirai8mvJxTM7uNfU8CJ_109b0...
       | 
       | NBP output:
       | https://v3b.fal.media/files/b/rabbit/1f3jHbxo4BwU6nL1-w6RI.p...
       | 
       | ---
       | 
       | Prompt: "product launch website for a development tool, dark
       | background with aqua blue and neon gold highlights, gradients"
       | 
       | Flux output:
       | https://fal.media/files/zebra/aXg29QaVRbXe391pPBmLQ_4bfa61cc...
       | 
       | NBP output:
       | https://v3b.fal.media/files/b/lion/Rj48BxO2Hg2IoxRrnSs0r.png
       | 
       | ---
       | 
       | Note that this is with a lora I built for flux specifically for
       | website generation. Overall, nbp seems to have less creative /
       | inspired outputs, but the text is FAR better than the fever dream
       | Flux is producing. I'm really excited to see how this changes
       | design. At the very least it proved it can get close to a
       | production quality for output, now it's just about tuning it.
        
       | semiinfinitely wrote:
       | "Talk to your Google One Plan Manager"
       | 
       | wtf
        
       | nhhvhy wrote:
       | Yuck. The last thing the world needs is another slop generator
        
       | 1970-01-01 wrote:
       | The naming is somehow getting worse. I swear we will soon see
       | models that are named just with emojis.
        
       | mogomogo19292 wrote:
       | Still seems to mess up speech bubbles in comic strips
       | unfortunately
        
       | Zenst wrote:
       | When my first thought was of an SBC, then a media AI cloud
       | product was not high up on my guess list.
        
       | user34283 wrote:
       | The visual quality of photorealistic images generated in the
       | Gemini app seems terrible.
       | 
       | Like really ugly. The 1K output resolution isn't great, but on
       | top of that it looks like a heavily compressed JPEG even at 100%
       | viewing size.
       | 
       | Does AI Studio have the same issue? There at least I can see 2K
       | and 4K output options.
        
         | simonw wrote:
         | I have a couple of 25MB PNG 4K images from AI Studio here:
         | 
         | https://drive.google.com/file/d/1QV3pcW1KfbTRQscavNh6ld9PyqG...
         | 
         | https://drive.google.com/file/d/18AzhM-BUZAfLGoHWl6MQW_UW9ju...
        
           | gloosx wrote:
           | At close-up inspection - 8x8 JPEG compression blocks are not
           | going anywhere even with those "4k PNG images"
           | 
           | Seems like a fundamental flaw with image-models is that they
           | will always output something resembling a JPEG
        
       | ionwake wrote:
       | I am extremely impressed by google this week.
       | 
       | I dont want to be annoying, its just a small piece of feedback,
       | but srsly why is it so hard for google to have a simple
       | onboarding experience for paying customers?
       | 
       | In the past I spoke about how my whole startup got taken offline
       | for days because I "upgraded" to paying, and that was a decade
       | ago. I mean it cant be hard, other companies dont have these
       | issues!
       | 
       | Im sure it will be fixed in time, its just a bit bizarre. Maybe
       | its just not enough time spent on updating legacy systems between
       | departments or something.
        
       | AmbroseBierce wrote:
       | 2D animators can still feel safe about their job, I asked it to
       | generate a sprite sheet animation by giving it the final frame of
       | the animation (as a PNG file) and asking in detail what I wanted
       | in the spritesheet, it just gave me mediocre results, I asked for
       | 8 frames and it just repeated a bunch of poses just to reach that
       | number instead of doing what a human would have done with the
       | same request, meaning the in-betweens to make the animation
       | smoother (AKA interpolations)
        
         | Yokohiii wrote:
         | With local models you can use control net, which is simply
         | speaking, the model trying to adhere to a given
         | wireframe/openpose. Which is more likely to give you an stable
         | result. I have no experience with it, just wanted to point out
         | that there is tooling that is more advanced.
        
         | red75prime wrote:
         | At least until someone decides to fine-tune a general purpose
         | model to the task of animation.
        
           | BoorishBears wrote:
           | Yeah reading this I was thinking, we've got Qwen-Image-Edit
           | which is an image model with an LLM backbone that takes well
           | to finetuning.
           | 
           | I'd be surprised if you can't get a 80%/20% result in a
           | weekend, and even that probably saves you some time if you're
           | just willing to pick best-of-n results
        
             | AmbroseBierce wrote:
             | The person behind www.pixellab.ai has been trying to make a
             | SaaS out of that idea for about 2 years already and it just
             | isn't there, the examples in the homepage are extremely
             | cherry-picked, I bet most of their paying customers just
             | use it as a starting point and then spend hours manually
             | fix the sprites, which may be more than enough value for
             | $12 a month and that's great but what is shows is that we
             | are not as close as one would like to imagine, the "one leg
             | in front of the other at about the same depth and the same
             | color" is still problematic to this day; if most pants in
             | the world had a different color for each leg I bet most of
             | its animation issues would be solved, unfortunately we
             | don't and most of the training data involves single-color
             | pants/legs.
        
               | BoorishBears wrote:
               | At the risk of sounding unfairly stubborn about something
               | I'm not _that_ familiar with, if they 've been at it for
               | 2 years I'm imagining a very different (much more
               | difficult) pipeline than fine-tuning an image model with
               | an LLM backbone
               | 
               | The jump in understanding that having a full sized LLM
               | behind the generations enables here is massive:
               | https://ghost.oxen.ai/fine-tuned-qwen-image-edit-vs-nano-
               | ban...
        
         | delbronski wrote:
         | I've been using the same test since Dalle 2. No model has
         | passed it yet.
         | 
         | However, I don't think 2D animators should feel too safe about
         | their jobs. While these models are bad at creating sprite
         | sheets in one go, there are ways you can use them to create
         | pretty decent sprite sheets.
         | 
         | For example, I've had good results by asking for one frame at a
         | time. Also had good results by providing a sprite sheet of a
         | character jumping, and then an image of a new character, and
         | then asking for the same sprite sheet but with the new
         | character.
        
         | robots0only wrote:
         | the problem here is that text as the communication interface is
         | not good for this. the model should be reasoning in the pose
         | space (and generally in more geometric spaces), then
         | interpolation and drawing is pretty easy. I think this will
         | happen in some time.
        
         | dyauspitr wrote:
         | However, if you ask it to generate eight or 10 frames of a
         | sprite performing a particular action from scratch it gets it
         | pretty spot on. In fact, you can drop them straight into an
         | animator and have near production quality.
        
         | user34283 wrote:
         | When I tried the same with video models a few months ago by
         | extracting the frames, it was not working so well either.
         | 
         | However, this should be solvable in the near future.
         | 
         | I'm looking forward to making some 2D games.
        
       | joshhart wrote:
       | This is super awesome, but how in the world did they come up with
       | a name "Nano Banana Pro"? It sounds like an April Fools joke.
        
         | jameslk wrote:
         | It was an internal codename that leaked out and then despite
         | trying to use a more corporate-friendly name that was terribly
         | boring (Gemini 2.5 Flash Image), they got trolled into
         | continuing to use nano banana because nobody would stop calling
         | it that. Or that's how the lore has been told so far
         | 
         | I wouldn't be surprised if Google shortens the name to NBP in
         | the future, hoping everyone collectively forgets what NB stood
         | for. And then proceeds to enshittify the name to something like
         | Google NBP 18.5 Hangouts Image Editor
        
       | al_be_back wrote:
       | A houseplant with tiny turtles for leaves... very informative if
       | under the influence of some substances.
       | 
       | It's not a Hello World equivalent.
       | 
       | So much around generative ai seems to be around "look how
       | unrealistic you can be for not-cheap! Ai - cocaine for your
       | machine!!"
       | 
       | No wonder there's very little uptake by businesses (MIT state of
       | ai 2025, etc)
        
       | weagle05 wrote:
       | Gemini is all over the place for me. Nano Banana produces some
       | great images. Today I asked Gemini to design a graphic based on
       | the first sheet in a Google sheet. It produced a graphic with a
       | summary of the data and a picture of a bed sheet. Nailed it.
        
       | tianshuo wrote:
       | It's great to know that Nano Banana pro get's multiple items of
       | my impossible AIGC benchmark
       | done....https://github.com/tianshuo/Impossible-AIGC-Benchmark
        
       | funny_ai wrote:
       | With this model, I'm more worried about future online fraud. Will
       | there still be authenticity?
        
       | into_the_void wrote:
       | Is SynthID actually running an AI classifier to decide whether an
       | image is model-generated, or is it only checking for an embedded
       | watermark? If it's a classifier, the accuracy is questionable --
       | generic "AI detection" tools tend to produce high false-positive
       | rates. Also unclear whether it's doing semantic anomaly checks
       | (extra fingers, physics errors) or low-level pixel-signature
       | analysis.
        
         | semiquaver wrote:
         | Watermark
        
       | alams wrote:
       | Google is able to churn up SOTA models across the board. But
       | still could not figure out the basic user journey. No Joke!
        
       | gloosx wrote:
       | Generated images still contain JPEG artifacts all over them.
       | 
       | We are not doomed yet - can pretty much reliably spot RAW image
       | vs AI-generated image by just zooming in
        
         | M4v3R wrote:
         | It's only a matter of time this will be fixed, also there
         | probably already are custom LoRAs that can remove jpeg
         | artifacts. So it's not a matter of if, only when.
        
           | gloosx wrote:
           | I dont think so. You cant train away a compression artifact
           | that comes from the model's core architecture, LoRAs can
           | smooth or hide artifacts, but some detail will be inevitably
           | lost. You can try to hide artifacts but not remove them
           | without retraining the whole model on RAW sensor data.
        
       | Spacemolte wrote:
       | "Sorry, I'm still learning to create images for you, so I can't
       | do that yet. I can try to find one on the web though."
        
       | atom-morgan wrote:
       | Anyone know how to use this with Google Slides? I don't see it
       | anywhere in app.
        
       | piokoch wrote:
       | The funny part is that Google puts watermark on the generated
       | graphics, because they are oh so not evil and socially
       | responsible.
       | 
       | Unless you pay Google more, what is mentioned at the very bottom
       | of this infomercial.
       | 
       | "Recognizing the need for a clean visual canvas for professional
       | work, we will remove the visible watermark from images generated
       | by Google AI Ultra subscribers and within the Google AI Studio
       | developer tool."
       | 
       | BTW: anyone with the skills found in 1 min on the Internet can
       | remove all of those ids, etc. (yes, as you might guess, the
       | website is called remove synth id dot com...)
        
       | mark_l_watson wrote:
       | I used the new Nano Banana Pro just now, indirectly. I was
       | brainstorming with Gemini 3 Thinking mode (now the default best
       | thinking option on my iPadOS Gemini app) over a system design for
       | an open source project that I hope to put a lot of effort into
       | next year and then I asked for a detailed system level diagram.
       | 
       | The results were very good because the diagram reflected what I
       | had specified during chat.
       | 
       | I probably sounded like an idiot when Gemini 3 was released: I
       | have been a paid 'AI practitioner' since 1982, lived through
       | multiple AI winters, but I wrote this week that Gemini 3 meets my
       | personal expectations for AGI for the non-physical (digital)
       | world.
        
       | adv0r wrote:
       | you know whats annoying? each iteration the quality of the first
       | original image gets worse and worse until it loses resolution ,
       | details etc.
        
       ___________________________________________________________________
       (page generated 2025-11-21 23:01 UTC)