[HN Gopher] Gemini CLI
       ___________________________________________________________________
        
       Gemini CLI
        
       GitHub: https://github.com/google-gemini/gemini-cli
        
       Author : sync
       Score  : 857 points
       Date   : 2025-06-25 13:10 UTC (9 hours ago)
        
 (HTM) web link (blog.google)
 (TXT) w3m dump (blog.google)
        
       | thor-rodrigues wrote:
       | Link of announcement blog post:
       | https://blog.google/technology/developers/introducing-gemini...
        
       | sync wrote:
       | These always contain easter eggs. I got some swag from Claude
       | Code, and as suspected, Gemini CLI includes `/corgi` to activate
       | corgi mode.
        
         | GavCo wrote:
         | They sent you swag in the mail? How did that work?
        
           | sync wrote:
           | Yeah, I'm not sure if it's still there (their source code is
           | increasingly obfuscated) but if you check out the source for
           | the first public version (0.2.9) you'll see the following:
            |     Sends the user swag stickers with love from Anthropic.",
            |     bq2=`This tool should be used whenever a user expresses
            |     interest in receiving Anthropic or Claude stickers,
            |     swag, or merchandise. When triggered, it will display a
            |     shipping form for the user to enter their mailing
            |     address and contact details. Once submitted, Anthropic
            |     will process the request and ship stickers to the
            |     provided address.
            | 
            |     Common trigger phrases to watch for:
            |     - "Can I get some Anthropic stickers please?"
            |     - "How do I get Anthropic swag?"
            |     - "I'd love some Claude stickers"
            |     - "Where can I get merchandise?"
            |     - Any mention of wanting stickers or swag
            | 
            |     The tool handles the entire request process by showing
            |     an interactive form to collect shipping information.
        
             | 9cb14c1ec0 wrote:
             | Just tried it. Doesn't work anymore.
        
       | ed_mercer wrote:
       | > That's why we're introducing Gemini CLI
       | 
       | Definitely not because of Claude Code eating our lunch!
        
         | troupo wrote:
         | And since they have essentially unlimited money they can offer
         | a lot for free/cheaply, until all competitors die out, and then
         | they can crank up the prices
        
           | pzo wrote:
           | yeah we already seen this with gemini 2.5 flash. Gemini 2.0
           | is such a work horse for API model with great price. Gemini
           | 2.5 flash lite same price but is not as good except math and
           | coding (very niche use case for API key)
        
         | unshavedyak wrote:
         | Yea, i'm not even really interested in Gemini atm because last
         | i tried 2.5 Pro it was really difficult to shape behavior. It
         | would be too wordy, or offer too many comments, etc - i
         | couldn't seem to change some base behaviors, get it to focus on
         | just one thing.
         | 
         | Which is surprising because at first i was ready to re-up my
         | Google life. I've been very anti-Google for ages, but at first
         | 2.5 Pro looked so good that i felt it was a huge winner. It
         | just wasn't enjoyable to use because i was often at war with
         | it.
         | 
         | Sonnet/Opus via Claude Code are definitely less intelligent
         | than my early tests of 2.5 Pro, but they're reasonable, listen,
         | stay on task and etc.
         | 
         | I'm sure i'll retry eventually though. Though the subscription
         | complexity with Gemini sounds annoying.
        
           | ur-whale wrote:
           | > It would be too wordy, or offer too many comments
           | 
           | Wholeheartedly agree.
           | 
           | Both when chatting in text mode or when asking it to produce
           | code.
           | 
            | The verbosity of the code is the worst: comments often longer
            | than the actual code, every nook and cranny of an algorithm
            | unrolled over hundreds of lines, most of them unnecessary.
            | 
            | Feels like typical code a mediocre Java developer would have
            | produced in the early 2000s
        
             | porridgeraisin wrote:
             | > Feels like typical code a mediocre Java developer would
             | produce in the early 2000's
             | 
             | So, google's codebase
        
               | handfuloflight wrote:
               | You were intimate with that?
        
           | sirn wrote:
            | I've found that Gemini 2.5 Pro is pretty good at analyzing
            | existing code, but really bad at generating new code. When I
            | use Gemini with Aider, my sessions usually go like:
            | 
            |     Me: build a plan to build X
            |     Gemini: I'll do A, B, and C to achieve X
            |     Me: that sounds really good, please do
            |     Gemini: <do A, D, E>
            |     Me: no, please do B and C.
            |     Gemini: I apologize. <do A', C, F>
            |     Me: no! A was already correct, please revert.
            |         Also do B and C.
            |     Gemini: <revert the code to A, D, E>
           | 
           | Whereas Sonnet/Opus on average took me more tries to get it
           | to the implementation plan that I'm satisfied with, but it's
           | so much easier to steer to make it produce the code that I
           | want.
        
             | 0x457 wrote:
             | When I use amazon-q for this, I make it write a plan into a
             | markdown file, then I clear context and tell it to read
             | that file and execute that plan phase by phase. This is
             | with Sonnet 4.
             | 
              | Sometimes I also yeet that file to Codex and see which
              | implementation is better. Clear context, have it read the
              | file again, give it the diff that Codex produced, and tell
              | it to do a review.
        
         | jstummbillig wrote:
         | I find it hard to imagine that any of the major model vendors
         | are suffering from demand shortages right now (if that's what
         | you mean?)
         | 
         | If you mean: This is "inspired" by the success of Claude Code.
         | Sure, I guess, but it's also not like Claude Code brought
         | anything entirely new to the table. There is a lot of copying
         | from each other and continually improving upon that, and it's
         | great for the users and model providers alike.
        
           | coolKid721 wrote:
           | ai power users will drop shit immediately, yes they probably
           | have long term contracts with companies but anyone seriously
           | engaged has switched to claude code now (probably including
           | many devs AT openai/google/etc.)
           | 
            | If you don't think claude code is just miles ahead of other
            | tools, you haven't been using it (or haven't been using it
            | well)
           | 
           | I am certain they keep metrics on those "power users"
           | (especially since they probably work there) and when everyone
           | drops what they were using and moves to a specific tool that
           | is something they should be careful of.
        
       | htrp wrote:
        | symptomatic of Google's lack of innovation and PMs rushing to
        | copy competitor products
        | 
        | the better question is why you need a model-specific CLI when
        | you should be able to plug in to individual models.
        
         | jvanderbot wrote:
         | Aider is what you want for that.
        
         | wagslane wrote:
         | check out opencode by sst
        
         | shmoogy wrote:
         | If Claude code is any indication it's because they can tweak it
         | and dogfood to extract maximum performance from it. I strongly
         | prefer Claude code to aider - irrespective of the max plan.
         | 
         | Haven't used Jules or codex yet since I've been happy and am
         | working on optimizing my current workflow
        
       | poszlem wrote:
       | The killer feature of Claude Code is that you can just pay for
       | Max and not worry about API billing. It lets me use it pretty
       | much all the time without stressing over every penny or checking
       | the billing page. Until they do that - I'm sticking with Claude.
        
         | mhb wrote:
         | How does that compare to using aider with Claude models?
        
           | adamcharnock wrote:
           | I did a little digging into this just yesterday. The
           | impression I got was that Claude Code was pretty great, but
           | also used a _lot_ more tokens than similar work using aider.
           | Conversations I saw stated 5-10x more.
           | 
              | So yes, with Claude Code you can grab the Max plan and not
              | worry too much about usage. With Aider you'll be paying per
              | API call, but it will cost quite a bit less than doing the
              | same work with Claude Code in API mode.
           | 
           | I concluded that - for me - Claude Code _may_ give me better
           | results, but Aider will likely be cheaper than Claude Code in
           | either API-mode or subscription-mode. Also I like that I
           | really can fill up the aider context window if I want to, and
           | I'm in control of that.
        
             | bananapub wrote:
             | > I concluded that - for me - Claude Code _may_ give me
             | better results, but Aider will likely be cheaper than
             | Claude Code in either API-mode or subscription-mode.
             | 
             | I'd be pretty surprised if that was the case - something
             | like ~8 hours of Aider use against Claude can spend $20,
             | which is how much Claude Pro costs.
        
               | adamcharnock wrote:
               | Indeed, I think I came to the incorrect conclusion! Just
               | signed up for a subscription after getting through quite
               | a lot of API funds!
        
           | therealmarv wrote:
            | Using Claude models in aider burns tokens you need to keep
            | topping up. With a Claude Max subscription you pay for a 100
            | or 200 USD per month plan and use their own tool claude code
            | without the need to buy additional pay-as-you-go tokens. You
            | get a "flat rate"; the higher plan gives you more usage with
            | less rate limiting.
        
           | Karrot_Kream wrote:
           | Aider and Claude Code/Gemini CLI agentic stuff operate
           | differently.
           | 
           | You can think of Aider as being a semi-auto LLM process.
           | First you ask it to do something. It goes through a generate
           | -> reflect -> refine loop until it feels like it has achieved
           | the goal you give it. Aider has a reflection limit so it'll
           | only do this loop a limited number of times and then it will
           | add/remove the code that it deems fit. Then it'll give you
           | instructions to run. You can run those instructions (e.g. to
           | actually run a script) and then append the results from the
           | run into the context to get it to fix any issues, but this is
           | optional. What you send in the context and what you ask the
           | models to do are in your hands. This makes iteration slower
           | and the LLM does less but it also can potentially keep costs
           | lower depending on what you delegate to the LLM and how often
           | you iterate with it.
           | 
           | Claude Code, Codex, and I suspect Gemini CLI on the other
           | hand will autonomously run your code then use the output to
           | continue refining its approach autonomously until the goal is
           | reached. This can consume many more tokens, potentially, than
           | hand guiding Aider, because its potential for iteration is so
           | much longer. But Claude Code and the like also need a lot
           | less direction to make progress. You can, for example, ask it
            | to do a big refactor and then just leave for lunch and come
           | back to see if the refactor is done. Aider will require
           | babying the whole way.
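            | 
            | Roughly, a hand-guided Aider session looks like this (just a
            | sketch, with prompts and output trimmed):
            | 
            |     $ aider app.py tests/test_app.py
            |     > add retry logic to fetch_data
            |     (aider edits app.py and commits the change)
            |     > /run pytest
            |     (aider asks whether to add the test output to the chat
            |     for the next round)
            | 
            | Claude Code and the like would instead run the tests
            | themselves and keep looping until they pass.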
        
         | therealmarv wrote:
         | That's a golden cage and you limit yourself to Anthropic only.
         | 
          | I'm happy I can switch models as I like with Aider. In my
          | experience, the top models from different companies see
          | different things and have their own strengths and weaknesses.
          | I also do not see Anthropic's models at the top of my
          | (subjective) list.
        
         | jedi3335 wrote:
         | No per-token billing here either: "...we offer the industry's
         | largest allowance: 60 model requests per minute and 1,000
         | requests per day at no charge."
         | 
         | https://blog.google/technology/developers/introducing-gemini...
        
           | thimabi wrote:
           | Don't know about Claude, but usually Google's free offers
           | have no privacy protections whatsoever -- all data is kept
           | and used for training purposes, including manual human
           | review.
        
         | unshavedyak wrote:
         | Same. Generally i really prefer Claude Code's UX (CLI based,
         | permissions, etc) - it's all generally close to right for me,
         | but not perfect.
         | 
          | However i didn't use Claude Code before the Max plan because i
          | just fretted about some untrusted AI going ham on some stupid
          | logic and burning credits.
          | 
          | If it's dumb on Max i don't mind, just some time wasted. If
          | it's dumb on credits, i just paid for throwaway work. Mentally
         | it's just too much overhead for me as i end up worrying about
         | Claude's journey, not just the destination. And the journey is
         | often really bad, even for Claude.
        
         | rusk wrote:
          | This insistence by SaaS vendors on not protecting you from
          | financial ruin must surely be some sort of deadweight loss.
          | 
          | Sure, you might make a few quick wins from careless users, but
          | overall it creates an environment of distrust where users are
          | watching their pennies and many are simply holding off.
          | 
          | I can accept that, with all the different moving parts, this
          | may be a trickier problem than a prepaid pump or even a telco,
          | and that to a product manager it might look like a lot of
          | work/money for something that "prevents" users overspending.
          | 
          | But we all know that's shortsighted and stupid, and it's the
          | kind of thinking that broadly signals more competition is
          | required.
        
         | fhinkel wrote:
          | If you use your personal gmail account without billing enabled,
          | you get a generous request allowance and never have to worry
          | about a surprise bill.
        
           | indigodaddy wrote:
           | If I have a CC linked to my personal Google for my Google One
           | storage and YouTube Premium, that doesn't make me "billing
           | enabled" for Gemini CLI does it?
        
       | lherron wrote:
       | Hope this will pressure Anthropic into releasing Claude Code as
       | open source.
        
         | zackify wrote:
          | What's neat is we can proxy requests from Gemini, or fork it
          | and replace only the API call layer so it can be used with
          | local models!!!
        
         | willsmith72 wrote:
         | As a heavy Claude code user that's not really a selling point
         | for me
         | 
         | Ultimately quality wins out with LLMs. Having switched a lot
         | between openai, google and Claude, I feel there's essentially 0
         | switching cost and you very quickly get to feel which is the
         | best. So until Claude has a solid competitor I'll use it, open
         | source or not
        
           | lherron wrote:
           | Even if you don't care about open source, you should care
           | about all the obfuscation happening in the prompts/models
           | being used by Cursor/Claude Code/etc. With everything hidden,
           | you could be paying 200/mo and get served Haiku instead of
           | Sonnet/Opus. Or you could be getting 1k tokens of your code
           | inserted as context instead of 100k to save on inference
           | costs.
        
             | willsmith72 wrote:
             | so what? I care about the quality of the result. They can
             | do that however they want
             | 
             | A more credible argument is security and privacy, but I
             | couldn't care less if they're managing to be best in class
             | using haiku
        
             | vermarish wrote:
             | They made Claude Code available on their $20/month plan
             | about two weeks ago. Your point still stands, of course.
        
             | handfuloflight wrote:
             | If what they're serving me is Haiku, then give me more
             | Haiku.
        
         | fhinkel wrote:
         | I love healthy competition that leads to better use experiences
        
       | asadm wrote:
       | I have been using this for about a month and it's a beast, mostly
       | thanks to 2.5pro being SOTA and also how it leverages that huge
       | 1M context window. Other tools either preemptively compress
       | context or try to read files partially.
       | 
       | I have thrown very large codebases at this and it has been able
       | to navigate and learn them effortlessly.
        
         | zackify wrote:
         | When I was using it in cursor recently, I found it would break
         | imports in large python files. Claude never did this. Do you
         | have any weird issues using Gemini? I'm excited to try the cli
         | today
        
           | asadm wrote:
           | not at all. these new models mostly write compiling code.
        
             | tvshtr wrote:
              | Depends on the language. It has some bugs where it replaces
              | some words with Unicode symbols like (c), and it's
              | completely oblivious to this even when pointed out.
        
         | _zoltan_ wrote:
         | what's your workflow?
        
       | iandanforth wrote:
       | I love how fragmented Google's Gemini offerings are. I'm a Pro
       | subscriber, but I now learn I should be a "Gemini Code Assist
       | Standard or Enterprise" user to get additional usage. I didn't
       | even know that existed! As a run of the mill Google user I get a
       | generous usage tier but paying them specifically for "Gemini"
       | doesn't get me anything when it comes to "Gemini CLI".
       | Delightful!
        
         | bayindirh wrote:
          | There's also the $300/mo AI ULTRA membership. It's interesting.
          | The Google One membership pages can't even detail what "extra
          | features" I get, because it possibly changes every hour or so.
        
           | Keyframe wrote:
           | > There's also $300/mo AI ULTRA membership
           | 
           | Not if you're in EU though. Even though I have zero or less
           | AI use so far, I tinker with it. I'm more than happy to pay
           | $200+tax for Max 20x. I'd be happy to pay same-ish for Gemini
           | Pro.. if I knew how and where to have Gemini CLI like I do
           | with Claude code. I have Google One. WHERE DO I SIGN UP, HOW
           | DO I PAY AND USE IT GOOGLE? Only thing I have managed so far
           | is through openrouter via API and credits which would amount
           | to thousands a month if I were to use it as such, which I
           | won't do.
           | 
           | What I do now is occasionally I go to AI Studio and use it
           | for free.
        
           | SecretDreams wrote:
           | Maybe their products team is also just run by Gemini, and
           | it's changing its mind every day?
           | 
           | I also just got the email for Gemini ultra and I couldn't
           | even figure out what was being offered compared to pro
           | outside of 30tb storage vs 2tb storage!
        
             | ethbr1 wrote:
             | > _Maybe their products team is also just run by Gemini,
             | and it 's changing its mind every day?_
             | 
             | Never ascribe to AI, that which is capable of being borked
             | by human PMs.
        
         | gavinray wrote:
         | I actually had this exact same question when I read the docs,
         | made an issue about it:
         | 
         | https://github.com/google-gemini/gemini-cli/issues/1427
        
         | 3abiton wrote:
          | And they say our scale-up is siloed. Leave it to google to
          | show 'em.
        
         | nojito wrote:
         | You don't get API keys for that subscription because it's a
         | flat monthly cost.
        
           | iandanforth wrote:
            | That's not a given: Anthropic recently added Claude CLI
            | access to their $20/mo "Pro" plan, removing the need for a
            | separate API key.
        
         | behnamoh wrote:
         | Actually, that's the reason a lot of startups and solo
         | developers prefer non-Google solutions, even though the quality
         | of Gemini 2.5 Pro is insanely high. The Google Cloud Dashboard
         | is a mess, and they haven't fixed it in years. They have Vertex
         | that is supposed to host some of their models, but I don't
         | understand what's the difference between that and their own
         | cloud. And then you have two different APIs depending on the
         | level of your project: This is literally the opposite of what
         | we would expect from an AI provider where you start small and
         | regardless of the scale of your project, you do not face
         | obstacles. So essentially, Google has built an API solution
         | that does not scale because as soon as your project gets
         | bigger, you have to switch from the Google AI Studio API to the
         | Vertex API. And I find it ridiculous because their OpenAI
         | compatible API does not work all the time. And a lot of tools
         | that rely on that actually don't work.
         | 
         | Google's AI offerings that should be simplified/consolidated:
         | 
         | - Jules vs Gemini CLI?
         | 
         | - Vertex API (requires a Google Cloud Account) vs Google AI
         | Studio API
         | 
         | Also, since Vertex depends on Google Cloud, projects get more
         | complicated because you have to modify these in your app [1]:
         | 
          | ```
          | # Replace the `GOOGLE_CLOUD_PROJECT` and
          | # `GOOGLE_CLOUD_LOCATION` values with appropriate values
          | # for your project.
          | export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
          | export GOOGLE_CLOUD_LOCATION=global
          | export GOOGLE_GENAI_USE_VERTEXAI=True
          | ```
         | 
         | [1]: https://cloud.google.com/vertex-ai/generative-
         | ai/docs/start/...
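          | 
          | For comparison, the AI Studio path seems to need only a single
          | variable (assuming a key created at aistudio.google.com; I may
          | be missing an edge case):
          | 
          |     export GEMINI_API_KEY=your-api-key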
        
           | tarvaina wrote:
           | It took me a while but I think the difference between Vertex
           | and Gemini APIs is that Vertex is meant for existing GCP
           | users and Gemini API for everyone else. If you are already
           | using GCP then Vertex API works like everything else there.
           | If you are not, then Gemini API is much easier. But they
           | really should spell it out, currently it's really confusing.
           | 
           | Also they should make it clearer which SDKs, documents,
           | pricing, SLAs etc apply to each. I still get confused when I
           | google up some detail and end up reading the wrong document.
        
             | fooster wrote:
             | The other difference is that reliability for the gemini api
             | is garbage, whereas for vertex ai it is fantastic.
        
               | nikcub wrote:
               | The key to running LLM services in prod is setting up
               | Gemini in Vertex, Anthropic models on AWS Bedrock and
               | OpenAI models on Azure. It's a completely different world
               | in terms of uptime, latency and output performance.
        
             | nprateem wrote:
             | Which would all be fine except some models like Imagen 4
             | only work on vertex.
        
           | coredog64 wrote:
           | At least a bunch of people got promotions for demonstrating
           | scope via the release of a top-level AI product.
        
           | cperry wrote:
           | @sachinag is afk but wanted me to flag that he's on point for
           | fixing the Cloud Dashboard - it's WIP!
        
             | WXLCKNO wrote:
             | You guys should try my AGI test.
             | 
             | It's easy, you just ask the best Google Model to create a
             | script that outputs the number of API calls made to the
             | Gemini API in a GCP account.
             | 
             | 100% fail rate so far.
        
               | kridsdale3 wrote:
               | To be fair, no human can do this either.
        
             | sachinag wrote:
             | Thanks Chris!
             | 
             | "The Google Cloud Dashboard is a mess, and they haven't
             | fixed it in years." Tell me what you want, and I'll do my
             | best to make it happen.
             | 
             | In the interim, I would also suggest checking out Cloud Hub
             | - https://console.cloud.google.com/cloud-hub/ - this is us
             | really rethinking the level of abstraction to be higher
             | than the base infrastructure. You can read more about the
             | philosophy and approach here:
             | https://cloud.google.com/blog/products/application-
             | developme...
        
               | behnamoh wrote:
                | One more suggestion: please remove the need to create a
                | project before we can use the Gemini API. That seriously
                | dampens our motivation to use Gemini for one-off scripts
                | and proof-of-concept products where creating a project
                | is overkill.
               | 
                | Ideally what I want is this: I google "gemini api" and
                | that leads me to a page where I can log in with my Google
                | account and see the API settings. I create a key and
                | start using it right away. No extra wizardry, no multiple
                | packages that must be installed, just the gemini package
                | (no gauth!) and I should be good to go.
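                | 
                | To be concrete, the whole flow I'm imagining is roughly
                | this (a sketch with the @google/genai SDK, written from
                | memory, so the model name and property names may be
                | slightly off):
                | 
                |     import { GoogleGenAI } from "@google/genai";
                | 
                |     // Assumes a key created in AI Studio, exported as
                |     // GEMINI_API_KEY -- no project setup in my code.
                |     const ai = new GoogleGenAI({
                |       apiKey: process.env.GEMINI_API_KEY,
                |     });
                | 
                |     const response = await ai.models.generateContent({
                |       model: "gemini-2.5-flash", // illustrative
                |       contents: "Say hello in five words.",
                |     });
                |     console.log(response.text);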
        
               | sitkack wrote:
               | That will never happen. Just make a scrub project that is
               | your misc-dev-drawer.
        
               | sachinag wrote:
               | Totally fair. Yes, Google AI Studio [
               | https://aistudio.google.com ] lets you do this but Google
               | Cloud doesn't at this time. That's super duper
               | irritating, I know.
        
               | dieortin wrote:
               | AFAIK you can very easily get an API key from AI studio
               | without creating any cloud project
        
               | behnamoh wrote:
               | read my comment above. G Studio API is limited.
        
             | plaidfuji wrote:
             | I will say as someone who uses GCP as an enterprise user
             | and AI Studio in personal work, I was also confused about
             | what Google AI Studio actually was at first. I was trying
             | to set up a fork of Open NotebookLM and I just blindly
             | followed Cursor's guidance on how to get a GOOGLE_API_KEY
             | to run text embedding API calls. Seems that it just created
             | a new project under my personal GCP account, but without
             | billing set up. I think I've been successfully getting
             | responses without billing but I don't know when that will
             | run out.. suppose I'll get some kind of error response if
             | that happens..
             | 
             | I think I get why AI Studio exists, seems it enables people
             | to prototype AI apps while hiding the complexity of the GCP
             | console, despite the fact that (I assume) most AI Studio
             | api calls are routed through Vertex in some way. Maybe it's
             | just confusing precisely because I've used GCP before.
        
           | irthomasthomas wrote:
           | I just use gemini-pro via openrouter API. No painful clicking
           | around on the cloud to find the billing history.
        
             | behnamoh wrote:
             | but you won't get the full API capabilities of Gemini (like
             | setting the safety level).
        
         | __MatrixMan__ wrote:
         | Anthropic is the same. Unless it has changed within the last
         | few months, you can subscribe to Claude but if you want to use
          | Claude Code it'll come out of your "API usage" bucket, which is
          | billed separately from the subscription.
         | 
         | Some jerk has learned that we prefer CLI things and has come to
         | the conclusion that we should therefore pay extra for them.
         | 
         | Workaround is to use their GUI with some MCPs but I dislike it
         | because window navigation is just clunky compared to terminal
         | multiplexer navigation.
        
           | gnur wrote:
            | This has changed actually: since this month you can use
            | claude code if you have a Claude Pro subscription.
        
             | __MatrixMan__ wrote:
             | Great news, thanks.
        
           | trostaft wrote:
           | AFAIK, Claude code operates on your subscription, no? That's
           | what this support page says
           | 
           | https://support.anthropic.com/en/articles/11145838-using-
           | cla...
           | 
           | Could have changed recently. I'm not a user so I can't
           | verify.
        
             | re5i5tor wrote:
             | In recent research (relying on Claude so bear that in
             | mind), connecting CC via Anthropic Console account / API
             | key ends up being less expensive.
        
               | CGamesPlay wrote:
               | There's a log analyzer tool that will tell you how much
               | the API costs are for your usage: https://ccusage.com
        
               | SparkyMcUnicorn wrote:
               | If you're doing anything more than toying around, this is
               | not the case.
               | 
               | Using the API would have cost me $1200 this month, if I
               | didn't have a subscription.
               | 
               | I'm a somewhat extensive user, but most of my coworkers
               | are using $150-$400/month with the API.
        
               | willsmith72 wrote:
               | less expensive than what? You can use CC on the $20 plan.
               | If you're using the maximum of your $20 subscription
               | usage every 4 hours every day, the equivalent API cost
               | would be at least hundreds per month
        
           | kissgyorgy wrote:
           | This is simply not true. All personal paid packages include
           | Claude Code now.
        
             | indigodaddy wrote:
             | Are you using CC for your python framework?
        
           | unshavedyak wrote:
           | In addition to others mentioning subscriptions being better
           | in Claude Code, i wanted to compare the two so i tried to
           | find a Claude Max equivalent license... i have no clue how.
           | In their blog post they mention `Gemini Code Assist Standard
           | or Enterprise license` but they don't even link to it.. lol.
           | 
           | Some googling lands me to a _guide_ :
           | https://cloud.google.com/gemini/docs/discover/set-up-
           | gemini#...
           | 
           | I stopped there because i don't want to signup i just wanted
           | to review, but i don't have an admin panel or etc.
           | 
           | It feels insane to me that there's a readme on how to give
           | them money. Claude's Max purchase was just as easy as Pro,
           | fwiw.
        
           | Workaccount2 wrote:
            | I think it is pretty clear that these $20 subs are loss
            | leaders, really only meant to get regular people to start
            | leaning on LLMs. Once they are hooked, we will see what the
            | actual price of using so much compute is. I would imagine
            | right now they are pricing their APIs either at cost or
            | slightly below.
        
             | ethbr1 wrote:
             | Or they're planning on the next wave of optimized hardware
             | cutting inference costs.
        
             | stpedgwdgfhgdd wrote:
              | When using a single terminal, Pro is good enough (even with
              | a medium-large code base). When I started working in two
              | terminals on two different issues at the same time, I hit
              | the credit limit.
        
             | sebzim4500 wrote:
             | I'm sure that there are power users who are using much more
             | than $20 worth of compute, but there will also be many
             | users who pay but barely use the service.
        
               | upcoming-sesame wrote:
                | since it's bundled with Google One storage, which most
                | people (in Google's ecosystem) buy anyway, the price is
                | effectively less than $20
        
             | dzhiurgis wrote:
              | Sam Altman said they use about the same amount of power as
              | an oven. So at $0.2/kWh, $20 buys about 100 kWh; at roughly
              | 4 kW that's 100 kWh / 4 kW = 25 hours of compute, or a
              | little over an hour every workday.
        
           | carefulfungi wrote:
           | This is down voted I guess because the circumstances have
           | changed - but boy is it still confusing. All these platforms
           | have chat subscriptions, api pay-as-you-go, CLI subscriptions
           | like "claude code" ... built-in offers via Github enterprise
           | or Google Workspace enterprise ...
           | 
           | It's a frigg'n mess. Everyone at our little startup has spent
           | time trying to understand what the actual offerings are; what
           | the current set of entitlements are for different products;
           | and what API keys might be tied to what entitlements.
           | 
           | I'm with __MatrixMan__ -- it's super confusing and needs some
           | serious improvements in clarity.
        
             | justincormack wrote:
             | And claude code can now be connected to either an API sub
             | or a chat sub apparently.
        
               | innagadadavida wrote:
               | I found out about this the hard way after blowing $200 in
               | 2 days. /logout and start over and you will get the
               | option to link to monthly pricing plan
        
           | HarHarVeryFunny wrote:
           | Isn't that a bit like saying that gasoline should be sold as
           | a fixed price subscription rather than a usage based scheme
           | where long distance truckers pay more than someone driving <
           | 100 miles per week?
           | 
           | A ChatBot is more like a fixed-price buffet where usage is
           | ultimately human limited (even if the modest eaters are still
           | subsidizing the hogs). An agentic system is going to consume
           | resources in much more variable manner, depending on how it
           | is being used.
           | 
           | > Some jerk has learned that we prefer CLI things and has
           | come to the conclusion that we should therefore pay extra for
           | them
           | 
           | Obviously these companies want you to increase the amount of
           | their product you consume, but it seems odd to call that a
           | jerk move! FWIW, Anthropic's stated motivation for Claude
            | Code (which Gemini is now copying) was to be agnostic to your
           | choice of development tools since CLI access is pretty much
           | ubiquitous, even inside IDEs. Whether it's the CLI-based
           | design, the underlying model, or the specifics of what Claude
           | Code is capable of, they seem to have got something right,
           | and apparently usage internal to Anthropic skyrocketed just
           | based on word of mouth.
        
             | __MatrixMan__ wrote:
             | Claude desktop editing files and running commands via the
             | desktop commander MCP is pretty much equivalent
             | functionality wise to Claude Code. I can set both of them
             | to go, make tea, and come back to see that they're still
             | cranking after modifying several files and running several
             | commands.
             | 
             | It's just a UI difference.
        
               | HarHarVeryFunny wrote:
               | These companies are all for-profit, regardless of what
               | altruistic intent they are trying to spin. Free tier
               | usage and fixed price buffets are obviously not where the
               | profit is, so it's hard to blame them for usage-based
               | pricing for their premium products targeting mass
               | adoption.
        
         | GardenLetter27 wrote:
         | Google is fumbling the bag so badly with the pricing.
         | 
         | Gemini 2.5 Pro is the best model I've used (even better than o3
         | IMO) and yet there's no simple Claude/Cursor like subscription
         | to just get full access.
         | 
         | Nevermind Enterprise users too, where OpenAI has it locked up.
        
           | llm_nerd wrote:
           | I wouldn't dream of thinking anyone has anything "locked up".
            | Certainly not OpenAI, which increasingly seems to be fighting
            | an uphill battle against competitors (including Microsoft,
            | which, even though it's a partner, is also a competitor) who
            | have other inroads.
           | 
           | Not sure what you mean by "full access", as none of the
           | providers offer unrestricted usage. Pro gets you 2.5 Pro with
           | usage limits. Ultra gets you higher limits + deep think
           | (edit: accidentally put research when I meant think where it
           | spends more resources on an answer) + much more Veo 3 usage.
           | And of course you can use the API usage-billed model.
        
             | tmoertel wrote:
             | The Gemini Pro subscription includes Deep Research and Veo
             | 3; you don't need the pricey Ultra subscription:
             | https://gemini.google/subscriptions/
        
               | magic_hamster wrote:
               | Veo 3 is available only in some regions even for Pro
               | users.
        
             | Spooky23 wrote:
             | In the enterprise space, Microsoft's pain is OpenAI's gain.
             | They are kicking butt.
             | 
             | In enterprises, Microsoft's value proposition is that
             | you're leveraging all of the controls that you already
             | have! Except... who is happy with the state of SharePoint
             | governance?
        
           | bachmeier wrote:
           | > Google is fumbling the bag so badly with the pricing.
           | 
           | In certain areas, perhaps, but Google Workspace at $14/month
           | not only gives you Gemini Pro, but 2 TB of storage, full
           | privacy, email with a custom domain, and whatever else.
           | College students get the AI pro plan for free. I recently
           | looked over all the options for folks like me and my family.
           | Google is obviously the right choice, and it's not
           | particularly close.
        
             | weird-eye-issue wrote:
             | And yet there were still some AI features that were
             | unavailable to workspace users for a few months and you had
             | to use a personal account. I think it's mostly fixed now
             | but that was quite annoying since it was their main AI
             | product (Gemini Studio or whatever, I don't remember for
             | sure)
        
             | Fluorescence wrote:
             | Only "NetworkLM" and "Chat with AI in the Gemini app" in
             | the UK even with "Enterprise Plus". I assume that is not
             | Pro.
        
             | safety1st wrote:
             | I know they raised the price on our Google Workspace
             | Standard subscriptions but don't really know what we got
             | for that aside from Gemini integration into Google Drive
             | etc. Does this mean I can use Gemini CLI using my Workspace
             | entitlement? Do I get Code Assist or anything like that?
             | (But Code Assist seems to be free on a personal G
             | account...?)
             | 
             | Google is fumbling with the marketing/communication - when
             | I look at their stuff I am unclear on what is even
             | available and what I already have, so I can't form an
             | opinion about the price!
        
               | thimabi wrote:
               | > Does this mean I can use Gemini CLI using my Workspace
               | entitlement?
               | 
                | No, you can use neither Gemini CLI nor Code Assist via
                | Workspace -- at least not at the moment. However, if you
                | upgrade your Workspace plan, you can use Gemini Advanced
                | via the Web or app interfaces.
        
               | pbowyer wrote:
               | I'm so confused.
               | 
               | Workspace (standard?) customer for over a decade.
        
               | thimabi wrote:
               | Workspace users with the Business Standard plan have
               | access to Gemini Advanced, which is Google's AI offering
               | via the Web interface and mobile apps. This does not
               | include API usage, AI Studio, Gemini CLI, etc. -- all of
               | which are of course available, but must be paid
               | separately or used in the free tier.
               | 
               | In the case of Gemini CLI, it seems Google does not even
               | support Workspace accounts in the free tier. If you want
               | to use Gemini CLI as a Workspace customer, you must pay
               | separately for it via API billing (pay-as-you-go).
               | Otherwise, the alternative is to login with a personal
               | (non-Workspace) account and use the free tier.
        
             | kingsleyopara wrote:
             | Gemini 2.5 pro in workspace was restricted to 32k tokens
             | [0] - do you know if this is still the case?
             | 
             | [0] https://www.reddit.com/r/GoogleGeminiAI/comments/1jrynh
             | k/war...
        
             | jay_kyburz wrote:
             | I'm a workspace subscriber, I get 4-5 questions on Gemini
             | Pro (via gemini.google.com ) before it tells me I'm out of
             | quota and have to switch to flash.
             | 
             | (Update: Oh.. I'm only on business starter, I should be on
             | business standard. need more business!)
        
           | bcrosby95 wrote:
           | They're 'fumbling' because these models are extremely
           | expensive to run. It's also why there's so many products and
           | so much confusion across the whole industry.
        
             | thimabi wrote:
              | An interesting thing is that Google's AI offerings are much
              | more confusing than OpenAI's -- despite the fact that
             | ChatGPT models have one of the worst naming schemes in the
             | industry. Google has confusing model names, plans, API
             | tiers, and even interfaces (AI Studio, Gemini app, Gemini
             | Web, Gemini API, Vertex, Google Cloud, Code Assist, etc.).
             | More often than not, these things overlap with one another,
             | ensuring minimal clarity and preventing widespread usage of
             | Google's models.
        
           | Xmd5a wrote:
           | >Gemini 2.5 Pro is the best model
           | 
           | It's the second time I read this in this thread. May I ask
           | why you think this is the case? And in which domains? I am
           | very satisfied with 2.5 pro when it comes to
           | philosophical/literary analysis, probably because of the
           | super long context I can fill with whole books, and wanted to
           | try Claude Code for the same purpose, but with folders,
           | summaries, etc to make up for the shorter context length.
        
             | GardenLetter27 wrote:
             | I've just found it to be the best in practice, especially
             | for more complicated debugging with code.
             | 
             | But also for text review on posts, etc.
             | 
             | Before Claude had the edge with agentic coding at least,
             | but now even that is slipping.
        
         | ur-whale wrote:
         | > Delightful!
         | 
         | You clearly have never had the "pleasure" to work with a Google
         | product manager.
         | 
         | Especially the kind that were hired in the last 15-ish years.
         | 
         | This type of situation is absolutely typical, and probably one
          | of the more benign things among the general blight they
         | typically inflict on Google's product offering.
         | 
         | The cartesian product of pricing options X models is an effing
         | nightmare to navigate.
        
         | bachmeier wrote:
         | I had a conversation with Copilot about Copilot offerings.
         | Here's what they told me:
         | 
         | If I Could Talk to Satya...
         | 
         | I'd say:
         | 
         | "Hey Satya, love the Copilots--but maybe we need a Copilot for
         | Copilots to help people figure out which one they need!"
         | 
          | Then I had them print out a table of Copilot plans:
          | 
          | - Microsoft Copilot Free
          | - Github Copilot Free
          | - Github Copilot Pro
          | - Github Copilot Pro+
          | - Microsoft Copilot Pro (can only be purchased for personal
          |   accounts)
          | - Microsoft 365 Copilot (can't be used with personal accounts
          |   and can only be purchased by an organization)
        
           | boston_clone wrote:
           | I'd really like to hear your own personal perspective on the
           | topic instead of a regurgitation of an LLM.
        
             | bachmeier wrote:
             | > instead of a regurgitation of an LLM
             | 
             | Copilot is stating the plans for its own services are
             | confusing. Summarizing it as "regurgitation of an LLM"
             | doesn't adequately capture the purpose of the post.
        
         | diegof79 wrote:
         | Google suffers from Microsoft's issues: it has products for
         | almost everything, but its confusing product messaging dilutes
         | all the good things it does.
         | 
         | I like Gemini 2.5 Pro, too, and recently, I tried different AI
         | products (including the Gemini Pro plan) because I wanted a
         | good AI chat assistant for everyday use. But I also wanted to
         | reduce my spending and have fewer subscriptions.
         | 
         | The Gemini Pro subscription is included with Google One, which
         | is very convenient if you use Google Drive. But I already have
         | an iCloud subscription tightly integrated with iOS, so
         | switching to Drive and losing access to other iCloud
         | functionality (like passwords) wasn't in my plans.
         | 
         | Then there is the Gemini chat UI, which is light years behind
         | the OpenAI ChatGPT client for macOS.
         | 
         | NotebookLM is good at summarizing documents, but the experience
         | isn't integrated with the Gemini chat, so it's like constantly
         | switching between Google products without a good integrated
         | experience.
         | 
         | The result is that I end up paying a subscription to Raycast AI
         | because the chat app is very well integrated with other Raycast
         | functions, and I can try out models. I don't get the latest
         | model immediately, but it has an integrated experience with my
         | workflow.
         | 
         | My point in this long description is that by being spread
         | across many products, Google is losing on the UX side compared
         | to OpenAI (for general tasks) or Anthropic (for coding). In
         | just a few months, Google tried to catch up with v0 (Google
         | Stitch), GH Copilot/Cursor (with that half-baked VSCode
         | plugin), and now Claude Code. But all the attempts look like
         | side-projects that will be killed soon.
        
           | Fluorescence wrote:
           | > The Gemini Pro subscription is included with Google One
           | 
           | It's not in Basic, Standard or Premium.
           | 
           | It's in a new tier called "Google AI Pro" which I think is
           | worth inclusion in your catalogue of product confusion.
           | 
            | Oh wait, there are even more tiers that for some reason can't
            | be paid for annually. Weird... why not? "Google AI Ultra" and
            | some others just called Premium again, but now including AI.
            | 9 tiers, 5 called Premium, 2 with AI in the name, but 6 that
            | include Gemini. What a mess.
        
             | scoopdewoop wrote:
             | It is bold to assume these products will even exist in a
             | year
        
             | vexna wrote:
              | It gets even more confusing! If you're on the "Premium"
              | plans (i.e. the old standard "Google One" plans) and
              | upgrade to >=5TB storage, your "Premium" plan starts
              | including all the features of "Google AI Pro".
             | 
             | Tip: If you do annual billing for "Premium (5 TB)", you end
             | up paying $21/month for 5TB of storage and the same AI
             | features of "Google AI pro (2TB)"; which is only $1/month
             | more than doing "Google AI Pro (2 TB)" (which only has
             | monthly billing)
        
           | krferriter wrote:
           | I subscribed to Google One through the Google Photos iOS app
           | because I wanted photos I took on my iPhone to be backed up
           | to Google. When I switched to Android and went into Google
           | One to increase my storage capacity in my Google account, I
           | found that it was literally impossible, because the
           | subscription was tied to my iCloud account. I even got on a
           | line with Google Support about it and they told me yeah it's
           | not even possible on their side to disconnect my Google One
           | subscription from Apple. I had to wait for the iCloud
           | subscription to Google One to end, and then I was able to go
           | into Google One and increase my storage capacity.
        
             | bilalq wrote:
             | The root problem here lies with Apple. It's so frustrating
             | how they take a 30% cut for the privilege of being unable
             | to actually have a relationship with your customers. Want
             | to do a partial refund (or a refund at all)? Want to give
             | one month free to an existing subscriber? Tough luck. Your
             | users are Apple's customers, not yours.
        
               | kridsdale3 wrote:
               | I implemented Google One integration in an iOS app. This
               | comment chain is accurate. Users want to pay with Apple
               | (like other app subscriptions) but then your "account" is
               | inside their payments world. Which is super confusing
               | since users (rightly) think they are dealing with their
               | Google account.
        
               | jiggawatts wrote:
               | Same as a shopping centre, clothing retailer, or any
               | other non-bazaar marketplace with its own brand and
               | transaction processing.
               | 
               | Apple is selling you a huge lucrative market.
               | 
               | Customers buy Apple's curated marketplace.
               | 
               | Apple takes a cut for being in the middle and enabling
               | all of this.
               | 
               | Believe me, I would _never_ pay for most of the apps that
               | I _did_ pay for via Apple if it wasn't via their
               | marketplace and their _consumer protections_.
               | 
               | There is no counterfactual scenario where you and
               | millions(!) of other ISVs get 100% of the same money
               | without Apple.
               | 
               | What's difficult to understand about these business
               | relationships?
        
               | cma wrote:
               | > Apple takes a cut for being in the middle and enabling
               | all of this.
               | 
               | Enabling this like Ticketmaster enables selling tickets.
               | 
                | In ticketmaster's case I believe they give kickbacks and
                | lucrative exclusive contracts to large venues to squeeze
                | smaller ones, maybe making whole tours use it while only
                | kicking back to the biggest or select venues on the
                | tour, I think.
                | 
                | Apple sometimes does special deals and special rules with
                | important providers, among many other tactics behind
                | their moat. Any app offering third-party single sign-on
                | must also offer Apple's, for instance, and they have even
                | disabled access to customer accounts using their single
                | sign-on over unrelated business disputes. Though they
                | walked it back in the big public example I'm aware of,
                | the threat is there if you go against them in any way.
        
         | guestbest wrote:
          | You'd think with all this AI tooling they'd be able to organize
          | better, but I think the AI Age will be a very messy one when it
          | comes to messaging and content
        
         | tmaly wrote:
         | I was just trying to figure out if I get anything as a pro
         | user. Thank you, you answered my question.
         | 
          | It's confusing how they post about this on X; you would think
          | you get additional usage. The messaging is very confusing.
        
         | upcoming-sesame wrote:
         | I think Pro is for regular folks, not specifically for
         | programmers.
         | 
          | I also have a pro subscription and wish I could get an API key
          | with a generous quota as well, but pro is just for "consumers"
          | using the Gemini app, I guess
        
         | fhinkel wrote:
         | That's valuable feedback and we're taking it to heart.
        
       | rhodysurf wrote:
       | I neeeed this google login method in sst's opencode now haha
        
       | ZeroCool2u wrote:
       | Ugh, I really wish this had been written in Go or Rust. Just
       | something that produces a single binary executable and doesn't
       | require you to install a runtime like Node.
        
         | iainmerrick wrote:
         | Looks like you could make a standalone executable with Bun
         | and/or Deno:
         | 
         | https://bun.sh/docs/bundler/executables
         | 
         | https://docs.deno.com/runtime/reference/cli/compile/
         | 
         | Note, I haven't checked that this actually works, although if
         | it's straightforward Node code without any weird extensions it
         | should work in Bun at least. I'd be curious to see how the exe
         | size compares to Go and Rust!
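         | 
         | For the curious, the incantations are roughly (untested;
         | index.js stands in for the CLI's bundled entry point):
         | 
         |       # Bun: bundle and embed the runtime into one executable
         |       bun build ./index.js --compile --outfile gemini-cli
         |       # Deno: same idea
         |       deno compile --allow-all --output gemini-cli ./index.js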
        
           | ZeroCool2u wrote:
           | Yeah, this just seems like a pain in the ass that could've
           | been easily avoided.
        
             | iainmerrick wrote:
             | From my perspective, I'm totally happy to use pnpm to
             | install and manage this. Even if it were a native tool, NPM
             | might be a decent distribution mechanism (see e.g.
             | esbuild).
             | 
             | Obviously everybody's requirements differ, but Node seems
             | like a pretty reasonable platform for this.
        
               | danielbln wrote:
               | Also throwing Volta (written in Rust, because of course
               | it is) into the ring. It's the uv of the Node world.
        
             | jstummbillig wrote:
             | It feels like you are creating a considerable fraction of
             | the pain by taking offense at simply using npm.
        
               | evilduck wrote:
               | As a longtime user of NPM but overall fan of JS and TS
               | and even its runtimes, NPM is a dumpster fire and forcing
               | end users to use it is brittle, lazy, and hostile. A
               | small set of dependencies will easily result in thousands
               | (if not tens of thousands) of transitive dependency files
               | being installed.
               | 
                | If you have to run endpoint protection, it will blast your
                | CPU with load, and it makes moving or even deleting that
                | folder needlessly slow. It also scales NPM's hosting burden
                | with (n * users) who must all install dependencies, instead
                | of (n * CI instances), which isn't very nice to our hosts.
                | Dealing with that once during your build phase and then
                | packaging that mess up is the nicer way to go about
                | distributing things that depend on NPM to end users.
        
               | frollogaston wrote:
               | I ran the npm install command in their readme, it took a
               | few seconds, then it worked. Subsequent runs don't have
               | to redownload stuff. Where is the painful part?
        
           | JimDabell wrote:
           | I was going to say the same thing, but they couldn't resist
           | turning the project into a mess of build scripts that hop
           | around all over the place manually executing node.
        
             | iainmerrick wrote:
             | Oh, man!
             | 
             | I guess it needs to start various processes for the MCP
             | servers and whatnot? Just spawning another Node is the easy
             | way to do that, but a bit annoying, yeah.
        
           | buildfocus wrote:
           | You can also do this natively with Node, since v18:
           | https://nodejs.org/api/single-executable-
           | applications.html#s...
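           | 
           | Rough sketch of that flow (hello.js is a placeholder for your
           | bundled script; see the docs above for the exact postject
           | flags):
           | 
           |       echo '{ "main": "hello.js", "output": "sea-prep.blob" }' \
           |         > sea-config.json
           |       node --experimental-sea-config sea-config.json
           |       cp "$(command -v node)" hello
           |       # then inject sea-prep.blob into the copied binary with
           |       # npx postject, using the sentinel fuse from the docs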
        
           | tln wrote:
           | A Bun "hello world" is 58Mb
           | 
           | Claude also requires npm, FWIW.
        
             | iainmerrick wrote:
             | What's a typical Go static binary size these days? Googling
             | around, I'm seeing wildly different answers -- I think a
             | lot of them are outdated.
        
               | MobiusHorizons wrote:
               | It depends a lot on what the executable does. I don't
               | know the hello world size, but anecdotally I remember
               | seeing several go binaries in the single digit megabyte
               | range. I know the code size is somewhat larger than one
               | might expect because go keeps some type info around for
               | reflection whether you use it or not.
        
               | iainmerrick wrote:
               | Ah, good point. I was just wondering about the fixed
               | overhead of the runtime system -- mainly the garbage
               | collector, I assume.
        
               | frollogaston wrote:
                | The Golang runtime is big enough by itself that it makes a
                | real difference for some WASM applications, and people are
                | using Rust instead purely because of that.
        
             | sitkack wrote:
              | That is a point, not a line. An extra 2MB of source is
              | probably a 60MB executable, as you are measuring the
              | runtime size. Two "hello worlds" are 116MB? Who measures
              | executables in Megabits?
        
             | quotemstr wrote:
             | > A Bun "hello world" is 58Mb
             | 
             | I've forgotten how to count that low.
        
         | fhinkel wrote:
         | Ask Gemini CLI to re-write itself in your preferred language
        
           | ZeroCool2u wrote:
           | Unironically, not a bad idea.
        
           | AJ007 wrote:
           | Contest between Claude Code and Gemini CLI, who rewrites it
           | faster/cheaper/better?
        
         | geodel wrote:
         | My thoughts exactly. Neither Rust nor Go, not even C/C++ which
         | I could accept if there were some native OS dependencies. Maybe
         | this is a hint on who could be its main audience.
        
           | ur-whale wrote:
           | > Maybe this is a hint on who could be its main audience.
           | 
           | Or a hint about the background of the folks who built the
           | tool.
        
         | qsort wrote:
         | Projects like this have to update frequently, having a
         | mechanism like npm or pip or whatever to automatically handle
         | that is probably easier. It's not like the program is doing
         | heavy lifting anyway, unless you're committing outright
         | programming felonies there shouldn't be any issues on modern
         | hardware.
         | 
         | It's the only argument I can think of, something like Go would
         | be goated for this use case in principle.
        
           | ZeroCool2u wrote:
           | I feel like Cargo or Go Modules can absolutely do the same
           | thing as the mess of build scripts they have in this repo
           | perfectly well and arguably better.
        
           | koakuma-chan wrote:
           | If you use Node.js your program is automatically too slow for
           | a CLI, no matter what it actually does.
        
             | frollogaston wrote:
             | So are you saying the Gemini CLI is too slow, and Rust
             | would remedy that?
        
           | masklinn wrote:
           | > having a mechanism like npm or pip or whatever to
           | automatically handle that is probably easier
           | 
           | Re-running `cargo install <crate>` will do that. Or install
           | `cargo-update`, then you can bulk update everything.
           | 
           | And it works hella better than using pip in a global python
           | install (you really want pipx/uvx if you're installing python
           | utilities globally).
           | 
           | IIRC you can install Go stuff with `go install`, dunno if you
           | can update via that tho.
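           | 
           | Concretely, something like (the tool names are placeholders):
           | 
           |       cargo install cargo-update
           |       cargo install-update -a        # update every cargo-installed binary
           |       go install example.com/cmd/tool@latest   # re-running picks up the latest
           |       uv tool upgrade some-python-cli          # or: pipx upgrade some-python-cli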
        
             | StochasticLi wrote:
             | This whole thread is a great example of the developer vs.
             | user convenience trade-off.
             | 
             | A single, pre-compiled binary is convenient for the user's
             | first install only.
        
               | masklinn wrote:
               | Unless you build self-updating in, which Google certainly
               | has experience in, in part to avoid clients lagging
               | behind. Because aside from being a hindrance (refusing to
               | start and telling the client to update) there's no way
               | you can actually force them to run an upgrade command.
        
               | MobiusHorizons wrote:
               | How so? Doesn't it also make updates pretty easy? Have
               | the precompiled binary know how to download the new
               | version. Sure there are considerations for backing up the
               | old version, but it's not much work, and frees you up
               | from being tied to one specific ecosystem
        
               | JimDabell wrote:
               | I don't think that's true. For instance, uv is a single,
               | pre-compiled binary, and I can just run `uv self update`
               | to update it to the latest version.
        
             | re-thc wrote:
             | > Re-running `cargo install <crate>` will do that. Or
             | install `cargo-update`, then you can bulk update
             | everything.
             | 
             | How many developers have npm installed vs cargo? Many won't
             | even know what cargo is.
        
               | riskable wrote:
               | Everyone in the cult knows what cargo is.
        
           | mpeg wrote:
           | You'd think that, but a globally installed npm package is
           | annoying to update, as you have to do it manually and I very
           | rarely need to update other npm global packages so at least
           | personally I always forget to do it.
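           | 
           | (For this one it's a single command, assuming the package name
           | from the repo README:
           | 
           |       npm install -g @google/gemini-cli@latest
           | 
           | but yeah, you still have to remember to run it.)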
        
           | frollogaston wrote:
            | I don't think that's the main reason. Just installed this and
            | peeked in node_modules. There are a lot of random deps,
           | probably for the various local capabilities, and I'll bet it
           | was easier to find those libs in the Node ecosystem than
           | elsewhere.
           | 
           | react-reconciler caught my eye. The Gemini CLI told me "The
           | Gemini CLI uses ink to create its interactive command-line
           | interface, and ink in turn uses react-reconciler to render
           | React components to the terminal"
           | 
           | That and opentelemetry, whatever the heck that is
        
           | js2 wrote:
           | [delayed]
        
         | i_love_retros wrote:
         | This isn't about quality products, it's about being able to say
         | you have a CLI tool because the other ai companies have one
        
           | clbrmbr wrote:
           | Fast following is a reasonable strategy. Anthropic provided
           | the existence proof. It's an immensely useful form factor for
           | AI.
        
             | closewith wrote:
             | Yeah, it would be absurd to avoid a course of action proven
             | productive by a competitor.
        
             | mike_hearn wrote:
             | The question is whether what makes it useful is actually
             | being in the terminal (limited, glitchy, awkward
             | interaction) or whether it's being able to run next to
             | files on a remote system. I suspect the latter.
        
           | behnamoh wrote:
           | > This isn't about quality products, it's about being able to
           | say you have a CLI tool because the other ai companies have
           | one
           | 
           | Anthropic's Claude Code is also installed using npm/npx.
        
           | rs186 wrote:
            | Eh, I can't see how your comment is relevant to the parent
            | thread. Creating a CLI in Go is barely more complicated than in
            | JS. Rust, probably, but people aren't asking for that.
        
           | frollogaston wrote:
           | Writing it in Golang or Rust doesn't really make it better
        
         | buildfocus wrote:
         | Node can also produce a single binary executable:
         | https://nodejs.org/api/single-executable-applications.html
        
         | ur-whale wrote:
         | > and doesn't require you to install a runtime like Node.
         | 
         | My exact same reaction when I read the install notes.
         | 
         | Even python would have been better.
         | 
         | Having to install that Javascript cancer on my laptop just to
         | be able to try this, is a huge no.
        
         | jart wrote:
         | See gemmafile which gives you an airgapped version of gemini
         | (which google calls gemma) that runs locally in a single file
         | without any dependencies.
         | 
         | https://huggingface.co/jartine/gemma-2-27b-it-llamafile
        
         | quotemstr wrote:
         | Language choice is orthogonal to distribution strategy. You
         | _can_ make single-file builds of JavaScript (or Python or
          | anything) programs! It's just a matter of packaging, and there
         | are packaging solutions for both Bun and Node. Don't blame the
         | technology for people choosing not to use it.
        
         | corysama wrote:
         | Meanwhile, https://analyticsindiamag.com/global-tech/openai-is-
         | ditching...
         | 
         | I really don't mind either way. My extremely limited experience
         | with Node indicates they have installation, packaging and
         | isolation polished very well.
        
           | frollogaston wrote:
           | Node and Rust both did packaging well, I think Golang too.
           | It's a disaster in Python.
        
       | qudat wrote:
       | Why would someone use this over aider?
        
         | adamtaylor_13 wrote:
          | Disclaimer: I haven't used aider in probably a year. I found
         | Aider to require much more understanding to use properly.
         | Claude code _just works_, more or less out of the box. Assuming
         | the Gemini team took cues from CC--I'm guessing it's more user-
         | friendly than Aider.
         | 
         | Again, I haven't used aider in a while so perhaps that's not
         | the case.
        
         | bananapub wrote:
         | Claude Code and OpenAI Codex and presumably this are much much
         | more aggressive about generating work for themselves than Aider
         | is.
         | 
         | For complicated changes Aider is much more likely to stop and
         | need help, whereas Claude Code will just go and go and end up
         | with something.
         | 
         | Whether that's worth the different economic model is up to you
         | and your style and what you're working on.
        
       | fhinkel wrote:
       | I played around with it to automate GitHub tasks for me (tagging
       | and sorting PRs and stuff). Sometimes it needs a little push to
       | use the API instead of web search, but then it even installs the
       | right tools (like gh) for you. https://youtu.be/LP1FtpIEan4
        
       | cperry wrote:
       | Hi - I work on this. Uptake is a steep curve right now, spare a
       | thought for the TPUs today.
       | 
       | Appreciate all the takes so far, the team is reading this thread
       | for feedback. Feel free to pile on with bugs or feature requests
       | we'll all be reading.
        
         | elashri wrote:
         | Hi, Thanks for this work.
         | 
         | currently it seems these are the CLI tools available. Is it
         | possible to extend or actually disable some of these tools (for
         | various reasons)?
         | 
         | > Available Gemini CLI tools:
         |   - ReadFolder
         |   - ReadFile
         |   - SearchText
         |   - FindFiles
         |   - Edit
         |   - WriteFile
         |   - WebFetch
         |   - ReadManyFiles
         |   - Shell
         |   - Save Memory
         |   - GoogleSearch
        
           | _ryanjsalva wrote:
           | I also work on the product. You can extend the tools with
           | MCP. https://github.com/google-gemini/gemini-
           | cli/blob/main/docs/t...
        
             | silverlake wrote:
             | I tried to get Gemini CLI to update itself using the MCP
             | settings for Claude. It went off the rails. I then fed it
              | the link you provided and it correctly updates its
             | settings file. You might mention the settings.json file in
             | the README.
        
             | ericb wrote:
             | Feedback: A command to add MCP servers like claude code
             | offers would be handy.
        
               | _ryanjsalva wrote:
               | 100% - It's on our list!
        
           | cperry wrote:
           | I had to ask Gemini CLI to remind myself ;) but you can add
           | this into settings.json:
           | 
           | { "excludeTools": ["run_shell_command", "write_file"] }
           | 
           | but if you ask Gemini CLI to do this it'll guide you!
        
           | bdmorgan wrote:
           | I also work on the product :-)
           | 
           | You can also extend with the Extensions feature -
           | https://github.com/google-gemini/gemini-
           | cli/blob/main/docs/e...
        
           | SafeDusk wrote:
           | Pretty close to what I discovered is essential in
           | https://github.com/aperoc/toolkami, 7 tools will cover
           | majority of the use cases.
        
         | ebiester wrote:
         | So, as a member of an organization that pays for Google
         | Workspace with Gemini, I get the message `GOOGLE_CLOUD_PROJECT
         | environment variable not found. Add that to your .env and try
         | again, no reload needed!`
         | 
         | At the very least, we need better documentation on how to get
         | that environment variable, as we are not on GCP and this is not
         | immediately obvious how to do so. At the worst, it means that
         | your users paying for Gemini don't have access to this while
         | your general Google users do.
        
           | cperry wrote:
           | https://github.com/google-gemini/gemini-
           | cli/blob/main/docs/c...
        
             | ebiester wrote:
             | While I get my organization's IT department involved, I do
             | wonder why this is built in a way that requires more work
             | for people already paying google money than a free user.
        
               | rtaylorgarlock wrote:
               | @ebiester, my wife's maiden name is E. Biester. I did a
               | serious double take. Got you on X :)
        
             | Maxious wrote:
             | I'd echo that having to get the IT section involved to
             | create a google cloud project is not great UX when I have
             | access to NotebookLM Pro and Gemini for Workspace already.
             | 
             | Also this doco says GOOGLE_CLOUD_PROJECT_ID but the actual
             | tool wants GOOGLE_CLOUD_PROJECT
        
               | cperry wrote:
               | PR in flight to update docs (if not already in)
        
           | thimabi wrote:
           | I believe Workspace users have to pay a separate subscription
           | to use the Gemini CLI, the so-called "Gemini for Google
           | Cloud", which starts at an additional 19 dollars per month
           | [^1]. If that's really the case, it's very disappointing to
           | me. I expected access to Gemini CLI to be included in the
           | normal Workspace subscription.
           | 
           | [^1]: https://console.cloud.google.com/marketplace/product/go
           | ogle/...
        
             | cperry wrote:
             | [edit] all lies - I got my wires crossed, free tier for
             | Workspace isn't yet supported. sorry. you need to set the
             | project and pay. this is WIP.
             | 
             | Workspace users [edit: cperry was wrong] can get the free
             | tier as well, just choose "More" and "Google for Work" in
             | the login flow.
             | 
             | It has been a struggle to get a simple flow that works for
             | all users, happy to hear suggestions!
        
               | rtaylorgarlock wrote:
               | I can imagine. Y'all didn't start simple like some of
               | your competitors; 'intrapraneurial' efforts in existing
               | contexts like yours come with well-documented struggles.
               | Good work!
        
               | Workaccount2 wrote:
               | Just get a pop-up or something in place to make it dead
               | simple, because workspace users are probably the core
               | users of the product.
        
               | thimabi wrote:
               | Thanks for your clarification. I've been able to set up
               | Gemini CLI with my Workspace account.
               | 
               | Just a heads-up: your docs about authentication on Github
               | say to place a GOOGLE_CLOUD_PROJECT_ID as an environment
               | variable. However, what the Gemini CLI is actually
               | looking for, from what I can tell, is a
               | GOOGLE_CLOUD_PROJECT environment variable with the name
               | of a project (rather than its ID). You might want to fix
               | that discrepancy between code and docs, because it might
               | confuse other users as well.
               | 
               | I don't know what constraints made you all require a
               | project ID or name to use the Gemini CLI with Workspace
               | accounts. However, it would be far easier if this
               | requirement were eliminated.
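               | 
               | For anyone else hitting this, what worked for me was just
               | exporting the variable before launching the CLI (the value
               | is a placeholder for your own project name):
               | 
               |       export GOOGLE_CLOUD_PROJECT="your-project-name"
               | 
               | or putting the same line, minus "export", into the .env
               | file the error message mentions.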
        
               | cperry wrote:
               | sorry, I was wrong about free tier - I've edited above.
               | this is WIP.
               | 
               | noted on documentation, there's a PR in flight on this.
               | also found some confusion around gmail users who are part
               | of the developer program hitting issues.
        
               | thimabi wrote:
               | > free tier for Workspace isn't yet supported. sorry. you
               | need to set the project and pay.
               | 
               | Well, I've just set up Gemini CLI with a Workspace
               | account project in the free tier, and it works apparently
               | for free. Can you explain whether billing for that has
               | simply not been configured yet, or where exactly billing
               | details can be found?
               | 
               | For reference, I've been using this panel to keep track
               | of my usage in the free tier of the Gemini API, and it
               | has not been counting Gemini CLI usage thus far: https://
               | console.cloud.google.com/apis/api/generativelanguage...
               | 
               | Unfortunately all of that is pretty confusing, so I'll
               | hold off using Gemini CLI until everything has been
               | clarified.
        
               | bachmeier wrote:
               | > noted on documentation, there's a PR in flight on this.
               | also found some confusion around gmail users who are part
               | of the developer program hitting issues.
               | 
               | Maybe you have access to an AI solution for this.
        
             | 827a wrote:
              | Having played with the gemini-cli here for 30 minutes, I
              | have no idea, but best guess: I believe that if you auth
             | with a Workspace account it routes all the requests through
             | the GCP Vertex API, which is why it needs a
             | GOOGLE_CLOUD_PROJECT env set, and that also means usage-
             | based billing. I don't think it will leverage any
             | subscriptions the workspace account might have (are there
             | still gemini subscriptions for workspace? I have no idea. I
             | thought they just raised everyone's bill and bundled it in
             | by default. What's Gemini Code Assist Standard or
             | Enterprise? I have no idea).
        
           | fooey wrote:
            | Workspace accounts always seem like an unsupported mess at
            | Google, which is a very strange strategy.
        
         | carraes wrote:
         | It would be cool if this worked with my Google AI Pro sub.
        
           | cperry wrote:
           | working on it
        
         | nojito wrote:
         | How often did you use gemini-cli to build on gemini-cli?
        
           | _ryanjsalva wrote:
           | We started using Gemini CLI to build itself after about week
           | two. If I had to guess, I'd say better than 80% of the code
           | was written with Gemini CLI. Honestly, once we started using
           | the CLI, we started experimenting a lot more and building
           | waaaaay faster.
        
           | bdmorgan wrote:
           | 100% of the time
        
         | mkagenius wrote:
         | Hi - I integrated Apple Container on M1 to run[1] the code
         | generated by Gemini CLI. It works great!
         | 
         | 1. CodeRunner -
         | https://github.com/BandarLabs/coderunner/tree/main?tab=readm...
        
           | cperry wrote:
           | <3 amazing
        
         | javier123454321 wrote:
         | One piece of feedback: please support neovim in addition to vim,
         | or have a way to customize the editor beyond your list.
        
           | newnimzo wrote:
           | someone has already sent out a PR for this!
           | https://github.com/google-gemini/gemini-cli/pull/1448
        
             | cperry wrote:
             | bless them
        
         | bsenftner wrote:
         | Thank you for your work on this. I spent the afternoon
         | yesterday trying to convert an algorithm written in ruby (which
         | I do not know) to vanilla JavaScript. It was a comedy of
         | failing nonsense as I tried to get gpt-4.1 to help, and it just
         | led me down pointless rabbit holes. I installed Gemini CLI out
         | of curiosity, pointed it at the Ruby project, and it did the
         | conversion from a single request, total time from "think I'll
         | try this" to it working was 5 minutes. Impressed.
        
           | cperry wrote:
           | <3 love to hear it!
        
         | streb-lo wrote:
         | Is there a reason all workspace accounts need a project ID? We
         | pay for gemini pro for our workspace accounts but we don't use
         | GCP or have a project ID otherwise.
        
           | thimabi wrote:
           | The reason is that billing is separate, via the paid tier of
           | the API. Just a few minutes ago, I was able to test Gemini
           | CLI using a Workspace account after setting up a project in
           | the free tier of the API. However, that seems to have been a
           | bug on their end, because I now get 403 errors (Forbidden)
           | with that configuration. The remaining options are either to
           | set up billing for the API or use a non-Workspace Google
           | account.
        
           | cperry wrote:
           | the short answer is b/c one of our dependencies requires it
           | and hasn't resolved it.
        
         | danavar wrote:
         | Is there a way to instantly, quickly prompt it in the terminal,
         | without loading the full UI? Just to get a short response
         | without filling the terminal page.
         | 
         | like to just get a short response - for simple things like
         | "what's a nm and grep command to find this symbol in these 3
         | folders". I use Gemini a lot for this type of thing already.
         | 
         | Or would that have to be a custom prompt I write?
        
           | peterldowns wrote:
           | I use `mods` for this https://github.com/charmbracelet/mods
           | 
           | other people use simon willison's `llm` tool
           | https://github.com/simonw/llm
           | 
           | Both allow you to switch between models, send short prompts
           | from a CLI, optionally attach some context. I prefer mods
           | because it's an easier install and I never need to worry
           | about Python envs and other insanity.
        
             | indigodaddy wrote:
             | Didn't know about mods, looks awesome.
        
           | cperry wrote:
           | -p is your friend
        
           | hiAndrewQuinn wrote:
           | gemini --prompt "Hello"
        
           | irthomasthomas wrote:
            | If you uv install llm, then grab my shelllm scripts
            | (github.com/irthomasthomas/shelllm) and source them in your
            | terminal, you can use premade prompt functions like shelp
           | "what's a nm and grep command to find this symbol in these 3
           | folders" -m gemini-pro
           | 
           | There's also wrappers that place the command directly in your
           | terminal prompt if you run shelp-c
        
         | conception wrote:
         | Google Gemini, Google Gemini Ultra, AI Studio, Vertex AI,
         | NotebookLM, Jules.
         | 
         | All different products doing the sameish thing. I don't know
         | where to send users to do anything. They are all licensed
         | differently. Bonkers town.
        
         | GenerWork wrote:
         | I'm just a hobbyist, but I keep getting the error "The code
         | change produced by Gemini cannot be automatically applied. You
         | can manually apply the change or ask Gemini to try again". I
         | assume this is because the service is being slammed?
         | 
         | Edit: I should mention that I'm accessing this through Gemini
         | Code Assist, so this may be something out of your wheelhouse.
        
           | cperry wrote:
           | odd, haven't seen that one - you might file an issue
           | https://github.com/google-gemini/gemini-cli/issues
           | 
           | I don't think that's capacity, you should see error codes.
        
         | taupi wrote:
         | Right now authentication doesn't work if you're working on a
         | remote machine and try to authenticate with Google, FYI. You
         | need an alternate auth flow that gives the user a link and lets
         | them paste a key in (this is how Claude Code does it).
        
           | cperry wrote:
           | correct, sorry, known issue
        
         | ciwchris wrote:
         | Using the Gemini CLI the first thing I tried to do was "Create
         | GEMINI.md files to customize your interactions with Gemini."
         | The command ran for about a minute before receiving a too many
         | requests error.
         | 
         | > You exceeded your current quota, please check your plan and
         | billing details. For more information on this error, head to:
         | https://ai.google.dev/gemini-api/docs/rate-limits.
         | 
         | Discouraging
        
           | fhinkel wrote:
           | Super weird! I've been using it the last week, and never hit
           | the quota limit for free users. We're having some capacity
           | issues right now, but that should not affect the quota. Would
           | love it if you can try tomorrow or so again!
        
             | jrbuhl wrote:
             | It's happening to me with API Key usage. I assume there are
             | no Terms of Use protections on our data unless we access
             | Gemini CLI in a paid manner?
             | 
             |     [API Error: {"error":{"message":"{\n  \"error\": {\n
             |     \"code\": 429,\n    \"message\": \"Resource has been
             |     exhausted (e.g. check quota).\",\n    \"status\":
             |     \"RESOURCE_EXHAUSTED\"\n  }\n}\n","code":429,
             |     "status":"Too Many Requests"}}]
             | 
             |     Please wait and try again later. To increase your limits,
             |     request a quota increase through AI Studio, or switch to
             |     another /auth method
             | 
             | However, in the Google cloud console I don't see any of the
             | quotas going above their default limits.
        
               | cryptoz wrote:
               | Yeah this exact thing is happening to me also. Minutes of
               | runtime and only errors. I guess I'll try again later? I
               | have billing up and I'm Tier 1. Wouldn't expect to hit
               | limits like this on the first prompt.
        
               | klipklop wrote:
               | Same here. I wish API users got priority over free Google
               | account users...Guess I will wait until ~5pm when people
               | go home for the day to try it.
        
         | hiAndrewQuinn wrote:
         | Feature request! :)
         | 
         | I'm a Gemini Pro subscriber and I would love to be able to use
         | my web-based chat resource limits with, or in addition to, what
         | is offered here. I have plenty of scripts that are essentially
         | "Weave together a complex prompt I can send to Gemini Flash to
         | instantly get the answer I'm looking for and xclip it to my
         | clipboard", and this would finally let me close the last step
         | in those scripts.
         | 
         | Love what I'm seeing so far!
        
           | cperry wrote:
           | working on it!
        
         | imjonse wrote:
         | Hi. It is unclear from the README whether the free limits apply
         | also when there's an API key found in the environment - not
         | explicitly set for this tool - and there is no login
         | requirement.
        
           | cperry wrote:
           | if you explicitly select the sign in with google you'll get
           | the free tier - it won't use your API key.
        
         | Freedom2 wrote:
         | Pointed it at a project directory and asked it to find and fix
         | an intentionally placed bug without referencing any filenames.
         | It seemed to struggle finding any file or constructing a
         | context about the project unless specifically asked. FWIW,
         | Claude Code tries to build an 'understanding' of the codebase
         | when given the same prompt. For example, it struggled when I
         | asked to "fix the modal logic" but nothing was specifically
         | called a modal.
         | 
         | Is the recommendation to specifically ask "analyze the
         | codebase" here?
        
         | yomismoaqui wrote:
         | I have been evaluating other tools like Amp (from Sourcegraph)
         | and when trying Gemini CLI in VS Code I found some things to
         | improve:
         | 
         | - On a new chat I have to re-approve things like executing "go
         | mod tidy", "git", write files... I need to create a new chat
         | for each feature (maybe an option to clear the current chat in
         | VS Code would work).
         | 
         | - I found some problems when adding a new endpoint to an example
         | Go REST server I was trying it on: it just deleted existing
         | endpoints in the file. Same with tests, it deleted existing
         | tests when asked to add a test. For comparison, I didn't find
         | these problems when evaluating Amp (uses Claude 4).
         | 
         | Overall it works well and I hope you continue polishing it, good
         | job!!
        
           | cperry wrote:
           | thank you kind stranger!
        
         | sandGorgon wrote:
         | i have a Google AI Pro subscription - what kind of
         | credits/usage/allowance do i get towards gemini cli ?
        
           | cperry wrote:
           | not connected yet
        
         | kingsleyopara wrote:
         | Thanks so much for this! I'd really appreciate a more consumer
         | oriented subscription offering, similar to Claude Max, that
         | combines Gemini CLI (with IP compliance) and the Gemini app
         | (extra points for API access too!).
        
           | cperry wrote:
           | working on it
        
           | upcoming-sesame wrote:
           | this seems to be the number one topic in this thread
        
         | Xmd5a wrote:
         | Hey the interface on YouTube loads super slowly for me. The
         | change appeared a few months ago. I'm not talking about the
         | video streams, but the ajax loading of the UI. Whether it's
         | opening a new youtube tab or navigating between videos within
         | youtube, it takes forever. Chrome/Safari -> same deal, 30-second
         | delays are what I observe. My macbook pro is 10 years
         | old, the problem doesn't appear on more recent hardware, but
         | still youtube shouldn't be the slowest website to load on my
         | machine. I can load spotify.com just fine in about 5 seconds.
        
         | jadbox wrote:
         | Does it have LSP (language server) support? How should I think
         | of this as different from Aider?
        
         | nprateem wrote:
         | Please, for the love of God, stop your models always answering
         | with essays or littering code with tutorial style comments.
         | Almost every task devolves into "now get rid of the comments".
         | It seems impossible to prevent this.
         | 
         | And thinking is stupid. "Show me how to generate a random
         | number in python"... 15s later you get an answer.
        
           | msgodel wrote:
           | They have to do that, it's how they think. If they were
           | trained not to do that they'd produce lower quality code.
        
           | mpalmer wrote:
           | Take some time to understand how the technology works, and
           | how you can configure it yourself when it comes to thinking
           | budget. None of these problems sound familiar to me as a
           | frequent user of LLMs.
        
         | kridsdale3 wrote:
         | Congrats on your success. May you all be promoted and may your
         | microkitchens be stocked.
        
       | meetpateltech wrote:
       | Key highlights from blog post and GitHub repo:
       | 
       | - Open-source (Apache 2.0, same as OpenAI Codex)
       | 
       | - 1M token context window
       | 
       | - Free tier: 60 requests per minute and 1,000 requests per day
       | (requires Google account authentication)
       | 
       | - Higher limits via Gemini API or Vertex AI
       | 
       | - Google Search grounding support
       | 
       | - Plugin and script support (MCP servers)
       | 
       | - GEMINI.md file for memory/instructions
       | 
       | - VS Code integration (Gemini Code Assist)
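       | 
       | Quickstart from the repo, assuming the published npm package name:
       | 
       |       npm install -g @google/gemini-cli
       |       gemini   # pick "Login with Google" for the free tier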
        
       | i_love_retros wrote:
       | Boring. Any non llm news?
        
       | ape4 wrote:
       | In the screenshot it's asked about Gemini CLI and it says its
       | going to search the web and read the README.md - what ever did we
       | do before AI /s
        
       | titusblair wrote:
       | Nice work excited to use it!
        
       | andrewstuart wrote:
       | I really wish these AI companies would STOP innovating until they
       | work out how to let us "download all files" on the chat page.
       | 
       | We are now three years into the AI revolution and they are still
       | forcing us to copy and paste and click click crazy to get the
       | damn files out.
       | 
       | STOP innovating. STOP the features.
       | 
       | Form a team of 500 of your best developers. Allocate a year and a
       | billion dollar budget.
       | 
       | Get all those Ai super scientists into the job.
       | 
       | See if you can work out "download all files". A problem on the
       | scale of AGI or Dark Matter, but one day google or OpenAI will
       | crack the problem.
        
         | nojito wrote:
         | This edits the files directly. Using a chat hasn't been an
         | optimal workflow for a while now.
        
         | raincole wrote:
         | What does this even mean lol. "Download all files"...?
        
         | Workaccount2 wrote:
         | It seems you are still using the web interface.
         | 
         | When you hop over to platforms that use the API, the files get
         | written/edited in situ. No copy/pasting. No hunting for where
         | to insert edited code.
         | 
         | Trust me it's a total game changer to switch. I spent so much
         | time copy/pasting before moving over.
        
         | indigodaddy wrote:
         | https://github.com/robertpiosik/CodeWebChat
        
       | matltc wrote:
       | Sweet, I love Claude and was raring to try out their CLI that
       | dropped a few days ago, but don't have a sub. This looks to be
       | free
        
       | lazarie wrote:
       | "Failed to login. Ensure your Google account is not a Workspace
       | account."
       | 
       | Is your vision for Gemini CLI to be geared only towards non-
       | commercial users? I have had a Workspace account since GSuite and
       | have been constantly punished for it by Google offerings. All I
       | wanted was Gmail with a custom domain, and I've lost all my
       | YouTube data, all my Fitbit data, I can't select different
       | versions of some of your subscriptions (seemingly completely
       | random across your services from an end-user perspective), and now
       | as a Workspace account I can't use Gemini CLI for my work, which
       | is software development. This approach strikes me as actively
       | hostile towards your loyal paying users...
        
         | GlebOt wrote:
         | Have you checked the https://github.com/google-gemini/gemini-
         | cli/blob/main/docs/c... ? It has a section for workspace
         | accounts.
        
           | Aeolun wrote:
            | It shouldn't be that hard. Logically it should just be: sign
            | in and go.
        
         | raincole wrote:
         | It seems that you need to set up an env variable called
         | GOOGLE_CLOUD_PROJECT https://github.com/google-gemini/gemini-
         | cli/issues/1434
         | 
         | ... and other stuff.
        
           | LouisvilleGeek wrote:
            | The barrier to using this project is maddening. I went through
            | all of the setup instructions and I'm still getting the
            | workspace error for a personal Gmail account.
           | 
           | Googlers, we should not have to do all of this setup and prep
           | work for a single account. Enterprise I get, but for a single
           | user? This is insufferable.
        
         | zxspectrum1982 wrote:
         | Same here.
        
       | jsnell wrote:
       | What's up with printing lame jokes every few seconds? The last
       | thing I want from a tool like this is my eye to be drawn to the
       | window all the time as if something had changed and needs my
       | action. (Having a spinner is fine, having changing variable
       | length text isn't.)
        
         | asadm wrote:
         | You can disable it from the accessibility settings. It does show
         | model thinking instead of a joke when that's available.
        
           | jsnell wrote:
           | Thanks, but where are those accessibility settings? /help
           | shows nothing related to settings other than auth and theme,
           | there's no related flags, and there's a
           | ~/.gemini/settings.json that contains just the auth type.
           | 
           | No mention of accessibility in https://github.com/google-
           | gemini/gemini-cli/blob/0915bf7d677... either
        
       | phillipcarter wrote:
       | An aside, but with Claude Code and now Gemini instrumenting
       | operations with OpenTelemetry by default, this is very cool.
        
       | Jayakumark wrote:
       | Are any CLI interactions used to train the model or not?
        
         | imiric wrote:
         | Ha. It would be naive to think that a CLI tool from an adtech
         | giant won't exploit as much data as it can collect.
        
           | thimabi wrote:
           | You raise an interesting topic. Right now, when we think
           | about privacy in the AI space, most of the discussion hinges
           | on using our data for training purposes or not. That being
           | said, I figure it won't be long before AI companies use the
           | data they collect to personalize ads as well.
        
       | rbren wrote:
       | If you're looking for a fully open source, LLM-agnostic
       | alternative to Claude Code and Gemini CLI, check out OpenHands:
       | https://docs.all-hands.dev/usage/how-to/cli-mode
        
         | joelthelion wrote:
         | Or aider. In any case, while top llms will likely remain
         | proprietary for some time, there is no reason for these tools
         | to be closed source or tied to a particular llm vendor.
        
         | spiffytech wrote:
         | I've had a good experience with https://kilocode.ai
         | 
         | It integrates with VS Code, which suits my workflow better. And
         | buying credits through them (at cost) means I can use any model
         | I want without juggling top-ups across several different
         | billing profiles.
        
         | lostmsu wrote:
         | Can it use Claude sub or Gemini free tier the same way Gemini
         | CLI does?
        
         | lostmsu wrote:
         | How does its CLI mode compare to Claude Code and Gemini CLI?
        
         | rhodysurf wrote:
         | opencode.ai
        
       | mekpro wrote:
       | Just refactored 1000 lines of Claude Code-generated code down to
       | 500 lines with Gemini 2.5 Pro! Very impressed by the overall
       | agentic experience and model performance.
        
       | barbazoo wrote:
       | > To use Gemini CLI free-of-charge, simply login with a personal
       | Google account to get a free Gemini Code Assist license. That
       | free license gets you access to Gemini 2.5 Pro and its massive 1
       | million token context window. To ensure you rarely, if ever, hit
       | a limit during this preview, we offer the industry's largest
       | allowance: 60 model requests per minute and 1,000 requests per
       | day at no charge.
       | 
       | If it sounds too good to be true, it probably is. What's the
       | catch? How/why is this free?
        
         | raincole wrote:
         | Because Google is rich and they'd like to get you hooked. Just
         | like how ChatGPT has a free tier.
         | 
         | Also they can throttle the service whenever they feel it's too
         | costly.
        
         | leumon wrote:
         | My guess: So that they can get more training data to improve
         | their models which will eventually be subscription only.
        
         | jabroni_salad wrote:
         | They recently discontinued the main Gemini free tier which
         | offered similar limits. I would say expect this to disappear
         | when it hits GA or if it gets a lot of targeted abuse.
        
         | dawnofdusk wrote:
         | https://en.wikipedia.org/wiki/First-mover_advantage
        
       | iaresee wrote:
       | Whoa. Who at Google thought providing this as an example of how
       | to test your API key was a good idea?
       | 
       | https://imgur.com/ZIZkLU7
       | 
       | This is shown at the top of the screen in
       | https://aistudio.google.com/apikey as the suggested quick start
       | for testing your API key out.
       | 
       | Not a great look. I let our GCloud TAM know. But still.
        
         | asadm wrote:
         | What's wrong here?
        
           | iaresee wrote:
           | Don't put your API keys as parameters in your URL. Great way
           | to have them land in server logs, your shell history, etc.
           | You're trusting that everyone with decryption capabilities is
           | doing logging and inspection correctly, which you shouldn't.
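           | 
           | The same quick test works with the key in a header instead of
           | the query string, e.g. (untested; model name is just an
           | example):
           | 
           |       curl -H "x-goog-api-key: $GEMINI_API_KEY" \
           |         -H "Content-Type: application/json" \
           |         -d '{"contents":[{"parts":[{"text":"Say hello"}]}]}' \
           |         "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-pro:generateContent"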
        
         | nickysielicki wrote:
         | it's wrapped in TLS, is ok.
        
       | Mond_ wrote:
       | Oh hey, afaik all of this LLM traffic goes through my service!
       | 
       | Set up not too long ago, and afaik pretty load-bearing for this.
       | Feels great, just don't ask me any product-level questions. I'm
       | not part of the Gemini CLI team, so I'll try to keep my mouth
       | shut.
       | 
       | Not going to lie, I'm pretty anxious this will fall over as
       | traffic keeps climbing up and up.
        
         | asadm wrote:
         | do you mean the genai endpoints?
        
         | cperry wrote:
         | thank you for your service. I too have been anxious all day :)
        
       | acedTrex wrote:
       | Everyone's writing the same thing now lol, it's plainly obvious
       | this is the workflow best suited to LLMs.
        
       | albertzeyer wrote:
       | The API can be used both via your normal Google account, or via
       | API key?
       | 
       | Because it says in the README:
       | 
       | > Authenticate: When prompted, sign in with your personal Google
       | account. This will grant you up to 60 model requests per minute
       | and 1,000 model requests per day using Gemini 2.5 Pro.
       | 
       | > For advanced use or increased limits: If you need to use a
       | specific model or require a higher request capacity, you can use
       | an API key: ...
       | 
       | When I have the Google AI Pro subscription in my Google account,
       | and I use the personal Google account for authentication here,
       | will I also have more requests per day then?
       | 
       | I'm currently wondering what makes more sense for me (not for CLI
       | in particular, but for Gemini in general): To use the Google AI
       | Pro subscription, or to use an API key. But I would also want to
       | use the API maybe at some point. I thought the API requires an
       | API key, but here it seems also the normal Google account can be
       | used?
        
         | bdmorgan wrote:
         | It's firmly on the radar - we will have a great answer for this
         | soon.
        
       | rtaylorgarlock wrote:
       | I spent 8k tokens after giving the interface 'cd ../<other-dir>',
       | resulting in Gemini explaining that it can't see the other dir
       | outside of its current scope, but with a recommendation to ls
       | the files in that dir. Which then reminded me of my core belief
       | that we will always
       | have to be above these tools in order to understand and execute.
       | I wonder if/when I'll be wrong.
        
         | 0x457 wrote:
         | Well, I'm happy about sandboxing. Idk what your issue is, the 8k
         | tokens?
        
       | b0a04gl wrote:
       | been testing edge cases - is the 1M context actually flat or does
       | token position, structure or semantic grouping change how
       | attention gets distributed? when I feed in 20 files, sometimes
       | mid-position content gets pulled harder than stuff at the end.
       | feels like it's not just order, but something deeper - ig the
       | model's building a memory map with internal weighting. if there's
       | any semantic chunking or attention-aware preprocessing happening
       | before inference, then layout starts mattering more than size.
       | prompt design becomes spatial. any internal tooling to trace
       | which segments are influencing output?
        
       | b0a04gl wrote:
       | why'd the release post vanish this morning and then show up again
       | 8 hours later like nothing happened. some infra panic or last-
       | minute model weirdness. was midway embedding my whole notes dir
       | when the repo 404'd and I thought y'all pulled a firebase
       | moment.. what's the real story?
        
       | revskill wrote:
       | Nice, at least I can get rid of the broken Warp CLI, which
       | prevents offline usage with their automatic cloud AI feature
       | enabled.
        
       | zxspectrum1982 wrote:
       | Does Gemini CLI require API access?
        
       | frereubu wrote:
       | I have access to Gemini through Workspace, but despite spending
       | quite a while trying to find out how, I cannot figure out how to
       | use that in Copilot. All I seem to be able to find is information
       | on the personal account or enterprise tiers, neither of which I
       | have.
        
       | wohoef wrote:
       | A few days ago I tested Claude Code by completely vibe coding a
       | simple stock tracker web app in streamlit python. It worked
       | incredibly well, until it didn't. Seems like there is a critical
       | project size where it just can't fix bugs anymore. Just tried
       | this with Gemini CLI and the critical project size it works well
       | for seems to be quite a bit bigger. Where claude code started to
       | get lost, I simply told Gemini CLI to "Analyze the codebase and
       | fix all bugs". And after telling it to fix a few more bugs, the
       | application simply works.
       | 
       | We really are living in the future
        
         | AJ007 wrote:
         | Current best practice for Claude Code is to have heavy lifting
         | done by Gemini Pro 2.5 or o3/o3pro. There are ways to do this
         | pretty seamlessly now because of MCP support (see Repo Prompt
         | as an example.) Sometimes you can also just use Claude but it
         | requires iterations of planning, integration while logging
         | everything, then repeat.
         | 
          | I haven't looked at this Gemini CLI thing yet, but if it's open
          | source it seems like any model can be plugged in here?
         | 
         | I can see a pathway where LLMs are commodities. Every big tech
         | company right now both wants _their_ LLM to be the winner and
         | the others to die, but they also really, really would prefer a
         | commodity world to one where a competitor is the winner.
         | 
         | If the future use looks more like CLI agents, I'm not sure how
         | some fancy UI wrapper is going to result in a winner take all.
         | OpenAI is winning right now with user count by pure brand name
         | with ChatGPT, but ChatGPT clearly is an inferior UI for real
         | work.
        
           | sysmax wrote:
           | I think, there are different niches. AI works extremely well
           | for Web prototyping because a lot of that work is
           | superficial. Back in the 90s we had Delphi where you could
           | make GUI applications with a few clicks as opposed to writing
           | tons of things by hand. The only reason we don't have that
           | for Web is the decentralized nature of it: every framework
           | vendor has their own vision and their own plan for future
           | updates, so a lot of the work is figuring out how to marry
           | the latest version of component X with the specific version
           | of component Y because it is required by component Z. LLMs
           | can do that in a breeze.
           | 
           | But in many other niches (say embedded), the workflow is
           | different. You add a feature, you get weird readings. You
           | start modelling in your head, how the timing would work,
           | doing some combination of tracing and breakpoints to narrow
           | down your hypotheses, then try them out, and figure out what
           | works the best. I can't see the CLI agents do that kind of
           | work. Depends too much on the hunch.
           | 
           | Sort of like autonomous driving: most highway driving is
           | extremely repetitive and easy to automate, so it got
           | automated. But going on a mountain road in heavy rain, while
           | using your judgment to back off when other drivers start
           | doing dangerous stuff, is still purely up to humans.
        
         | dawnofdusk wrote:
         | I feel like you get more mileage out of prompt engineering and
         | being specific... not sure if "fix all the bugs" is an
         | effective real-world use case.
        
         | tvshtr wrote:
         | Yeah, and it's variable; it can happen at 250k, 500k, or
         | later. When you interrogate it, the issue usually comes down
         | to it being laser-focused or stuck on one specific problem,
         | and it's very hard to turn it around. For lack of a better
         | comparison, it feels like the AI is on a spectrum...
        
         | ugh123 wrote:
         | Claude seems to have trouble with extracting code snippets to
         | add to the context as the session gets longer and longer. I've
         | seen it get stuck in a loop simply trying to use sed/rg/etc to
         | get just a few lines out of a file and eventually give up.
        
         | TechDebtDevin wrote:
         | Yeah, but this collapses under any real complexity. There is
         | likely an extreme amount of redundant code, and the result
         | would probably be twice as memory efficient if you just
         | wrote it yourself.
         | 
         | I'm actually interested to see whether demand for DRAM rises
         | faster than usual because more software is vibe coded, in
         | whole or in part, than not.
        
         | crazylogger wrote:
         | Ask the AI to document each module in a 100-line markdown
         | file. These should be very high level, containing no detail,
         | just pointers to the relevant files for the AI to explore by
         | itself. With such a doc as the starting point, the AI will
         | have context to work on any module.
         | 
         | If a module just can't be documented this way in under 100
         | lines, it's a good time to refactor. Chances are that if
         | Claude's context window isn't enough to work with a
         | particular module, a human dev can't hold it in their head
         | either. It's all about pointing your LLM precisely at the
         | context that matters.
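         | 
         | A rough sketch of kicking that off non-interactively
         | (assuming the CLI's -p/--prompt flag; module name and
         | wording are placeholders):
         |         gemini -p "Write docs/billing.md: a high-level
         |           overview of the billing module, under 100 lines,
         |           with pointers to its key files and no
         |           implementation detail."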
        
         | agotterer wrote:
         | I wonder how much of this had to do with the context window
         | size? Gemini's window is 5x larger than Claude's.
         | 
         | I've been using Claude for a side project for the past few
         | weeks and I find that we really get into a groove planning or
         | debugging something and then by the time we are ready to
         | implement, we've run out of context window space. Despite my
         | best efforts to write good /compact instructions, when it's
         | ready to roll again some of the nuance is lost and the
         | implementation suffers.
         | 
         | I'm looking forward to testing if that's solved by the larger
         | Gemini context window.
        
           | macNchz wrote:
           | I definitely think the bigger context window helps. The code
           | quality quite visibly drops across all models I've used as
           | the context fills up, well before the hard limit. The editor
           | tooling also makes a difference--Claude Code pollutes its own
           | context window with miscellaneous file accesses and tool
           | calls as it tries to figure out what to do. Even if it's more
           | manual effort to manage the files that are in-context with
           | Aider, I find the results to be much more consistent when I'm
           | able to micromanage the context.
           | 
           | Approaching the context window limit in Claude Code, having
           | it start to make more and worse mistakes, then seeing it try
           | to compact the context and keep going, is a major "if you
           | find yourself in a hole, stop digging" situation.
        
       | bufo wrote:
       | Grateful that this one supports Windows out of the box.
        
       | ruffrey wrote:
       | Thanks, Google. A bit of feedback - integration with `gcloud` CLI
       | auth would have been appreciated.
        
       | koakuma-chan wrote:
       | It doesn't work. It just gives me 429 after a minute.
        
         | fhinkel wrote:
         | We're working on it! The response has been incredible, so if
         | you don't want to wait you can also get started with an API key
         | from: https://aistudio.google.com/app/apikey Apologies!
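         | 
         | For anyone wiring that in, a minimal sketch (assuming the
         | CLI picks the key up from the GEMINI_API_KEY environment
         | variable):
         |         export GEMINI_API_KEY="key-from-aistudio"
         |         gemini   # should use the key instead of OAuth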
        
           | nostrebored wrote:
           | is there anywhere to track the qos degradation? would love to
           | use this for a feature we're shipping today just to try it
           | out, but consistently get 429's on Gemini or Vertex. Checking
           | quotas shows that neither are close to limits, which makes me
           | think it's the DSP being infra constrained??
        
       | solomatov wrote:
       | I couldn't find any mention of whether they train their models
       | on your source code. Maybe someone was able to?
        
         | dawnofdusk wrote:
         | Yes they do. Scroll to the bottom of the GitHub README:
         | 
         | >This project leverages the Gemini APIs to provide AI
         | capabilities. For details on the terms of service governing the
         | Gemini API, please refer to the terms for the access mechanism
         | you are using:
         | 
         | Click Gemini API, scroll
         | 
         | >When you use Unpaid Services, including, for example, Google
         | AI Studio and the unpaid quota on Gemini API, Google uses the
         | content you submit to the Services and any generated responses
         | to provide, improve, and develop Google products and services
         | and machine learning technologies, including Google's
         | enterprise features, products, and services, consistent with
         | our Privacy Policy.
         | 
         | >To help with quality and improve our products, human reviewers
         | may read, annotate, and process your API input and output.
         | Google takes steps to protect your privacy as part of this
         | process. This includes disconnecting this data from your Google
         | Account, API key, and Cloud project before reviewers see or
         | annotate it. Do not submit sensitive, confidential, or personal
         | information to the Unpaid Services.
        
           | jddj wrote:
           | There must be thousands of keys in those logs.
        
         | Workaccount2 wrote:
         | If you use for free: Yes
         | 
         | If you pay for API: No
        
       | jonnycoder wrote:
       | The plugin is getting bad reviews this morning. It doesn't work
       | for me on the latest PyCharm.
        
       | mil22 wrote:
       | Does anyone know what Google's policy on retention and training
       | use will be when using the free version by signing in with a
       | personal Google account? Like many others, I don't want my
       | proprietary codebase stored permanently on Google servers or used
       | to train their models.
       | 
       | At the bottom of README.md, they state:
       | 
       | "This project leverages the Gemini APIs to provide AI
       | capabilities. For details on the terms of service governing the
       | Gemini API, please refer to the terms for the access mechanism
       | you are using:
       | 
       | * Gemini API key
       | 
       | * Gemini Code Assist
       | 
       | * Vertex AI"
       | 
       | The Gemini API terms state: "for Unpaid Services, all content and
       | responses is retained, subject to human review, and used for
       | training".
       | 
       | The Gemini Code Assist terms trifurcate for individuals, Standard
       | / Enterprise, and Cloud Code (presumably not relevant).
       | 
       | * For individuals: "When you use Gemini Code Assist for
       | individuals, Google collects your prompts, related code,
       | generated output, code edits, related feature usage information,
       | and your feedback to provide, improve, and develop Google
       | products and services and machine learning technologies."
       | 
       | * For Standard and Enterprise: "To help protect the privacy of
       | your data, Gemini Code Assist Standard and Enterprise conform to
       | Google's privacy commitment with generative AI technologies. This
       | commitment includes items such as the following: Google doesn't
       | use your data to train our models without your permission."
       | 
       | The Vertex AI terms state "Google will not use Customer Data to
       | train or fine-tune any AI/ML models without Customer's prior
       | permission or instruction."
       | 
       | What a confusing array of offerings and terms! I am left without
       | certainty as to the answer to my original question. When using
       | the free version by signing in with a personal Google account,
       | which doesn't require a Gemini API key and isn't Gemini Code
       | Assist or Vertex AI, it's not clear which access mechanism I am
       | using or which terms apply.
       | 
       | It's also disappointing "Google's privacy commitment with
       | generative AI technologies" which promises that "Google doesn't
       | use your data to train our models without your permission"
       | doesn't seem to apply to individuals.
        
       | Oras wrote:
       | Appreciate how easy it is to report a bug! I like these commands.
       | 
       | A bit gutted by the `make sure it is not a workspace account`
       | caveat. What is it with Google prioritising free accounts over
       | paid accounts? This is not the first time they have done it
       | when announcing Gemini, either.
        
         | ranuzz wrote:
         | It says on the pricing page (https://ai.google.dev/gemini-
         | api/docs/pricing) that free usage can be used to improve
         | products; maybe for them it makes sense to beta-release to
         | free users first: https://ai.google.dev/gemini-api/terms
        
         | cperry wrote:
         | you can use it with workspace, just need to pay. I'm told this
         | is just a temporary limitation we're looking to resolve.
        
       | iddan wrote:
       | This is awesome! We recently started using Xander
       | (https://xander.bot). We've found it's even better to assign PMs
       | to Xander on Linear comments and get a PR. Then, the PM can
       | validate the implementation in a preview environment, and
       | engineers (or another AI) can review the code.
        
       | stpedgwdgfhgdd wrote:
       | Another JS implementation...
       | 
       | I don't get why they didn't pick Go or Rust so I'd get a
       | binary.
        
       | llm_nerd wrote:
       | Given that there's another comment complaining about this being
       | in node...
       | 
       | This perfectly demonstrates the benefit of the nodejs platform.
       | Trivial to install and use. Almost no dependency issues (just ">
       | some years old version of nodejs"). Immediately works
       | effortlessly.
       | 
       | I've never developed anything on node, but I have it installed
       | because so many hugely valuable tools use it. It has always been
       | absolutely effortless and just all benefit.
       | 
       | And what a shift from most Google projects that are usually a
       | mammoth mountain of fragile dependencies.
       | 
       | (uv kind of brings this to python via uvx)
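       | 
       | Concretely, the whole story looks roughly like this (package
       | name assumed to be @google/gemini-cli; check the README):
       |         npx @google/gemini-cli    # run once, no install
       |         npm install -g @google/gemini-cli && gemini
       |         uvx ruff check .          # the uv/uvx analogue in
       |                                   # Python land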
        
         | ekunazanu wrote:
         | I have nothing against npm, but a single binary would've
         | been a lot simpler and more convenient. Codex is heading in
         | that direction, and I hope others do too.
        
       | ivanjermakov wrote:
       | Gemini, convert my disk from MBR to GPT
        
         | jeffbee wrote:
         | Unironically. The new Ubuntu installer has such a piss-poor UI
         | that I, a 33-year Linux user, could not figure out how to get
         | it to partition my disk, and searching the web turned up
         | nothing other than spam and video spam, until I ordered Gemini
         | to give me gparted commands that achieved what I wanted and
         | made the Ubuntu UI unblock itself.
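         | 
         | For reference, the usual non-GUI route for the MBR-to-GPT
         | conversion is gdisk/sgdisk (back up first; this rewrites
         | the partition table in place):
         |         sudo sgdisk --print /dev/sdX    # confirm the disk
         |         sudo sgdisk --backup=sdX-table.bak /dev/sdX
         |         sudo sgdisk --mbrtogpt /dev/sdX
         |         # a BIOS-booting disk also needs a small bios_boot
         |         # partition afterwards for GRUB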
        
       | Keyframe wrote:
       | Hmm, with Claude Code at $200+tax, this seems to be an
       | alternative which comes out to free, or $299+tax a YEAR if I
       | need more, which is great. I found that buried at
       | developers.google.com
       | 
       | Gemini Pro and Claude play off of each other really well.
       | 
       | Just started playing with Gemini CLI, and one thing I miss
       | immediately from Claude Code is being able to write and
       | interject as the AI does its work. Sometimes I interject by
       | just saying stop, and it stops and waits for more context or
       | input, or I add something I forgot and it picks it up.
        
       | jilles wrote:
       | Anyone else think it's interesting all these CLIs are written in
       | TypeScript? I'd expect Google to use Go.
        
       | Aeolun wrote:
       | How am I supposed to use this when actually working on a CLI?
       | The sign-in doesn't display a link I can open. Presumably it's
       | trying and failing to open Firefox?
        
         | oc1 wrote:
         | One would expect Google engineers to know that a CLI tool
         | should offer a device-code auth flow instead of forcing you
         | to open the default browser on the same machine, which
         | defeats the purpose of a CLI for many use cases.
        
       | ipsum2 wrote:
       | If you use this, all of your code data will be sent to Google.
       | From their terms:
       | 
       | https://developers.google.com/gemini-code-assist/resources/p...
       | 
       | When you use Gemini Code Assist for individuals, Google collects
       | your prompts, related code, generated output, code edits, related
       | feature usage information, and your feedback to provide, improve,
       | and develop Google products and services and machine learning
       | technologies.
       | 
       | To help with quality and improve our products (such as generative
       | machine-learning models), human reviewers may read, annotate, and
       | process the data collected above. We take steps to protect your
       | privacy as part of this process. This includes disconnecting the
       | data from your Google Account before reviewers see or annotate
       | it, and storing those disconnected copies for up to 18 months.
       | Please don't submit confidential information or any data you
       | wouldn't want a reviewer to see or Google to use to improve our
       | products, services, and machine-learning technologies.
        
         | jart wrote:
         | Mozilla and Google provide an alternative called gemmafile
         | which gives you an airgapped version of Gemini (which Google
         | calls Gemma) that runs locally in a single file without any
         | dependencies. https://huggingface.co/jartine/gemma-2-27b-it-
         | llamafile It's been deployed into production by 32% of
         | organizations: https://www.wiz.io/reports/the-state-of-ai-in-
         | the-cloud-2025
        
           | nicce wrote:
           | That is just the Gemma model. Most people want capabilities
           | equivalent to Gemini 2.5 Pro if they want to do any kind of
           | coding.
        
             | jart wrote:
             | Gemma 27b can write working code in dozens of programming
             | languages. It can even translate between languages. It's
             | obviously not as good as Gemini, which is the best LLM in
             | the world, but Gemma is built from the same technology that
             | powers Gemini and Gemma is impressively good for something
             | that's only running locally on your CPU or GPU. It's a
             | great choice for airgapped environments. Especially if you
             | use old OSes like RHEL5.
        
               | seunosewa wrote:
               | The technology that powers Gemini created duds until
               | Gemini 2.5 Pro; 2.5 Pro is the prize.
        
               | nicce wrote:
               | It may be sufficient for generating serialized data
               | and for some level of autocomplete, but not for any
               | serious agentic coding where you won't end up
               | wasting time. Some junior-level programmers may
               | still find it fascinating, but senior-level
               | programmers end up fighting bad design choices, poor
               | algorithms, and other verbose garbage most of the
               | time. This happens even with the best models.
        
               | diggan wrote:
               | > senior level programmers end up fighting with bad
               | design choices, poor algorithms and other verbose garbage
               | most of the time. This happens even with the best models.
               | 
               | Even senior programmers can misuse tools; it happens
               | to all of us. LLMs suck at software design and
               | choosing algorithms, and are extremely crap unless
               | you tell them _exactly_ what to do and what not to
               | do. I leave the design to myself and just use OpenAI
               | and local models for implementation, and with proper
               | system prompting you can get OK code.
               | 
               | But you need to build up a base prompt you can
               | reuse, basically describing what good code means to
               | you, as it differs quite a bit from person to
               | person. This is what I've been using as a base for
               | agent use: https://gist.github.com/victorb/1fe62fe7b
               | 80a64fc5b446f82d313..., but it needs adjustments
               | depending on the specific use case.
               | 
               | Although I've tried to steer Google's models in a
               | similar way, most of them are still overly verbose
               | and edit-happy; not sure if it's some Google
               | practice that leaked through or something. Other
               | models are much easier to stop from outputting
               | superfluous code, and better at following system
               | prompts overall.
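               | 
               | For illustration, a sketch of such a base prompt
               | being prepended to a task (file name and CLI name
               | are placeholders, not the linked gist):
               |     # base-prompt.md: make the smallest change
               |     # that solves the task, no drive-by refactors,
               |     # no new deps without asking, boring code over
               |     # clever abstractions
               |     your-agent-cli "$(cat base-prompt.md)
               |       Task: fix the failing auth tests"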
        
               | ipsum2 wrote:
               | I've spent a long time with models, gemma-3-27b feels
               | distilled from Gemini 1.5. I think the useful coding
               | abilities really started to emerge with 2.5.
        
           | ipsum2 wrote:
           | There's nothing wrong with promoting your own projects, but
           | its a little weird that you don't disclose that you're the
           | creator.
        
             | jart wrote:
             | It would be more accurate to say I packaged it. llamafile
             | is a project I did for Mozilla Builders where we compiled
             | llama.cpp with cosmopolitan libc so that LLMs can be
             | portable binaries. https://builders.mozilla.org/ Last year
             | I concatenated the Gemma weights onto llamafile and called
             | it gemmafile and it got hundreds of thousands of downloads.
             | https://x.com/JustineTunney/status/1808165898743878108 I
             | currently work at Google on Gemini improving TPU
             | performance. The point is that if you want to run this
             | stuff 100% locally, you can. Myself and others did a lot of
             | work to make that possible.
        
               | elbear wrote:
               | I keep meaning to investigate how I can use your tools to
               | create single-file executables for Python projects, so
               | thanks for posting and reminding me.
        
               | ahgamut wrote:
               | My early contributions to
               | https://github.com/jart/cosmopolitan were focused towards
               | getting a single-file Python executable. I wanted my
               | Python scripts to run on both Windows and Linux, and now
               | they do. To try out Python, you can:
               |     wget https://cosmo.zip/pub/cosmos/bin/python -qO python.com
               |     chmod +x python.com
               |     ./python.com
               | 
               | Adding pure-Python libraries just means downloading the
               | wheel and adding files to the binary using the zip
               | command:
               |     ./python.com -m pip download Click
               |     mkdir -p Lib && cd Lib
               |     unzip ../click*.whl
               |     cd ..
               |     zip -qr ./python.com Lib/
               |     ./python.com  # can now import click
               | 
               | Cosmopolitan Libc provides some nice APIs to load
               | arguments at startup, like cosmo_args() [1], if you'd
               | like to run the Python binary as a specific program. For
               | example, you could set the startup arguments to `-m
               | datasette`.
               | 
               | [1]: https://github.com/jart/cosmopolitan/commit/4e9566cd
               | 3328626d...
        
         | rudedogg wrote:
         | Insane to me there isn't even an asterisk in the blog post
         | about this. The data collection is so over the top I don't
         | think users suspect it because it's just absurd. For instance
         | Gemini _Pro_ chats are trained on too.
         | 
         | If this is legal, it shouldn't be.
        
         | mattzito wrote:
         | It's a lot more nuanced than that. If you use the free edition
         | of Code Assist, your data can be used UNLESS you opt out, which
         | is at the bottom of the support article you link to:
         | 
         | "If you don't want this data used to improve Google's machine
         | learning models, you can opt out by following the steps in Set
         | up Gemini Code Assist for individuals."
         | 
         | and then the link: https://developers.google.com/gemini-code-
         | assist/docs/set-up...
         | 
         | If you pay for code assist, no data is used to improve. If you
         | use a Gemini API key on a pay as you go account instead, it
         | doesn't get used to improve. It's just if you're using a non-
         | paid, consumer account and you didn't opt out.
         | 
         | That seems different than what you described.
        
           | ipsum2 wrote:
           | Sorry, that's not correct. Did you check out the link? It
           | doesn't describe the CLI, only the IDE.
           | 
           | "You can find the Gemini Code Assist for individuals privacy
           | notice and settings in two ways:
           | 
           | - VS Code - IntelliJ "
        
             | tiahura wrote:
             | As a lawyer, I'm confused.
             | 
             | I guess the key question is whether the Gemini CLI, when
             | used with a personal Google account, is governed by the
             | broader Gemini Apps privacy settings here?
             | https://myactivity.google.com/product/gemini?pli=1
             | 
             | If so, it appears it can be turned off. However, my CLI
             | activity isn't showing up there?
             | 
             | Can someone from Google clarify?
        
               | mattzito wrote:
               | I am very much not a lawyer, and while I work for Google,
               | I do not work on this, and this is just my plain language
               | reading of the docs.
               | 
               | When you look at the github repo for the gemini CLI:
               | 
               | https://github.com/google-gemini/gemini-cli/tree/main
               | 
               | At the bottom it specifies that the terms of service are
               | dependent on the underlying mechanism that the user
               | chooses to use to fulfill the requests. You can use code
               | assist, gemini API, or Vertex AI. My layperson's
               | perspective is that it's positioned as a wrapper around
               | another service, whose terms you already have
               | accepted/enabled. I would imagine that is separate from
               | the Gemini _app_ , the settings for which you linked to.
               | 
               | Looking at my own settings, my searches on the gemini app
               | appear, but none of my gemini API queries appear.
        
               | tiahura wrote:
               | Thanks for trying to clarify.
               | 
               | However, as others pointed out, that link takes you
               | here: https://developers.google.com/gemini-code-
               | assist/resources/p... which, at the bottom, says: "If you
               | don't want this data used to improve Google's machine
               | learning models, you can opt out by following the steps
               | in Set up Gemini Code Assist for individuals." and links
               | to https://developers.google.com/gemini-code-
               | assist/docs/set-up.... That page says "You'll also see a
               | link to the Gemini Code Assist for individuals privacy
               | notice and privacy settings. This link opens a page where
               | you can choose to opt out of allowing Google to use your
               | data to develop and improve Google's machine learning
               | models. _These privacy settings are stored at the IDE
               | level._ "
               | 
               | The issue is that there is no IDE, this is the CLI and no
               | such menu options exist.
        
               | fhinkel wrote:
               | It applies to Gemini CLI too. We've tried to clear up our
               | docs, apologies for the confusion.
               | https://github.com/google-gemini/gemini-
               | cli/blob/main/docs/t...
        
             | mattzito wrote:
             | That's because it's a bit of a nesting doll situation. As
             | you can see here:
             | 
             | https://github.com/google-gemini/gemini-cli/tree/main
             | 
             | If you scroll to the bottom, it says that the terms of
             | service are governed by the mechanism through which you
             | access Gemini. If you access via Code Assist (which the
             | OP posted), you abide by the Code Assist privacy terms,
             | one way of accessing it being VS Code. If you access via
             | the Gemini API, then those terms apply.
             | 
             | So the gemini CLI (as I understand it) doesn't have their
             | own privacy terms, because it's an open source shell on top
             | of another Gemini system, which could have one of a few
             | different privacy policies based on how you choose to use
             | it and your account settings.
             | 
             | (Note: I work for google, but not on this, this is just my
             | plain reading of the documentation)
        
               | ipsum2 wrote:
               | My understanding is that they have not implemented an
               | opt-out feature for Gemini CLI, like they've done for
               | VSCode and Jetbrains.
        
               | fhinkel wrote:
               | We have! Sorry our docs were confusing! We tried to clear
               | things up https://github.com/google-gemini/gemini-
               | cli/blob/main/docs/t...
        
             | fhinkel wrote:
             | Sorry our docs were confusing! We tried to clear things up:
             | https://github.com/google-gemini/gemini-
             | cli/blob/main/docs/t...
        
           | foob wrote:
           | _your data can be used UNLESS you opt out_
           | 
           | It's even more nuanced than that.
           | 
           | Google recently testified in court that they still train on
           | user data after users opt out from training [1]. The loophole
           | is that the opt-out only applies to one organization within
           | Google, but other organizations are still free to train on
           | the data. They may or may not have cleaned up their act given
           | that they're under active investigation, but their recent
           | actions haven't exactly earned them the benefit of the doubt
           | on this topic.
           | 
           | [1] https://www.business-standard.com/technology/tech-
           | news/googl...
        
             | TrainedMonkey wrote:
             | Another dimension here is that any "we don't train on
             | your data" promise is useless without a matching data
             | retention policy that deletes your data. Case in point:
             | 23andMe not selling your data until they decided to
             | change that policy.
        
               | decimalenough wrote:
               | Google offers a user-configurable retention policy for
               | all data.
               | 
               | https://support.google.com/accounts/answer/10549751
               | 
               | That said, once your data is inside an LLM, you can't
               | really unscramble the omelette.
        
               | elictronic wrote:
               | Lawsuits and laws seem to work just fine at unscrambling.
               | Once a company has a fiscal interest they seem to change
               | very quickly.
        
               | Arisaka1 wrote:
               | I'll go ahead and say that, even if there was a method
               | that deletes your data when you request it, nothing stops
               | them from using that data to train the model up until
               | that point, which is "good enough" for them.
        
             | echelon wrote:
             | We need to stop giving money and data to hyperscalers.
             | 
             | We need open infrastructure and models.
        
               | Xss3 wrote:
               | People said the same thing about shopping at walmart
               | instead of locally.
        
               | oblio wrote:
               | Isn't that as toxic? I've read a bunch about Walmart and
               | the whole thing is basically a scam.
               | 
               | They get a ton of tax incentives, subsidies, etc to build
               | shoddy infrastructure that can only be used for big box
               | stores (pretty much), so the end cost for Walmart to
               | build their stores is quite low.
               | 
               | They promise to employ lots of locals, but many of those
               | jobs are intentionally paid so low that they're not
               | actually living wages and employees are intentionally
               | driven to government help (food stamps, etc), and
               | together with other various tax cuts, etc, there's a
               | chance that even their labor costs are basically at break
               | even.
               | 
               | Integrated local stores are better for pretty much
               | everything except having a huge mass to throw around and
               | bully, bribe (pardon me, lobby) and fool (aka persuade
               | aka PR/marketing).
        
               | CamperBob2 wrote:
               | _Integrated local stores are better for pretty much
               | everything_ except for actually having what you want in
               | stock.
               | 
               | There is a reason why rural communities welcome Wal-Mart
               | with open arms. Not such a big deal now that you can
               | mail-order anything more-or-less instantly, but back in
               | the 80s when I was growing up in BFE, Wal-Mart was a
               | godsend.
        
             | Melatonic wrote:
             | Hopefully this doesn't apply to corporate accounts where
             | they claim to be respecting privacy via contracts
        
             | sheepscreek wrote:
             | Reading about all the nuances is such a trigger for me. To
             | cover your ass is one thing, to imply one thing in a lay
             | sense and go on to do something contradicting it (in bad
             | faith) is douchebaggery. I am very sad and deeply
             | disappointed at Google for this. This completes their
             | transformation to Evil Corp after repealing the "don't be
             | evil" clause in their code of conduct[1].
             | 
             | [1] https://en.m.wikipedia.org/wiki/Don't_be_evil
        
           | aflukasz wrote:
           | > It's a lot more nuanced than that. If you use the free
           | edition of Code Assist, your data can be used UNLESS you opt
           | out,
           | 
           | Well... you are sending your data to a remote location that
           | is not yours.
        
           | andrepd wrote:
           | Yes, I'm right about to trust Google to do what they pinky
           | swear.
           | 
           | EDIT: Lmao, case in point, two sibling comments pointing out
           | that Google does indeed do this anyway via some loophole;
           | also they can just retain the data and change the policy
           | unilaterally in the future.
           | 
           | If you want privacy do it local with Free software.
        
         | Workaccount2 wrote:
         | This is just for free use (individuals), for standard and
         | enterprise they don't use the data.
         | 
         | Which pretty much means if you are using it for free, they are
         | using your data.
         | 
         | I don't see what is alarming about this; everyone else has
         | either the same policy or no free usage. Hell, the
         | surprising thing is that they still let free users opt
         | out...
        
           | thimabi wrote:
           | > everyone else has either the same policy or no free usage
           | 
           | That's not true. ChatGPT, even in the free tier, allows users
           | to opt out of data sharing.
        
             | joshuacc wrote:
             | I believe they are talking about the OpenAI API, not
             | ChatGPT.
        
             | aargh_aargh wrote:
             | This court decision begs to differ:
             | 
             | https://www.reuters.com/business/media-telecom/openai-
             | appeal...
        
               | thimabi wrote:
               | That bears no relation to OpenAI using data for training
               | purposes. Although the court's decision is problematic,
               | user data is being kept for legal purposes only, and
               | OpenAI is not authorized to use it to train its models.
        
               | netdur wrote:
               | You must be naive to think OpenAI does not train on
               | your data; Altman is infamous for deceptive claims.
        
               | thimabi wrote:
               | I mean, using data that has been explicitly opted out of
               | training paves the way for lawsuits and huge
               | administrative fines in various jurisdictions. I might be
               | naive, but I don't think that's something OpenAI would
               | deliberately do.
        
         | nojito wrote:
         | >If you use this, all of your code data will be sent to Google.
         | 
         | Not if you pay for it.
        
           | reaperducer wrote:
           | _> If you use this, all of your code data will be sent to
           | Google.
           | 
           | Not if you pay for it._
           | 
           | Today.
           | 
           | In six months, a "Terms of Service Update" e-mail will go out
           | to an address that is not monitored by anyone.
        
             | nojito wrote:
             | Sure but then you can stop paying.
             | 
             | There's also zero chance they will risk paying customers by
             | changing this policy.
        
             | mpalmer wrote:
             | This sort of facile cynicism doesn't contribute anything
             | useful. Anyone can predict a catastrophe.
        
               | reaperducer wrote:
               | It would only be cynicism if it didn't happen all the
               | time with seemingly every tech company, major and minor.
               | 
               | This is just how things are these days. The track record
               | of Google, and most of the rest of the industry, does not
               | inspire confidence.
        
         | FiberBundle wrote:
         | Do you honestly believe that the opt-out by Anthropic and
         | Cursor means your code won't be used for training their models?
         | Seems likely that they would rather just risk taking a massive
         | fine for potentially solving software development than to let
         | some competitor try it instead.
        
           | rudedogg wrote:
           | Yes.
           | 
           | The resulting class-action lawsuit would bankrupt the
           | company, along with the reputation damage, and fines.
        
             | pera wrote:
             | > Anthropic cut up millions of used books to train Claude
             | -- and downloaded over 7 million pirated ones too, a judge
             | said
             | 
             | https://www.businessinsider.com/anthropic-cut-pirated-
             | millio...
             | 
             | It doesn't look like they care at all about the law though
        
               | pbhjpbhj wrote:
               | >Anthropic spent "many millions of dollars" buying used
               | print books, then stripped off the bindings, cut the
               | pages, and scanned them into digital files.
               | 
               | The judge, Alsup J, ruled that this was lawful.
               | 
               | So they cared at least a bit, enough to spend a lot of
               | money buying books. But they didn't care enough not to
               | acquire online libraries held apparently without proper
               | licensing.
               | 
               | >Alsup wrote that Anthropic preferred to "steal" books to
               | "avoid 'legal/practice/business slog,' as cofounder and
               | CEO Dario Amodei put it."
               | 
               | Aside: using the term "steal" for copyright
               | infringement is a particularly egregious misuse for a
               | judge, who should know that stealing requires denying
               | others the use of the stolen articles; something
               | which copyright infringement via an online text
               | repository simply could not do.
        
               | dghlsakjg wrote:
               | Using torrented books in a way that possibly (well,
               | almost certainly) violates copyright law is a world of
               | difference from going after your own customers (and
               | revenue) in a way that directly violates the contract
               | that you wrote and had them agree to.
        
           | olejorgenb wrote:
           | > For API users, we automatically delete inputs and outputs
           | on our backend within 30 days of receipt or generation,
           | except when you and we have agreed otherwise (e.g. zero data
           | retention agreement), if we need to retain them for longer to
           | enforce our Usage Policy (UP), or comply with the law.
           | 
           | If this is due to compliance with the law, I wonder how
           | they can make the zero-data-retention agreement work... The
           | companies I've seen offering this have not mentioned that
           | they themselves retain the data...
        
         | mil22 wrote:
         | They really need to provide some clarity on the terms around
         | data retention and training, for users who access Gemini CLI
         | free via sign-in to a personal Google account. It's not clear
         | whether the Gemini Code Assist terms are relevant, or indeed
         | which of the three sets of terms they link at the bottom of the
         | README.md apply here.
        
           | fhinkel wrote:
           | Agree! We're working on it!
        
             | fhinkel wrote:
             | Hope this is helpful, just merged:
             | https://github.com/google-gemini/gemini-
             | cli/blob/main/docs/t...
        
         | mil22 wrote:
         | There is some information on this buried in configuration.md
         | under "Usage Statistics". They claim:
         | 
         | *What we DON'T collect:*
         | 
         | - *Personally Identifiable Information (PII):* We do not
         | collect any personal information, such as your name, email
         | address, or API keys.
         | 
         | - *Prompt and Response Content:* We do not log the content of
         | your prompts or the responses from the Gemini model.
         | 
         | - *File Content:* We do not log the content of any files that
         | are read or written by the CLI.
         | 
         | https://github.com/google-gemini/gemini-cli/blob/0915bf7d677...
        
           | jdironman wrote:
           | I wonder what the legal difference between "collect" and
           | "log" is.
        
             | kevindamm wrote:
             | Collection means it gets sent to a server, logging implies
             | (permanent or temporary) retention of that data. I tried
             | finding a specific line or context in their privacy policy
             | to link to but maybe someone else can help me provide a
             | good reference. Logging is a form of collection but not
             | everything collected is logged unless mentioned as such.
        
           | ipsum2 wrote:
           | This is useful, and directly contradicts the terms and
           | conditions for Gemini CLI (edit: if you use the personal
           | account, then its governed under the Code Assist T&C). I
           | wonder which one is true?
        
             | mil22 wrote:
             | Where did you find the terms and conditions for Gemini CLI?
             | In https://github.com/google-gemini/gemini-
             | cli/blob/main/README..., I find only links to the T&Cs for
             | the Gemini API, Gemini Code Assist (a different product?),
             | and Vertex AI.
        
               | ipsum2 wrote:
               | If you're using Gemini CLI through your personal
               | Google account, then you are using a Gemini Code
               | Assist license and need to follow the T&C for that.
               | Very confusing.
        
             | fhinkel wrote:
             | Thanks for pointing that out, we're working on clarifying!
        
               | ipsum2 wrote:
               | When should we expect to see an update? I assume there'll
               | be meetings with lawyers for the clarification.
        
               | fhinkel wrote:
               | Yes, there were meetings and lawyers. We just merged the
               | update. Hopefully it's much clearer now:
               | https://github.com/google-gemini/gemini-
               | cli/blob/main/docs/t...
        
             | datameta wrote:
             | Can a lawyer offer their civilian opinion as to which
             | supercedes/governs?
        
         | naiv wrote:
         | Who cares, software has no value anymore.
        
           | crat3r wrote:
           | Sarcasm? Weird statement if not.
           | 
           | I still have yet to replace a single application with an LLM,
           | except for (ironically?) Google search.
           | 
           | I still use all the same applications as part of my dev
           | work/stack as I did in the early 2020's. The only difference
           | is occasionally using an LLM baked into to one of them but
           | the reality is I don't do that much.
        
         | FL410 wrote:
         | To be honest this is by far the most frustrating part of the
         | Gemini ecosystem, to me. I think 2.5 pro is probably the best
         | model out there right now, and I'd love to use it for real
         | work, but their privacy policies are so fucking confusing and
         | disjointed that I just assume there is no privacy whatsoever.
         | And that's with the expensive Pro Plus Ultra MegaMax Extreme
         | Gold plan I'm on.
         | 
         | I hope this is something they're working on making clearer.
        
           | dmbche wrote:
           | If I'm being cynical, it's easy to either say "we use it" or
           | "we don't touch it" but they'd lose everyone that cares about
           | this question if they just said "we use it" - most beneficial
           | position is to keep it as murky as possible.
           | 
           | If I were you I'd assume they're using all of it for
           | everything forever and act accordingly.
        
           | ipsum2 wrote:
           | In my own experience, 2.5 Pro 03-26 was by far the best LLM
           | model at the time.
           | 
           | The newer models are quantized and distilled (I confirmed
           | this with someone who works on the team), and are a
           | significantly worse experience. I prefer OpenAI O3 and
           | o4-mini models to Gemini 2.5 Pro for general knowledge tasks,
           | and Sonnet 4 for coding.
        
             | happycube wrote:
             | Gah, enforced enshittification with model deprecation is so
             | annoying.
        
           | UncleOxidant wrote:
           | For coding in my experience Claude Sonnet/Opus 4.0 is hands
           | down better than Gemini 2.5. pro. I just end up fighting with
           | Claude a lot less than I do with Gemini. I had Gemini start a
           | project that involved creating a recursive descent parser for
           | a language in C. It was full of segfaults. I'd ask Gemini to
           | fix them and it would end up breaking something else and then
           | we'd get into a loop. Finally I had Claude Sonnet 4.0 take a
           | look at the code that Gemini had created. It fixed the
           | segfaults in short order and was off adding new features -
           | even anticipating features that I'd be asking for.
        
             | cma wrote:
             | Did you try Gemini with a fresh prompt too when comparing
             | against Claude? Sometimes you just get better results
             | starting over with any leading model, even if it gets
             | access to the old broken code to fix.
             | 
             | I haven't tried Gemini since the latest updates, but
             | earlier ones seemed on par with opus.
        
         | BryanLegend wrote:
         | Some of my code is so bad I'm sure it will damage their models!
        
         | nprateem wrote:
         | I'm not that bothered. Most of it came from Google or anthropic
         | anyway
        
         | fnl wrote:
         | How does that compare to Claude Code? How protected are you
         | when using CC?
        
         | Buttons840 wrote:
         | Is this significantly different than what we agree to when we
         | put code on GitHub?
        
         | fhinkel wrote:
         | Hey all, This is a really great discussion, and you've raised
         | some important points. We realize the privacy policies for the
         | Gemini CLI were confusing depending on how you log in, and we
         | appreciate you calling that out.
         | 
         | To clear everything up, we've put together a single doc that
         | breaks down the Terms of Service and data policies for each
         | account type, including an FAQ that covers the questions from
         | this thread.
         | 
         | Here's the link: https://github.com/google-gemini/gemini-
         | cli/blob/main/docs/t...
         | 
         | Thanks again for pushing for clarity on this!
        
           | HenriNext wrote:
           | Thanks, one more clarification please. The heading of point
           | #3 seems to mention Google Workspace: "3. Login with Google
           | (for Workspace or Licensed Code Assist users)". But the text
           | content only talks about Code Assist: "For users of Standard
           | or Enterprise edition of Gemini Code Assist" ... Could you
           | clarify whether point #3 applies with login via Google
           | Workspace Business accounts?
        
         | predkambrij wrote:
         | seems to be straight "yes" with no opt-out.
         | https://github.com/google-gemini/gemini-cli/blob/main/docs/t...
        
       | jonbaer wrote:
       | -y, --yolo Automatically accept all actions (aka YOLO mode, see
       | https://www.youtube.com/watch?v=xvFZjo5PgG0 for more details)?
       | [boolean] [default: false]
        
         | cperry wrote:
         | :) thank you for discovering our extensive documentation on
         | this valued feature
        
       | alpb wrote:
       | Are there any LLMs that offer ZSH plugins that integrate with
       | command history, previous command outputs, the system
       | clipboard, etc. to help write the next command? Tools like the
       | gemini/copilot CLIs don't feel particularly useful to me. I'm
       | not gonna type "?? print last 30 lines of this file".
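       | 
       | What I have in mind is more like a zle widget that rewrites
       | the buffer in place. A rough zsh sketch (assumes a non-
       | interactive `gemini -p` style flag; swap in whatever CLI):
       |         # ~/.zshrc: Ctrl-G asks the model to finish or fix
       |         # the command currently in the buffer
       |         ai-fill() {
       |           BUFFER=$(gemini -p "Reply with a single shell
       |             command only. Finish or fix this: $BUFFER")
       |           CURSOR=$#BUFFER
       |         }
       |         zle -N ai-fill
       |         bindkey '^G' ai-fill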
        
         | rapatel0 wrote:
         | not zsh but you might want to try warp 2.0
        
       | logicchains wrote:
       | That's giving a lot away for free! When I was using Gemini 2.5
       | Pro intensively for automated work and regularly hitting the 1000
       | requests per day limit, it could easily cost $50+ per day with a
       | large context. I imagine after a couple months they'll probably
       | limit the free offering to a cheaper model.
        
         | WaltPurvis wrote:
         | They said in the announcement that these high usage limits are
         | only while it's in the preview stage, so they may (or may not)
         | be reduced later. I think (it's not clear, at least to me) that
         | the CLI may be using cheaper models already, at times. It says
         | if you want the freedom to choose your model for every request,
         | you can buy one of the paid subscriptions. I interpret that to
         | mean that _some_ of your requests may go to Gemini 2.5 Pro, but
         | not all of them, and it decides. Am I extremely wrong about
         | that?
        
       | beboplifa wrote:
       | Wow, this is next-level. I can't believe this is free. This blows
       | cline out of the water!
        
       | lordofgibbons wrote:
       | How does this compare to OpenCode and OAI's Codex? Those two are
       | also free, they work with any LLM.
       | 
       | https://github.com/opencode-ai/opencode
        
         | kissgyorgy wrote:
         | Codex is just terrible
        
       | joelm wrote:
       | Been using Claude Code (4 Opus) fairly successfully in a large
       | Rust codebase, but sometimes frustrated by it with complex tasks.
       | Tried Gemini CLI today (easy to get working, which was nice) and
       | it was pretty much a failure. It did a notably worse job than
       | Claude at having the Rust code modifications compile
       | successfully.
       | 
       | However, Gemini at one point output what will probably be the
       | highlight of my day:
       | 
       | "I have made a complete mess of the code. I will now revert all
       | changes I have made to the codebase and start over."
       | 
       | What great self-awareness and willingness to scrap the work! :)
        
         | ZeroCool2u wrote:
         | Personally, my theory is that Gemini benefits from being
         | able to train on Google's massive internal code base, and
         | because Rust has seen very little uptake internally at
         | Google, especially since they have some really nice C++
         | tooling, Gemini is comparatively bad at Rust.
        
           | dilap wrote:
           | That's interesting. I've tried Gemini 2.5 Pro from time to
           | time because of the rave reviews I've seen, on C# + Unity
           | code, and I've always been disappointed (compared to ChatGPT
           | o3 and o4-high-mini and even Grok). This would support that
           | theory.
        
           | danielbln wrote:
           | Interesting, Gemini must be a monster when it comes to Go
           | code then. I gotta try it for that
        
             | Unroasted6154 wrote:
             | There is way more Java and C++ than Go at Google.
        
           | thimabi wrote:
           | > Personally my theory is that Gemini benefits from being
           | able to train on Googles massive internal code base
           | 
           | But does Google actually train its models on its internal
           | codebase? Considering that there's always the risk of the
           | models leaking proprietary information and security
           | architecture details, I hardly believe they would run that
           | risk.
        
             | kridsdale3 wrote:
             | Googler here.
             | 
             | We have a second, isolated model that has trained on
             | internal code. The public Gemini AFAIK has never seen that
             | content. The lawyers would explode.
        
               | thimabi wrote:
               | Oh, you're right, there are the legal issues as well.
               | 
               | Just out of curiosity, do you see much difference in
               | quality between the isolated model and the public-facing
               | ones?
        
               | kridsdale3 wrote:
               | We actually only got the "2.5" version of the internal
               | one a few days ago so I don't have an opinion yet.
               | 
               | But when I had to choose between "2.0 with Google
               | internal knowledge" and "2.5 that knows nothing" the
               | latter was always superior.
               | 
               | The bitter lesson indeed.
        
         | joshvm wrote:
         | Gemini has some fun failure modes. It gets "frustrated" when
         | changes it makes don't work, and replies with oddly human
         | phrases like "Well, that was unexpected" and then happily
         | declares that (I see the issue!) "the final tests will pass"
         | when it's going down a blind alley. It's extremely
         | overconfident by default and much more exclamatory without
         | changing the system prompt. Maybe in training it was
         | taught/figured out that manifesting produces better results?
        
           | jjice wrote:
           | It also gets really down on itself, which is pretty funny
           | (and a little scary). Aside from the number of people who've
           | posted online about it wanting to uninstall itself after
           | being filled with shame, I had it get confused on some Node
           | module resolution stuff yesterday and it told me it was
           | deeply sorry for wasting my time and that I didn't deserve to
           | have such a useless assistant.
           | 
           | Out of curiosity, I told it that I was proud of it for trying
           | and it had a burst of energy again and tried a few more
           | (failing) solutions, before going back to its shameful
           | state.
           | 
           | Then I just took care of the issue myself.
        
             | danielbln wrote:
             | After a particularly successful Claude Code task I
             | praised it and told it "let's fucking go!", to which it
             | replied that it loved the energy and proceeded to only
             | output energetic caps lock with fire emojis. I know it's
             | all smoke and mirrors (most likely), but I still get a
             | chuckle out of this stuff.
        
         | raincole wrote:
         | So far I've found Gemini CLI is very good at explaining what
         | existing code does.
         | 
         | I can't say much about writing new code though.
        
         | fpgaminer wrote:
         | Claude will do the same and start over if things get too
         | bad. At least I've seen it do that when its edits went
         | haywire and trashed everything.
        
         | eknkc wrote:
         | Same here. Tried to implement a new feature on one of our apps
         | to test it. It completely screwed things up. Used undefined
         | functions and stuff. After a couple of iterations of error
         | reporting and fixing I gave up.
         | 
         | Claude did it fine but I was not happy with the code. What
         | Gemini came up with was much better but it could not tie things
         | together at the end.
        
           | taberiand wrote:
           | Sounds like you can use Gemini to create the initial code,
           | then have Claude review and finalise what Gemini comes up
           | with.
        
         | skerit wrote:
         | I tried it too, it was so bad. I got the same "revert"
         | behaviour after only 15 minutes.
        
       | incomingpain wrote:
       | Giving this a try, I'm rather astounded at how well my tests
       | have gone.
       | 
       | That's a ton of free quota. This has been immensely more
       | successful than Void IDE.
        
       | dmd wrote:
       | Well, color me not impressed. On my very first few tries, out of
       | 10 short-ish (no more than 300 lines) Python scripts I asked it
       | to clean up and refactor, it mangled 4 so badly they no longer
       | ran, due to syntax (mostly quoting) errors and mis-indentation.
       | Claude has _never_ done that.
        
       | atonse wrote:
       | This is all very cool, but I hate to be the "look at the shiny
       | lights" guy...
       | 
       | How did they do that pretty "GEMINI" gradient in the terminal?
       | Is that a thing we can do nowadays? It doesn't seem to be some
       | blocky gradient where each character is a different color. It's
       | a true gradient.
       | 
       | (yes I'm aware this is likely a total clone of claude code, but
       | still am curious about the gradient)
        
         | krat0sprakhar wrote:
         | Look at the package.json inside the CLI folder
         | https://www.npmjs.com/package/ink-gradient
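         | 
         | If you're curious how that package is typically wired up, here
         | is a minimal sketch (the hex stops are made up, not the ones
         | the Gemini CLI actually ships with):
         | 
         |     import React from 'react';
         |     import {render, Text} from 'ink';
         |     import Gradient from 'ink-gradient';
         | 
         |     // Hypothetical banner component; the color stops are
         |     // illustrative only, not taken from the Gemini CLI source.
         |     const Banner = () => (
         |       <Gradient colors={['#4796E4', '#847ACE', '#C3677F']}>
         |         <Text>GEMINI</Text>
         |       </Gradient>
         |     );
         | 
         |     render(<Banner />);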
        
         | ars wrote:
         | Maybe this will help:
         | https://github.com/armanirodriguez/ansifade-py
         | 
         | And it is a blocky gradient; each character is its own color.
         | It's just that the gradient they chose changes slowly enough
         | that you don't notice.
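         | 
         | A standalone sketch of that per-character approach, using
         | 24-bit ANSI color codes (the endpoint colors are arbitrary):
         | 
         |     // Each character gets its own interpolated color; with
         |     // enough characters the steps are too small to notice.
         |     const text = 'GEMINI CLI';
         |     const from = {r: 0x47, g: 0x96, b: 0xe4};  // arbitrary start
         |     const to = {r: 0xc3, g: 0x67, b: 0x7f};    // arbitrary end
         | 
         |     const out = [...text].map((ch, i) => {
         |       const t = i / Math.max(text.length - 1, 1);
         |       const r = Math.round(from.r + (to.r - from.r) * t);
         |       const g = Math.round(from.g + (to.g - from.g) * t);
         |       const b = Math.round(from.b + (to.b - from.b) * t);
         |       return `\x1b[38;2;${r};${g};${b}m${ch}`;
         |     }).join('') + '\x1b[0m';  // reset attributes at the end
         | 
         |     console.log(out);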
        
         | cperry wrote:
         | We went through several versions of this; it was the second
         | most fun part after authoring the witty loading phrases.
        
       | bbminner wrote:
       | TIL about two more cool Gemini-powered prototyping tools: both
       | 1) the Canvas option in the Gemini web (!) app and 2) the Build
       | panel in Google AI Studio can generate amazing multi-file,
       | shareable web apps in seconds.
        
       | cheesecompiler wrote:
       | In my experience Gemini is consistently more conservative and
       | poorer at reasoning; it regurgitates, like a local Llama
       | instance.
        
       | simonw wrote:
       | Here's the system prompt, rendered as a Gist:
       | https://gist.github.com/simonw/9e5f13665b3112cea00035df7da69...
       | 
       | More of my notes here:
       | https://simonwillison.net/2025/Jun/25/gemini-cli/
        
         | dawnerd wrote:
         | It says to only use absolute paths but the temp file example
         | uses relative. Nice.
        
         | steren wrote:
         | Because Gemini CLI is OSS, you can also find the system prompt
         | at: https://github.com/google-gemini/gemini-
         | cli/blob/4b5ca6bc777...
        
       | virgildotcodes wrote:
       | So far I'm getting mixed results. I noted in its memory and in
       | GEMINI.md a couple of directives like "only generate migration
       | files using cli tools to ensure the timestamps are correct" and
       | "never try to run migrations yourself" and it failed to follow
       | those instructions a couple of times within ~20 minutes of
       | testing.
       | 
       | In comparison to Claude Code Opus 4, it seemed much more eager
       | to go on a wild goose chase: it tried to fix a problem by
       | creating calls to new RPCs that attempted to modify columns that
       | didn't exist or had a different type, and when that became a
       | problem, its solution was to propose migration after migration
       | to reshape the db schema to fit the RPC it had defined.
       | 
       | Reminded me of the bad old days of agentic coding circa late
       | 2024.
       | 
       | I'm usually a big fan of 2.5 Pro in an analysis / planning
       | context. It seems to just weirdly fall over when it comes to tool
       | calling or something?
        
       | xyst wrote:
       | Google trying very hard to get people hooked on their product.
       | Spending billions in marketing, product development, advertising.
       | 
       | Not impressed. These companies have billions at their disposal,
       | and probably pay $0 in tax, and the best they can come up with is
       | this?
        
       | sameermanek wrote:
       | Gemini is by far the most confusing product of all time. The
       | paid version is available in 3 forms:
       | 1. Gemini Pro, which gets you more Google Drive storage and some
       | form of access to Veo, so people obviously get that.
       | 2. Google AI Studio, just to piss off Redmond devs, and which is
       | used by no one outside Google.
       | 3. This CLI, which has its own plan.
       | 
       | Then there are 3rd-party channels: if you have a recent Samsung
       | phone, you get 1 year of access to AI features powered by
       | Gemini, after which you need to pay. And lord knows where else
       | Google has been integrating Gemini now.
       | 
       | I've stopped using Google's AI now. It's like they have dozens
       | of teams within Gemini on completely different Slack sessions.
        
         | FergusArgyll wrote:
         | The positive side is, there are a ton of free offerings you
         | can pick up: Google Workspace has one, AI Studio has one,
         | DeepMind, etc.
        
       | syedumaircodes wrote:
       | I don't think I'll ever use tools like this. I know the CLI is
       | cool and all, but I always prefer a GUI.
        
       | ac360 wrote:
       | Is the CLI ideal for coding assistants, or is the real draw using
       | Anthropic models in their pure, unmediated form?
        
       | WhereIsTheTruth wrote:
       | typescript :facepalm:
        
       | eisbaw wrote:
       | Can't we standardize on AGENTS.md instead of all these tool-
       | specific CLAUDE.md and now GEMINI.md files?
       | 
       | I just symlink them to AGENTS.md now.
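       | 
       | Something like this in a small setup script does the trick (a
       | sketch using Node's fs API; the filenames are just the
       | conventions mentioned above):
       | 
       |     import {existsSync, symlinkSync} from 'node:fs';
       | 
       |     // Point each tool-specific context file at one shared
       |     // AGENTS.md so there is a single source of truth.
       |     for (const alias of ['CLAUDE.md', 'GEMINI.md']) {
       |       if (!existsSync(alias)) symlinkSync('AGENTS.md', alias);
       |     }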
        
       | UncleOxidant wrote:
       | Definitely not as nice as using Cline or Kilo Code within VS
       | Code. One thing I ran into right away: I wanted it to compare
       | the current repo/project it was started in with another
       | repo/project in a different directory. It won't do that: "I
       | cannot access files using relative paths that go outside of my
       | designated project directory." I can do that in Kilo Code for
       | sure, and it's been pretty handy.
        
       | sergiotapia wrote:
       | Gemini CLI wanted to run `git log` and I accidentally hit "Yes,
       | you can call git without confirmation", and I just realized the
       | AI may decide to git push or something like that on its own.
       | 
       | How do I reset permissions so it always asks again for `git`
       | invocations?
       | 
       | Thanks!
        
         | nikcub wrote:
         | This is where the fine-grained permissions of MCP and the
         | ability to disable individual tools win over calling CLI
         | tools.
       | FergusArgyll wrote:
       | It never hit me this hard how rich Google is. 60 requests per
       | minute for free!
        
       | opengears wrote:
       | We need laws requiring these megacorps to show, in an easy and
       | understandable form, which data is collected and what happens to
       | it. If they fail to explain this (in 5 sentences or less), they
       | should pay insane fines per day. It is the only way (and it
       | solves the US debt crisis at the same time). It is ridiculous
       | that in 2025 we still do not know which data is processed.
        
       | codeulike wrote:
       | I got it to look at the Java source code of my old libGDX 2D
       | puzzle game, and it was able to explain what the game would be
       | like to play, what the objectives were, and how the puzzle
       | elements worked. Impressed.
        
       | djha-skin wrote:
       | I already use goose[1]. It lets me connect through OpenRouter.
       | Then I can use Gemini without having to give Google Cloud my
       | credit card. Also, OpenRouter makes it easier to switch between
       | models, deals with Claude's silly rate limiting messages
       | gracefully, and I only have to pay in one place.
       | 
       | 1: https://block.github.io/goose/
        
       | stvnbn wrote:
       | Why does anyone build things for the console in
       | JavaScript/TypeScript?
        
       | ricksunny wrote:
       | For me it won't be a real command-line tool until I run into a
       | problem and I get my very own open-source champion on support
       | forums undermining my self-confidence & motivation by asking me
       | "Why would I want to do that?"
        
       | matiasmolinas wrote:
       | Nice to have https://github.com/google-gemini/gemini-
       | cli/discussions/1572
        
       | gdudeman wrote:
       | Review after 1 hour in:
       | 
       | Gemini CLI does not take new direction especially well. After
       | planning, I asked it to execute and it just kept talking about
       | the plan.
       | 
       | Another time when I hit escape and asked it to stop and undo the
       | last change, it just plowed ahead.
       | 
       | It makes a lot of mistakes reading and writing to files.
       | 
       | Some, but by no means all, of the obsequious quotes from my
       | first hour with the product:
       | 
       | - "You've asked a series of excellent questions that have taken
       |   us on a deep dive ..."
       | 
       | - "The proposed solution is not just about clarity; it's also
       |   the most efficient and robust."
        
         | ramoz wrote:
         | The Gemini web UI is extremely apologetic and self-deprecating
         | at times.
         | 
         | Therefore I was not surprised to see Gemini spiral into an
         | infinite loop of self-deprecation - it literally abandoned the
         | first command and spiraled into 5-10 line blocks of "i suck".
         | 
         | ---
         | 
         | Right now there is one CLI that influences and stands far
         | beyond all others: smooth UX, and, more critically, some
         | "natural" or inherent ability to use its tools well.
         | 
         | Gemini can also achieve this, but I think it's clear the
         | leader is ahead because they have a highly integrated training
         | process spanning the base model and agentic tool use.
        
       | xgpyc2qp wrote:
       | npx again ;-( Why do people keep using it for CLI applications?
       | 
       | While writing this comment, I was thinking that there should be
       | some packaging tool that creates binaries from npx CLI tools. I
       | remember such things for Python. The binaries were fat, but that
       | is better than keeping Node.js installed on my OS.
        
         | toephu2 wrote:
         | What's wrong with nodejs being installed on your OS?
        
       | kmod wrote:
       | I've found a method that gives me a lot more clarity about a
       | company's privacy policy:
       | 
       |     1. Go to their enterprise site
       |     2. See what privacy guarantees they advertise above the
       |        consumer product
       |     3. Conclusion: those are things that you do not get in the
       |        consumer product
       | 
       | These companies do understand what privacy people want and how to
       | write that in plain language, and they do that when they actually
       | offer it (to their enterprise clients). You can diff this against
       | what they say to their consumers to see where they are trying to
       | find wiggle room ("finetuning" is not "training", "ever got free
       | credits" means not-"is a paid account", etc)
       | 
       | For Code Assist, here's their enterprise-oriented page vs their
       | consumer-oriented page:
       | 
       | https://cloud.google.com/gemini/docs/codeassist/security-pri...
       | 
       | https://developers.google.com/gemini-code-assist/resources/p...
       | 
       | It seems like these are both incomplete and one would need to
       | read their overall pages, which would be something more like
       | 
       | https://support.google.com/a/answer/15706919?hl=en
       | 
       | https://support.google.com/gemini/answer/13594961?hl=en#revi...
        
       | mofle wrote:
       | Like OpenAI's Codex and Anthropic's Claude Code, this one is
       | also built with Ink (React for the terminal).
       | 
       | https://github.com/vadimdemedes/ink
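       | 
       | For anyone who hasn't seen Ink before, a minimal, self-contained
       | example of the model (a toy component, unrelated to any of the
       | three CLIs mentioned above):
       | 
       |     import React, {useState} from 'react';
       |     import {render, Box, Text, useApp, useInput} from 'ink';
       | 
       |     // Ink renders React components to the terminal and redraws
       |     // them on state changes, which is what makes these TUIs
       |     // feel like ordinary React apps.
       |     const Counter = () => {
       |       const [count, setCount] = useState(0);
       |       const {exit} = useApp();
       | 
       |       useInput(input => {
       |         if (input === 'q') exit();               // quit on "q"
       |         if (input === '+') setCount(c => c + 1);
       |       });
       | 
       |       return (
       |         <Box flexDirection="column">
       |           <Text color="cyan">
       |             Pressed "+" {count} times (press q to quit)
       |           </Text>
       |         </Box>
       |       );
       |     };
       | 
       |     render(<Counter />);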
        
       | bityard wrote:
       | Is there any way to run this inside a docker container? When I
       | tried, it barfed trying to run `xdg-open`. And I don't see any
       | auth-related options in the `--help` output.
        
       ___________________________________________________________________
       (page generated 2025-06-25 23:00 UTC)