[HN Gopher] An update to our pricing
       ___________________________________________________________________
        
       An update to our pricing
        
       Author : mefengl
       Score  : 85 points
       Date   : 2025-04-21 19:05 UTC (3 hours ago)
        
 (HTM) web link (windsurf.com)
 (TXT) w3m dump (windsurf.com)
        
       | jawns wrote:
       | As an avid Windsurf user, I support this simplification of the
       | pricing.
       | 
       | In nearly all cases, I don't care how many individual steps the
       | model needs to take to accomplish the task. I just want it to do
       | what I've asked it to do.
       | 
       | It is curious, however, that this move is coinciding with rumors
       | of OpenAI attempting to acquire Windsurf. If an acquisition were
       | imminent, it would seem strange to mess with the pricing
       | structure soon beforehand.
        
         | jelling wrote:
         | Clarifying the pricing would make it easier to value the
         | revenue from current and future users. And naturally they
         | rounded up to leave themselves literal margin for error.
        
          | Jcampuzano2 wrote:
          | Curious - what makes you pick Windsurf over other editors? I
          | currently use Cursor but have seen more news about Windsurf,
          | especially after the recent news with respect to OpenAI. Do you
          | find it better, worse, etc.? And are there things it does
          | better for you than other editors?
        
           | leobuskin wrote:
           | I don't like the "vibe" term nowadays, but when you mix two
           | pretty abstract domains (AI and development), it's all about
           | vibes and aura. Some model/agent works perfectly for one of
           | us (let's keep in mind, we have a bunch of factors, from
           | language to the complexity of the implementation), and does
           | everything wrong for others.
           | 
           | You just can't measure it properly, outside of experiments
           | and building your own assessment within your context. All the
            | recommendations here just don't work. "Try all of them, stick
            | with one for a while, don't forget to retry others on a
            | regular basis" - that's my motto today.
           | 
           | Cursor (as an agent/orchestrator) didn't work for me at all
           | (Python, low-level, no frameworks, not webdev). I fell in
           | love with Windsurf ($60 tier initially). Switched entirely to
           | JetBrains AI a few days ago (vscode is not friendly for me,
           | PyCharm rocks), so happy about the price drop.
        
       | croes wrote:
       | Related?
       | 
       | https://news.ycombinator.com/item?id=43708725
        
         | dang wrote:
         | Thanks! Macroexpanded:
         | 
         |  _Why is OpenAI buying Windsurf?_ -
         | https://news.ycombinator.com/item?id=43743993 - April 2025 (218
         | comments)
         | 
         |  _OpenAI looked at buying Cursor creator before turning to
         | Windsurf_ - https://news.ycombinator.com/item?id=43716856 -
         | April 2025 (115 comments)
         | 
         |  _OpenAI in Talks to Buy Windsurf for About $3B_ -
         | https://news.ycombinator.com/item?id=43708725 - April 2025 (44
         | comments)
        
       | rudedogg wrote:
       | > To train, develop, and improve the artificial intelligence,
       | machine learning, and models that we use to support our Services.
       | We may use your Log and Usage Information and Prompts and Outputs
       | Information for this purpose.
       | 
       | https://windsurf.com/privacy-policy
       | 
       | Am I the only one bothered by this? Same with Gemini Advanced
       | (paid) training on your prompts. It feels like I'm paying with
        | money, but also handing over my entire codebase to improve your
        | products. Can't you generate synthetic training data at this
        | point, and use the massive amount of Q/A online, instead of
        | requiring this?
        
         | simonw wrote:
         | Yeah that's a bad look. If I have an API key visible in my code
         | does that get packaged up as a "prompt" automatically? Could it
         | be spat out to some other user of a model in the future?
         | 
         | (I assume that there's a reason that wouldn't happen, but it
         | would be nice to know what that reason is.)
        
           | isjustintime wrote:
           | I'm also interested in the details on how this works in
           | practice. I know that there was a front page post a few weeks
           | ago about how Cursor worked, and there was a short blurb
           | about how sets of security prompts told the LLM to not do
           | things like hard code API keys, but nothing on the training
           | side.
        
         | blibble wrote:
         | it's the reason they bought it...
        
         | Workaccount2 wrote:
         | Gemini doesn't use paid API prompts for training.[1]
         | 
          | I believe it's just for free usage and the web app.
         | 
         | [1]https://ai.google.dev/gemini-api/docs/pricing
        
           | Alifatisk wrote:
           | That's what I thought
        
           | rudedogg wrote:
           | Yeah, I was referring to their webapp/Chat, aka Gemini
           | Advanced. It uses your prompts for training unless you turn
           | off chat history completely, or are in their "Workspace"
           | enterprise version.
           | 
           | https://support.google.com/gemini/answer/13594961?hl=en
           | 
           | > What data is collected and how it's used
           | 
           | > Google collects your chats (including recordings of your
           | Gemini Live interactions), what you share with Gemini Apps
           | (like files, images, and screens), related product usage
           | information, your feedback, and info about your location.
           | Info about your location includes the general area from your
           | device, IP address, or Home or Work addresses in your Google
           | Account. Learn more about location data at
           | g.co/privacypolicy/location.
           | 
            | > Google uses this data, consistent with our Privacy Policy,
            | to provide, improve, and develop Google products and services
            | and machine-learning technologies, including Google's
            | enterprise products such as Google Cloud.
            |
            | > Gemini Apps Activity is on by default if you are 18 or
            | older. Users under 18 can choose to turn it on. If your
            | Gemini Apps Activity setting is on, Google stores your Gemini
            | Apps activity with your Google Account for up to 18 months.
            | You can change this to 3 or 36 months in your Gemini Apps
            | Activity setting.
        
         | Alifatisk wrote:
         | No way Gemini Advanced user content is also being used for
         | training?
        
         | amelius wrote:
         | Windsurf: where the users provide the wind and they do all the
         | surfing.
        
         | kmeisthax wrote:
          | Without exception, every AI company is a play for your data. AI
          | _requires_ a continuing supply of new data to train on; it does
          | not "get better" merely by using the existing training sets
          | with more compute.
         | 
         | Furthermore, synthetic data is a flawed concept. At a minimum,
         | it tends to propagate and amplify biases in the model
         | generating the data. If you ignore that, there's also the
         | fundamental issue that data doesn't exist purely to run more
         | gradient descent, but to provide new information that isn't
         | already compressed into the existing model. Providing
         | additional copies of the same information cannot help.
        
           | kadushka wrote:
            |  _it does not "get better" merely by using the existing
            | training sets with more compute._
            |
            | Pretty sure it does - that's the whole point of using more
            | test-time compute. Also, a lot of research effort goes into
            | improving data efficiency.
        
         | parliament32 wrote:
         | > Same with Gemini Advanced (paid) training on your prompts
         | 
         | I'm not sure if this is true.
         | 
         | > 17. Training Restriction. Google will not use Customer Data
         | to train or fine-tune any AI/ML models without Customer's prior
         | permission or instruction.
         | 
         | https://cloud.google.com/terms/service-terms
         | 
         | > This Generative AI for Google Workspace Privacy Hub covers...
         | the Gemini app on web (i.e. gemini.google.com) and mobile
         | (Android and iOS).
         | 
         | > Your content is not used for any other customers. Your
         | content is not human reviewed or used for Generative AI model
         | training outside your domain without permission.
         | 
         | > The prompts that a user enters when interacting with features
         | available in Gemini are not used beyond the context of the user
         | trust boundary. Prompt content is not used for training
         | generative AI models outside of your domain without your
         | permission.
         | 
         | > Does Google use my data (including prompts) to train
         | generative AI models? No. User prompts are considered customer
         | data under the Cloud Data Processing Addendum.
         | 
         | https://support.google.com/a/answer/15706919
        
           | simonw wrote:
           | Right, it's the free Gemini that has this:
           | https://ai.google.dev/gemini-api/terms#unpaid-services
           | 
           | > When you use Unpaid Services, including, for example,
           | Google AI Studio and the unpaid quota on Gemini API, Google
           | uses the content you submit to the Services and any generated
           | responses to provide, improve, and develop Google products
           | and services and machine learning technologies, including
           | Google's enterprise features, products, and services,
           | consistent with our Privacy Policy.
        
           | rudedogg wrote:
           | That's for Google Cloud APIs.
           | 
           | See my post here about Gemini Advanced (the web chat app)
           | https://news.ycombinator.com/item?id=43756269
        
         | graeme wrote:
         | Oh, that's not great. Cursor has a privacy mode where you can
         | avoid this.
         | 
         | >If you enable "Privacy Mode" in Cursor's settings: zero data
         | retention will be enabled, and none of your code will ever be
         | stored or trained on by us or any third-party.
         | 
         | https://www.cursor.com/privacy
        
         | 627467 wrote:
          | Hey, we all want to have our cake and eat it too, but I'm
          | (kinda?) surprised that people happily use services that have
          | been trained on large swaths of "available" data and yet don't
          | want to contribute their own. Even if you're paying: why the
          | selfishness?
        
           | sdesol wrote:
            | I think it's more that LLMs should be treated as a utility
            | service. Unless Google and others can clearly show the
            | training data involved, the price that providers can charge
            | for LLMs should be capped. I have no issue with contributing
            | my conversations and my open source code, but in return I
            | expect a fair price.
        
       | dockercompost wrote:
       | This is a welcome change. It feels bad when your 250 line doc
       | eats up 3 credits just being read and analyzed.
        
       | Animats wrote:
       | Could someone boil this down to "xx% price increase", please.
        
         | tomschwiha wrote:
          | As far as I understand, it's not an increase but an update.
        
       | elashri wrote:
        | I was a user on one of their pro plans with some discount. I
        | remember getting confused between the two token limits (flow
        | actions and the other one). I am puzzled now trying to figure
        | out whether the change amounts to an effective decrease or
        | increase in pricing. They have eliminated flow actions now. To
        | be honest, I may simply not understand the change.
       | 
        | The only thing I notice is that 250 additional credits cost $10,
        | at which point it is cheaper to get a second $15 subscription,
        | which gives you another 500 credits, rather than paying $20 in
        | top-ups. That is, if you think you will need that many.
        
         | rohanphadte wrote:
          | The previous two limits were user prompt credits (500) and
          | flow action credits (1500).
          |
          | For power users, flow actions would deplete much more quickly
          | (they were consumed every time the LLM analyzed a file, made an
          | edit, etc.), so Windsurf removed the flow action limit. Now
          | you're only charged for 500 messages to the AI, which is
          | strictly better for the user.
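
          A minimal sketch of how the two schemes compare, using the
          limits quoted above (500 prompt credits and 1500 flow-action
          credits on the old plan; 500 prompt credits on the new one).
          The flow-actions-per-prompt values are illustrative assumptions,
          not Windsurf's actual accounting:

              # Python sketch; only the 500/1500 limits come from the
              # thread, the per-prompt step counts are assumptions.
              OLD_PROMPT_CREDITS = 500
              OLD_FLOW_CREDITS = 1500
              NEW_PROMPT_CREDITS = 500  # flow actions no longer metered

              def old_plan_prompts(flow_actions_per_prompt: int) -> int:
                  """Prompts available before either old pool runs out."""
                  return min(OLD_PROMPT_CREDITS,
                             OLD_FLOW_CREDITS // flow_actions_per_prompt)

              for steps in (1, 3, 10):
                  print(f"{steps:>2} flow actions/prompt: "
                        f"old plan {old_plan_prompts(steps)}, "
                        f"new plan {NEW_PROMPT_CREDITS}")
              # prints: " 1 flow actions/prompt: old plan 500, new plan 500"
              #         " 3 flow actions/prompt: old plan 500, new plan 500"
              #         "10 flow actions/prompt: old plan 150, new plan 500"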
        
       | SquareWheel wrote:
       | I tried Windsurf for the first time last week, and I had pretty
       | mixed results. On the positive side, sometimes the tool would
       | figure out exactly what I was doing, and it truly helped me. This
        | was especially the case when performing a repetitive action,
        | like renaming something or making the same change across
       | multiple related files.
       | 
       | Most of the time though, it just got in the way. I'd press tab to
       | indent a line, and it'd instead jump half-way down the file to
        | delete some random code instead. On more than one occasion I'd be
        | typing happily, and I'd see it had gone off and completely
        | mangled some unrelated section without my noticing. I felt like I
       | needed to be extremely attentive when reviewing commits to make
       | sure nothing was astray.
       | 
       | Most of its suggestions seemed hyper-fixated on changing my
       | indent levels, adding braces where they weren't supposed to go,
       | or deleting random comments. I also found it broke common
       | shortcuts, like tab (as above), and ctrl+delete.
       | 
       | The editor experience also felt visually very noisy. It was
       | constantly popping up overlays, highlighting things, and
       | generally distracting me while I was trying to write code. I
       | really wished for a "please shut up" button.
       | 
        | The chat feature also seemed iffy. It did identify one bug for
        | me, though many times when I asked it to investigate something,
        | it would get stuck scanning through files endlessly until it just
        | terminated the task with no output. I was using the unlimited
        | GPT-4.1 model, so maybe I needed to switch to a model with a
        | longer context window? I would have expected some kind of error,
        | at least.
       | 
       | So I don't know. Is anyone else having this experience with
       | Windsurf? Am I just "holding it wrong"? I see people being pretty
       | impressed with this and Cursor, but it hasn't clicked for me yet.
       | How do you get it to behave right?
        
         | mediaman wrote:
         | I find Cursor annoying too, with dumb suggestions getting in
         | the way of my attempts to tab-indent. They should make shift-
         | tab the default way to accept its suggestion, instead of tab,
         | or at least let shift-tab indent without accepting anything if
         | they really want to keep tab as default autocomplete.
        
         | jawns wrote:
         | I find it's very model-dependent. You would think that the more
         | powerful models would work the best, but that hasn't been the
         | case for me. Claude Sonnet tends to do the best job of
         | understanding my intent and not screwing things up.
         | 
         | I've also found that test-driven development is even more
         | critical for these tools than for human devs. Fortunately, it's
         | also far less of a chore.
        
       | braza wrote:
        | Honest question: what does this new "credits" pricing, which
        | every product has today, actually mean?
        |
        | It sounds quite opaque, given that you will need to track
        | utilisation all the time for something like prompting.
        |
        | Does anyone have insight into why the old flat pricing or plain
        | usage-based pricing isn't used in these new AI products, instead
        | of this abstract concept of credits?
        
         | ariejan wrote:
          | With Windsurf I'm able to pick any of the premium language
          | models. E.g. Claude 3.7 Sonnet costs 1 credit / prompt, whereas
          | the thinking model costs 1.25 credits, and o3 costs a whopping
          | 7.5 credits.
          |
          | It's simply passing on each model's respective cost, I think.
          | I can imagine it's hard to come up with an affordable /
          | attractive flat rate _and_ support all those differently
          | priced models.
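
          As a rough sense of scale, here is what those per-prompt rates
          imply for the 500-credit monthly allotment mentioned elsewhere
          in the thread (both the rates and the 500 figure come from the
          comments here and may not be exact):

              # Back-of-the-envelope: prompts per month at the quoted
              # per-model credit costs, assuming 500 credits/month.
              MONTHLY_CREDITS = 500
              CREDITS_PER_PROMPT = {
                  "Claude 3.7 Sonnet": 1.0,
                  "Claude 3.7 Sonnet (thinking)": 1.25,
                  "o3": 7.5,
              }

              for model, cost in CREDITS_PER_PROMPT.items():
                  print(f"{model}: ~{MONTHLY_CREDITS / cost:.0f} prompts")
              # -> ~500, ~400, and ~67 prompts respectively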
        
       | Jcampuzano2 wrote:
        | With all the recent news about Windsurf and it being thrown into
        | the spotlight, the allure of its lower price compared to Cursor
        | is definitely there. But does it actually work on par or better?
       | 
        | Anybody who uses Windsurf as their daily driver and has
        | experience with other editors care to chime in, for those of us
        | who are considering it as an alternative?
        
       | sunaookami wrote:
       | How does Windsurf compare to Cursor? Does anybody have enough
       | experience with both? I'm only using Cursor right now but it
       | seems Windsurf is now a bit cheaper?
        
         | Abde-Notte wrote:
          | I've used both a bit. Cursor has more features and feels more
          | polished, but Windsurf is faster and cheaper. Windsurf also has
          | a cleaner UI. If you don't need all the extra stuff in Cursor,
          | Windsurf's a pretty solid option.
        
       | kmeisthax wrote:
       | Solely from reading the headline, I'm going to assume "update"
       | means "price increase". I'm going to read the rest of the article
       | before continuing the comment.
       | 
       | Alright, I'm back, this sounds like one of the rare times in
       | which a pricing "update" is actually an update and not just a
       | disguised increase. But I'm also not a Windsurf user, so let me
       | jump through some of the comments and double-check that I didn't
       | get hoodwinked.
       | 
       | ...and the comments seem pretty positive, at least regarding the
       | pricing. So I think this is _actually_ one of those few times
       | "update" means "update".
        
       | amiantos wrote:
       | Cursor and Windsurf pricing really turned me off. I prefer Claude
        | Code's direct API costs, because the cost feels more quantifiable
        | to me. I can load up Claude Code, implement a feature, and
       | close it, and I get a solid dollar value of how much that cost
       | me. It makes it easier for me to mentally write off the low cost
       | of wasteful requests when the AI gets something wrong or starts
       | to spin its wheels.
       | 
       | With Cursor/Windsurf, you make requests, your allowed credit
       | quantity ticks down (which creates anxiety about running out),
        | and you're trying to do some mental math to figure out what those
        | requests actually cost you. It feels like a method to
       | obfuscate the real cost to the user and also create an incentive
       | for the user to not use the product very much because of the
       | rapidly approaching limits during a focus/flow coding session. I
       | spent about an hour using Cursor Pro and had used up over 30% of
       | my monthly credits on something relatively small, which made me
       | realize their $20/mo plan likely was not going to meet my needs
       | and how much it would really cost me seemed like an unanswerable
       | question.
       | 
       | I just don't like it as a customer and it makes me very
        | suspicious of the business model as a result. I spent about $50
        | in a week with Claude Code, and I bet I could easily spend more.
       | The idea that Cursor and Windsurf are suggesting a $20/mo plan
       | could be a good fit for someone like me, in the face of that $50
       | in one week figure, further illustrates that there is something
       | that doesn't quite match up with these 'credit' based billing
       | systems.
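
        For comparison, pay-as-you-go cost is straightforward to estimate
        directly from token counts. A small sketch, using illustrative
        per-token rates rather than any provider's actual current pricing:

            # Rough per-session cost for a pay-as-you-go API such as the
            # one Claude Code bills against. The rates below are
            # assumptions; check the provider's current price list.
            INPUT_PER_MTOK = 3.00    # $ per million input tokens (assumed)
            OUTPUT_PER_MTOK = 15.00  # $ per million output tokens (assumed)

            def session_cost(input_tokens: int, output_tokens: int) -> float:
                return (input_tokens / 1e6) * INPUT_PER_MTOK \
                     + (output_tokens / 1e6) * OUTPUT_PER_MTOK

            # e.g. a feature that reads ~200k tokens of code and context
            # and generates ~30k tokens of output:
            print(f"${session_cost(200_000, 30_000):.2f}")  # -> $1.05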
        
         | bogtog wrote:
         | > I spent about an hour using Cursor Pro and had used up over
         | 30% of my monthly credits
         | 
         | Sorry, but how is this possible? They give 500 credits in a
         | month for the "premium" queries. I don't even think I'd be able
         | to ask more than one question per minute even with tiny
         | requests. I haven't tried the Agent mode. Does that burn
         | through queries?
        
           | amiantos wrote:
           | I had to do a little digging to respond to this properly.
           | 
            | I was on the "Pro Trial" where you get 150 premium requests,
            | and I had very quickly used 34 of them, which admittedly is
            | 22% and not 30%. Their pricing page says that the Free plan
           | includes "Pro two-week trial", but they do not explain that
           | on the pro trial you only get 150 premium requests and that
           | on the real Pro plan you get 500 premium requests. So you're
           | correct to be skeptical, I did not use 30% of 500 requests on
           | the Pro plan. I used 22% of the 150 requests you get on the
           | Trial Pro plan.
           | 
           | And yes, I think the agent mode can burn through credits
           | pretty quickly.
        
           | adamgordonbell wrote:
           | With an agent, one request could be 20 or more iterations
        
         | asdsadasdasd123 wrote:
          | How do you run out of premium requests? I've peeked at the
          | stats across my company and never seen this happen.
        
         | janpaul123 wrote:
         | Yeah, my colleague just wrote about this exact problem of
         | incentive misalignment with Cursor and Windsurf
         | https://blog.kilocode.ai/p/why-cursors-flat-fee-pricing-will
         | 
          | The economist in me says "just show the prices", though the
         | psychologist in me says "that's hella stressful". ;)
        
           | ricksunny wrote:
            | Why does the psychologist in you say "that's hella
            | stressful"? I.e., stressful for whom? What is the source of
            | the stress?
        
             | janpaul123 wrote:
             | Seeing the $ every time I do something, even if it's $0.50,
             | can be a little stressful. We should have an option to hide
             | it per-request and just show a progress bar for the current
             | topup.
        
         | newlisp wrote:
          | I haven't used either, but reading Cursor's website, they let
          | you add your own Claude API key. Do they still fiddle with your
          | requests when you use your own key?
        
           | amiantos wrote:
           | When you go to add your own API key into Cursor, you get a
           | warning message that several Cursor features cannot be used
           | if you plug in your own API key. I would totally have done
           | that if not for that message.
        
         | lherron wrote:
         | I chalk it up to VC subsidized pricing. I use my monthly Cursor
         | quota, then switch to Claude Code when I run out.
        
         | adamgordonbell wrote:
         | I use windsurf, have the largest plan and still need to top up
         | quite a bit.
         | 
         | So for me it has a price per task, sort of, because you are
         | topping it up by paying another 10 dollars at a time as you run
         | out.
         | 
         | The plans aren't the right size for professional work, but
         | maybe they wanted to keep the price points low?
        
         | moshegramovsky wrote:
         | I'm paying $30/month for Gemini. It's worth every damn penny
         | and then some and then some and then some. It's absolutely
         | amazing at code reviews. Much more thorough than a human code
         | reviewer, including me. It humbles me sometimes, which is good.
         | Unless you can't use Google products, I'd seriously give it a
         | try.
         | 
         | Now, I do still use ChatGPT sometimes. It recently helped me
         | find a very simple solution to a pretty obscure compiler error
        | that I'd never encountered in my several-decades-long career as
         | a developer. Gemini didn't even get close.
         | 
         | Most of the other services seem focused on using the AWS pay-
         | as-you-go pricing model. It's okay to use that pricing model
         | but it's not easy for me to pitch it at work when I can't say
         | exactly what the monthly cost is going to be. I love being a
         | developer, but I feel like I'd like it a lot less without
         | Gemini. I'm much more productive and less stressed too.
        
       | cirrus3 wrote:
        | Anything that requires me to use a different IDE is a non-starter
       | for me.
       | 
       | I can imagine it is a lot easier to develop these things as a
       | custom version of VSCode instead of plugins/extensions for a
       | handful of the popular existing IDEs, but is that really a good
       | long term plan? Is the future going to be littered with a bunch
       | of one-off custom IDEs? Does anyone want that future?
        
         | ramesh31 wrote:
          | >Anything that requires me to use a different IDE is a non-
          | starter for me.
         | 
         | Windsurf is, ultimately, just an IDE extension. They shipped a
         | forked VSCode with their branding for... some reason. But the
         | extension is available in practically every IDE/editor.
        
           | newlisp wrote:
           | The reason for forking is the restrictions of the vscode API,
           | so no, the extensions and the fork are not the same.
        
             | ramesh31 wrote:
             | The extension seems to provide the exact same
             | functionality, so I'm not sure what's really needed there.
             | In fact, I have had better results with the Jetbrains and
             | Sublime extensions than the Windsurf editor.
        
       | ramesh31 wrote:
       | As a daily full time Windsurf user, I can emphatically state that
       | it is strictly inferior to multiple free/open source tools like
       | Cline and Claude Code that work directly with Anthropic/OpenAI
       | keys, no "flow credits" involved.
       | 
        | Good on them for capturing the enterprise market, but that's
        | about all it is: an enterprise-friendly wrapper for a second-rate
        | VSCode extension.
        
         | victorbjorklund wrote:
         | Claude code is neither free nor open source.
        
           | ramesh31 wrote:
           | >Claude code is neither free nor open source.
           | 
           | "Free" as in no "middleman accounts" or other nonsense. You
           | pay the base token rate directly to Anthropic, and that's it.
        
             | Ylpertnodi wrote:
             | >"Free" as in [.....y]ou pay [...]
             | 
             | Possibly the worst comment I've ever read on this site.
             | And, by 'possibly', I mean 'most definitely'.
        
       ___________________________________________________________________
       (page generated 2025-04-21 23:00 UTC)