[HN Gopher] OpenAI API pricing update FAQ
       ___________________________________________________________________
        
       OpenAI API pricing update FAQ
        
       Author : thorum
       Score  : 65 points
       Date   : 2022-08-22 17:32 UTC (5 hours ago)
        
 (HTM) web link (help.openai.com)
 (TXT) w3m dump (help.openai.com)
        
       | keepquestioning wrote:
       | What are people actually using GPT-3 for?
        
         | aaroninsf wrote:
         | I can comment that a friend teaching college philosophy said
         | his department is eliminating papers at the undergraduate
         | level,
         | 
         | because it is now too easy to generate-then-polish papers at
         | that level.
         | 
          | In other words, they're too good: better than baseline, even.
          | And when touched up by a student who's putting in a modest
          | effort, they are disruptively better.
        
           | paulryanrogers wrote:
           | Could that have something to do with the subjective nature of
           | philosophy? (IANAP)
        
             | elefanten wrote:
             | Probably has more to do with how slowly the field evolves
             | (esp. wrt what undergrads are learning)... so I imagine
             | there's a huge training corpus for what is ultimately a
             | very limited number of meaningful assignments to give.
        
         | nicksrose7224 wrote:
         | I am using it for a little project to make deciding between
         | Airbnbs easier: https://www.tripwise.me/
         | 
          | You basically just enter the Airbnb URLs you're considering,
          | and it'll use GPT-3 to go through the reviews and find
          | anything negative or positive people are saying about the
          | listings.
          | 
          | You can also ask any question at all, and it'll return
          | answers (if people have mentioned it in their reviews). For
          | example: "Is the air conditioning loud?"
         | 
         | I would love to hear what others are using GPT-3 for!
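The review-question pattern described above can be sketched as a single completion prompt. This is a hypothetical reconstruction (the function name, prompt wording, and sample reviews are invented; tripwise.me's actual code is not public), with the OpenAI call left as a comment:

```python
# Sketch: pack a listing's reviews plus the user's question into one prompt,
# then (with an API key) send it to a GPT-3 completions endpoint.

def build_review_prompt(reviews, question):
    """Combine guest reviews and a question into a single completion prompt."""
    joined = "\n".join(f"- {r}" for r in reviews)
    return (
        "Here are guest reviews for an Airbnb listing:\n"
        f"{joined}\n\n"
        f"Question: {question}\n"
        "Answer based only on the reviews above. If the reviews do not "
        "mention it, say so.\n"
        "Answer:"
    )

reviews = [
    "Great location, but the air conditioner rattled all night.",
    "Host was responsive and check-in was easy.",
]
prompt = build_review_prompt(reviews, "Is the air conditioning loud?")
# With the (2022-era) openai client this would be sent along the lines of:
#   openai.Completion.create(model="text-davinci-002", prompt=prompt, max_tokens=100)
```

The "answer only from the reviews" instruction is one common way to discourage the model from inventing details the guests never mentioned.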
        
         | breck wrote:
         | 1. Generating text content for websites and blogs
         | 
         | 2. Translating documents from one language to another
         | 
         | 3. Writing e-mails, letters, and other forms of correspondence
         | 
         | 4. Crafting resumes and cover letters
         | 
         | 5. Writing marketing materials, such as brochures and flyers
         | 
         | 6. Responding to comments on HN.
         | 
         | (I added the last one)
        
           | candiddevmike wrote:
           | None of these usecases are things I want to be on the
           | receiving end of. Ick.
        
           | carvking wrote:
            | Did GPT-3 write the first 5?
        
           | seibelj wrote:
           | Is any of that actually going to be a massive industry-
           | defining feature? Marketing blogs are already negative value
           | to me, relying on AI to write something as important as my
           | resume is a non-starter, and marketing emails already go
           | straight to the trash.
           | 
           | I'm very impressed with the AI image generation, but again,
           | it replaces... stock art for blog posts?
           | 
           | If I was a creative writer then I think the AI could help me
           | with plot outlines or get over writing blocks. But otherwise
           | I'm not grasping why this is going to automate some huge
           | number of jobs.
        
             | insightcheck wrote:
             | > "If I was a creative writer then I think the AI could
             | help me with plot outlines or get over writing blocks. But
             | otherwise I'm not grasping why this is going to automate
             | some huge number of jobs."
             | 
             | Writing definitely looks like the key use case. The New
             | Yorker used GPT-3 to write the concluding sentence for one
             | of their articles [0], and I believe The Economist did the
              | same, at slightly greater length (though I couldn't find
              | the source just now). It's actually a bit hard to come
             | up with a good conclusion, so this could save some effort
             | and trouble. Separately, GPT-3 can also likely be used for
             | articles about sports, especially if the purpose is to
             | track changes on a scoreboard and present the changes with
             | an article written in prose, versus displayed in a table.
             | 
              | It's quite fair that marketing blogs may not be interesting
              | to many readers, but they can be useful for SEO/boosting
              | the search engine rankings of a website. So, a cash-strapped
              | early-stage or small company could save money by switching
              | from freelance copywriting to GPT-3 if they're not looking
              | for anything fancy, and a lot of people work as freelance
              | copywriters.
             | 
             | I agree that this may not displace anyone in the high-
             | end/high quality segment of copywriting, but it could
              | reduce demand for content farms in the future (i.e. any
              | application where low-quality copywriting written on short
              | notice is in demand).
             | 
             | [0] https://www.newyorker.com/magazine/2019/10/14/can-a-
             | machine-...
        
       | hda2 wrote:
       | Any GPT-3-level fully-opensource models yet?
        
         | minimaxir wrote:
          | BLOOM and OPT are both open-source models of similar size to
          | GPT-3 (175B parameters), but a) their text generation is not
          | as good as GPT-3's for whatever reason, and b) getting them to
          | run in a cost-effective manner is not easy, so with this new
          | price decrease it may be more economical to use GPT-3 for
          | deployment despite that.
        
           | edgyquant wrote:
              | Is this entirely due to economies of scale?
        
             | minimaxir wrote:
              | Not entirely. There are _many_ ways to optimize models for
              | deployment, both at the software and hardware levels: some
              | are public, such as quantization/fusing; others are not.
        
             | tehsauce wrote:
             | Economies of scale and years of optimization, some of which
             | likely have not been publicly shared.
        
           | tehsauce wrote:
            | The reason the similarly sized open source models aren't as
            | good is that GPT-3 has had a two-year head start of
            | additional development and improvement. The GPT-3 that is
            | available today is much better than what was initially
            | produced in 2020.
        
       | mad_ned wrote:
       | More Marshmallows!
        
       | hcks wrote:
       | I really wonder about the economics and business models of large
       | models APIs, mainly because of:
       | 
       | * large running costs due to the expensiveness of the inference
       | 
        | * low barriers to entry: the tech behind DALL-E appeared 1.5
        | years ago, if I recall right, and there are already a few
        | competitors (Midjourney, Imagen, Stable Diffusion)
       | 
       | * low value at the present time of the raw APIs
        
       | vanadium1st wrote:
       | Why is it still closed for users in Ukraine? Are there many other
       | countries excluded, or are we the only unlucky ones?
        
         | jacobr1 wrote:
         | I'm not sure about OpenAI's internal policies, but I've worked
         | for several SaaS companies that complied with Export
         | Administration Regulations (EAR) to limit access to "Embargoed
         | and Sanctioned Countries" which has meant CRIMEA - REGION OF
         | UKRAINE, CUBA, IRAN, NORTH KOREA, and SYRIA. ITAR (defense
         | related) and OFAC (finance related) both have further
          | restrictions. For us, it was easier to block all of Ukraine
          | than to limit access to Crimea in particular.
        
         | jon_adler wrote:
          | Trade compliance requirements may require embargoing occupied
          | Ukrainian territories; however, it isn't necessarily possible
          | to distinguish between occupied and unoccupied regions. Export
          | controls might be applied based on geoIP with granularity at
          | the country level. Unfortunately, the safest position from a
          | compliance perspective is to block the entire country rather
          | than risk getting it wrong.
        
         | mccorrinall wrote:
         | Microsoft does microsoft things.
        
       | minimaxir wrote:
        | This is actually a big deal, since it includes the davinci
        | model everyone uses, whose cost was the biggest obstacle to
        | affordable deployment of GPT-3.
       | 
        | The exemption for finetuned models is interesting: since they
        | were priced at 2x the old base rates, they're now 4x-6x more
        | expensive than the base models. I bet that's how OpenAI _really_
        | makes its money with enterprise.
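The 4x-6x figure follows from the comment's own premises: fine-tuned rates are unchanged at 2x the old base rates, while base rates are dropping to between 1/2 and 1/3 of their old values. A quick, purely illustrative sketch of that arithmetic:

```python
# Fine-tuned price stayed at 2x the OLD base price; the new base price is
# the old base price divided by the cut factor (2x to 3x depending on model).
finetune_multiplier = 2
ratios = []
for base_cut in (2, 3):
    # finetuned / new_base = (2 * old_base) / (old_base / base_cut)
    ratios.append(finetune_multiplier * base_cut)
print(ratios)  # [4, 6] -> "4x-6x more expensive than the base models"
```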
        
         | dr_dshiv wrote:
         | Any good use cases for fine tuned models? I've been wanting to
         | put in all my previous academic works and references and see
         | how that changes things.
         | 
         | Otherwise, I've been using a model finetuned on the complete
         | works of Plato and I have really been enjoying the new
         | dialogues.
        
           | minimaxir wrote:
           | I made a blog post just last week about finetuned GPT-3,
           | albeit only on the lower end and not finetuning davinci:
           | https://news.ycombinator.com/item?id=32471208
           | 
           | I suspect some of the more custom "be a better writer"
           | services use a finetuned davinci.
        
             | teaearlgraycold wrote:
             | CopyAI uses fine tuning extensively
        
       | arbol wrote:
       | Any chance of this making GitHub copilot cheaper?
        
         | learndeeply wrote:
         | Out of curiosity, is $10/month prohibitively expensive for you
         | or other developers? Copilot is free for open source developers
         | and students.
        
         | karamanolev wrote:
          | Does Copilot use GPT-3 directly? I haven't found resources on
          | this, but it seems doubtful.
        
           | simonw wrote:
           | It uses Codex, which is a descendant of GPT-3:
           | https://openai.com/blog/openai-codex/
        
       | lukev wrote:
       | This makes me sad. I hoped to see less use of language models in
       | the wild; looks like we're heading the other direction.
       | 
       | I was recently able to articulate to myself why I dislike
       | language models like GPT-3 so much.
       | 
        | Language is fundamentally about _communication_. I have a mental
        | state, and language lets me share some of that mental state
        | with another human being. It's quite remarkable, and the
        | cornerstone of pretty much every aspect of our lives as social
        | creatures.
       | 
        | GPT-3 can generate text that's convincingly human-like; it's an
        | amazing technical achievement. But (so far, anyway!) it is _just_
        | that: a generator of plausible text. To borrow a philosophical
        | term, there's no "interiority" there.
       | 
       | You can see this clearly in GPT-3's output; it is at its weakest
       | when writing about the relationships between a set of concepts,
       | or when trying to carry a line of thought or argument through a
       | longer text. This is because there is no underlying _thing_ that
       | is being communicated; no mental model or grounding concept. Just
       | text, generated solely based on plausibility in a given context.
       | 
        | So while GPT-3 is truly remarkable in its ability to generate
        | _text_, it cannot, by definition, _communicate_ (except on the
        | meta level, via prompt selection and output curation).
       | 
       | Does the world really need more words but with less
       | communication?
        
         | iLoveOncall wrote:
         | > So while GPT-3 is truly remarkable in its ability to generate
         | text, it cannot, by definition, communicate
         | 
          | GPT-3 does nothing without a human telling it what to output.
          | GPT-3 is as much a tool for communication as sounds, words,
          | symbols, etc.
         | 
         | Whether it was carved in stone by a human hand or generated on
         | the cloud by an AI, the intent to communicate is always coming
         | from a human.
         | 
          | If you think the text generated by GPT-3 does not carry the
          | right meaning or has "no underlying _thing_ that is being
          | communicated", it just means that the human behind it is not
          | using the tool properly. Just like sometimes you use a word
          | that doesn't mean what you think it means.
        
         | ComplexSystems wrote:
         | I thought this was interesting so I literally pasted the entire
         | thing into OpenAI's playground and asked DaVinci to write a
         | rebuttal of your argument. Here is what it said:
         | 
         | ---
         | 
         | There are a few problems with LukeV's argument. First, he
         | conflates the ability to generate human-like text with the
         | ability to communicate. Just because GPT-3 can generate text
         | that sounds like it was written by a human does not mean that
         | it cannot communicate. In fact, GPT-3 is quite good at carrying
         | a line of thought or argument through a longer text, as
         | evidenced by the fact that it is often used by writers to
         | generate ideas and outlines for their articles.
         | 
         | Second, LukeV claims that GPT-3 is at its weakest when writing
         | about the relationships between a set of concepts. However,
         | this is not necessarily true. GPT-3 may not be able to generate
         | text that is perfectly grammatical or that makes perfect sense,
         | but it is often able to generate text that is nonetheless
         | meaningful and insightful. For example, GPT-3 has been used to
         | generate essays on complex topics such as the philosophy of
         | mind, and these essays have been well-received by readers.
         | 
         | Third, LukeV argues that the world does not need more words,
         | but this is clearly not the case. The world needs more
         | communication, and GPT-3 can help to facilitate this by
         | generating text that is meaningful and insightful. In fact,
         | GPT-3 is already being used by writers to generate ideas and
         | outlines for their articles, and it is likely that this use
         | will only increase in the future.
         | 
         | In conclusion, LukeV's argument against GPT-3 is flawed. GPT-3
         | is a valuable tool that can help to facilitate communication,
         | and the world needs more communication, not less.
         | 
         | ----
         | 
          | I'm not sure who I agree with, but that's GPT-3's response to
          | you regarding your critique of itself. Have at it!
        
           | WithinReason wrote:
            | I noticed that when GPT-3 has trouble continuing a sentence
            | or making a difficult argument, it relies on filler words to
            | stretch out a sentence until it has no choice but to write
            | something meaningful.
        
           | reidjs wrote:
            | I'm siding with GPT in this debate. GPT is a writing tool
            | like MS Word, Grammarly, or the printing press. It's
            | innovative, not destructive.
        
           | stephendause wrote:
           | The argument seems very poorly reasoned to me.
           | 
           | > First, he conflates the ability to generate human-like text
           | with the ability to communicate.
           | 
           | This is incorrect. In fact, lukev does the opposite. That is
           | one of the fundamental parts of his argument -- that those
           | two things are different.
           | 
           | > Second, LukeV claims that GPT-3 is at its weakest when
           | writing about the relationships between a set of concepts.
           | However, this is not necessarily true. GPT-3 may not be able
           | to generate text that is perfectly grammatical or that makes
           | perfect sense, but it is often able to generate text that is
           | nonetheless meaningful and insightful. For example, GPT-3 has
           | been used to generate essays on complex topics such as the
           | philosophy of mind, and these essays have been well-received
           | by readers.
           | 
           | I believe this is the best paragraph of the argument, but it
           | is also the most difficult to rebut because it is rather
           | subjective. Whether a text is meaningful or insightful is up
            | to the individual reading it. I would be surprised, however, if
           | experts on the philosophy of mind found the essays mentioned
           | by GPT-3 to be meaningful or insightful -- as opposed to
           | regurgitations of concepts that are in the texts that have
           | been fed to GPT-3's model.
           | 
           | > Third, LukeV argues that the world does not need more
           | words, but this is clearly not the case. The world needs more
           | communication...
           | 
           | The presence of more words in the world does not necessarily
           | entail more communication. The two concepts are fairly
           | closely related but are definitely not synonymous.
           | 
           | This doesn't necessarily mean that I agree with lukev, of
           | course. I do think that GPT-3 as it currently exists should
           | not be used to generate text that is meant for human
           | consumption without being edited first. I can certainly see
           | how it would be useful as a tool for generating an outline of
           | an essay, especially with well-crafted prompts.
           | 
           | I am of course open to more arguments and data on the subject
           | =)
        
       | AJRF wrote:
       | > We have been looking forward to reducing pricing for a long
       | time. Our teams have made incredible progress in making our
       | models more efficient to run, which has reduced the cost it takes
       | to serve them, and we are now passing these savings along to our
       | customers.
       | 
       | Nothing to do with Stable Diffusion then, no?
        
         | tvhahn wrote:
            | Why use a GPT-3 model when you can use the free BLOOM
            | model? Large "foundation" models will become a commodity -- a
            | race to the bottom.
        
           | grej wrote:
           | I have used BLOOM a little, and I struggle to get it to
           | produce the same kind of output as GPT-3. Anyone have tips
           | there? It often seems to not keep the right context of the
           | previously generated text.
        
             | aljungberg wrote:
              | GPT-3 has been fine-tuned after release to better interpret
              | prompts (see InstructGPT). Perhaps BLOOM is more like the
              | original GPT-3: a little more raw, and requiring better
              | prompt engineering?
             | 
             | In my small amount of testing of Bloom so far it seems
             | capable of advanced behaviour but it can indeed be trickier
             | to coax that out. Playing with temperature and sampling
             | matters for sure.
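The temperature knob mentioned above just rescales the model's next-token logits before sampling: lower values sharpen the distribution toward the top token, higher values flatten it. A minimal numpy sketch of that rescaling (not BLOOM-specific; the logits are made up):

```python
import numpy as np

def temperature_sample_probs(logits, temperature):
    """Softmax over logits / T: T < 1 sharpens the distribution, T > 1 flattens it."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    p = np.exp(scaled)
    return p / p.sum()

logits = [2.0, 1.0, 0.1]                      # made-up next-token scores
cold = temperature_sample_probs(logits, 0.5)  # low T: top token dominates
hot = temperature_sample_probs(logits, 2.0)   # high T: more diverse samples
print(cold, hot)
```

Sampling from the "hot" distribution gives more varied (and more error-prone) continuations, which is why tuning it per-prompt matters.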
        
         | lee101 wrote:
          | Yes, I think this move is a forced hand by competitors like
          | https://text-generator.io etc., which is already 10x cheaper.
          | 
          | If the open source community and other companies didn't follow
          | along, then we would still be being price gouged in a monopoly
          | situation. Hopefully the same thing happens to DALL-E soon.
        
         | JimmyAustin wrote:
         | Given that Dall-E 2's pricing hasn't changed, probably not.
        
           | aaroninsf wrote:
           | Now that StableDiffusion is released, I expect that to
           | change, unless there is a (certainly ultimately doomed)
           | attempt to Hold the Line.
        
         | choppaface wrote:
          | It's not just Stable Diffusion the model; the research is
          | becoming actually _open_, unlike "open ai".
        
         | cma wrote:
         | Could be the GPU price crash as well (even if they use Google's
         | TPUs, TPU demand would presumably go down in a GPU price
         | crash).
        
           | tomComb wrote:
           | Microsoft bought control of OpenAI so I believe this would
           | all be running on Azure (so would not be using Google TPU's).
        
       | intrasight wrote:
        | Honest question: what does GPT-3 add to my toolbox as a
        | developer?
        
         | ivalm wrote:
          | Maybe not for a developer, but for an AI-based startup:
          | 
          | 1. Generate synthetic data that is well aligned to your needs.
          | With careful prompting + ensembling + after-the-fact human
          | filtering you can generate a lot of very particular human-like
          | data that you can then use to train/etc. your product.
         | 
         | 2. Generate labels. gpt-3 can give pretty good NLU results
         | through appropriate prompting. You can do multiple prompts +
         | ensembling to get very good labels on free text (sentiment,
         | entity linking, intent, etc).
         | 
          | In both of the above use cases you can actually avoid
          | deploying gpt-3 as part of the client-facing product, and
          | instead leverage gpt-3 to train smaller "on-rails"
          | models/rules/etc.
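The labeling use case in point 2 usually amounts to a few-shot prompt whose completion is a single parseable label. A hypothetical sketch of the prompt side (the examples and wording are invented; the actual API call is omitted):

```python
# Hypothetical few-shot labeling prompt: the completion, e.g. " positive",
# would be stripped and used as a training label for a smaller supervised
# model, as described above.

FEW_SHOT = [
    ("The checkout flow keeps crashing.", "negative"),
    ("Support resolved my issue in minutes.", "positive"),
]

def build_label_prompt(text):
    """Build a sentiment-labeling prompt with a few worked examples."""
    lines = ["Label the sentiment of each message as positive or negative.", ""]
    for msg, label in FEW_SHOT:
        lines += [f"Message: {msg}", f"Sentiment: {label}", ""]
    lines += [f"Message: {text}", "Sentiment:"]
    return "\n".join(lines)

prompt = build_label_prompt("Love the new dashboard!")
```

Ending the prompt with "Sentiment:" constrains the model toward emitting just the label, which keeps downstream parsing simple.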
        
         | thorum wrote:
         | GitHub Copilot is built on GPT3.
        
         | powerhour wrote:
         | I wonder if anyone has successfully used it to create library
         | documentation. Obviously you'd have to tweak whatever output
         | you get but can GPT-3 provide a substantial starting point?
        
           | ianbutler wrote:
           | I've used it in a previous tool to generate documentation
           | from code snippets, it works out pretty well.
        
           | moffkalast wrote:
           | Yes, there are several VScode plugins for it. Not sure if
           | they use GPT-3 specifically or one of the slightly smaller
           | versions though.
        
           | guipsp wrote:
           | One thing to note (without commenting if this applies in this
           | case or not) is that sometimes a bad starting point is worse
           | than no starting point
        
         | simonw wrote:
         | We're still figuring that out. It's a really fascinating piece
         | of tech, and has all kinds of non-obvious applications.
         | 
         | I wrote about using it to explain code (and mathematical
         | formulas and suchlike) a few weeks ago:
         | https://simonwillison.net/2022/Jul/9/gpt-3-explain-code/
         | 
         | I've been experimenting with using it to build a human-
         | language-to-SQL tool, so that people can ask questions of their
         | data "what country had the highest GDP in 2019" and it will
         | turn them into the correct SQL query, given a table schema. I'm
         | still iterating on this but it's shown some very promising
         | initial results.
         | 
         | I use it a lot when I need to get something small working in a
         | language that I don't have day-to-day familiarity with. "Write
         | a bash script that loops through every MOV file in this folder
         | and extracts the audio as MP3" is a good example of that kind
         | of prompt.
         | 
         | Riley Goodside on Twitter posts weird and interesting new
         | things you can do with GPT-3 on a daily basis:
         | https://twitter.com/goodside/ - his instructional template
         | trick was on HN the other day, it's really clever:
         | https://news.ycombinator.com/item?id=32532875
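The natural-language-to-SQL idea above boils down to a prompt that pairs a table schema with the user's question and asks for a query. A rough sketch of that prompt construction (hypothetical, not simonw's actual tool; the completion call is omitted):

```python
def build_sql_prompt(schema, question):
    """Pair a table schema with a natural-language question and ask for SQL."""
    return (
        f"Given this SQLite table:\n{schema}\n\n"
        f"Write a SQL query to answer: {question}\n"
        "SQL:"
    )

schema = "CREATE TABLE countries (name TEXT, gdp_usd REAL, year INTEGER);"
prompt = build_sql_prompt(schema, "What country had the highest GDP in 2019?")
# A davinci completion for a prompt like this might look like:
#   SELECT name FROM countries WHERE year = 2019
#   ORDER BY gdp_usd DESC LIMIT 1;
```

Including the schema is what lets the model use the real column names instead of guessing them.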
        
       | a-dub wrote:
       | it's mostly just kinda like this https://youtu.be/5KMzpZjhUiQ
        
       | dbrueck wrote:
       | Is it just me, or are phrases like "3x cheaper" hard for other
       | people to grok too? For some reason they always require me to
       | pause and try to sort it out. Like, is the price being reduced by
       | 66% or 75% or something else? What would "1x cheaper" mean?
        
         | redox99 wrote:
         | I process "3x cheaper" as "for the same money, I can get 3x the
         | original amount".
         | 
         | Which would be 33% the original price.
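Taking davinci as a concrete case (its listed price went from $0.06 to $0.02 per 1K tokens with this change), the two readings agree: paying a third of the price and getting 3x the tokens per dollar are the same thing. A quick check:

```python
old_price = 0.06  # davinci, $ per 1K tokens before September 1
new_price = 0.02  # davinci, $ per 1K tokens after

tokens_per_dollar_ratio = old_price / new_price  # 3x as many tokens per dollar
discount = 1 - new_price / old_price             # fraction knocked off the price
print(f"{tokens_per_dollar_ratio:.0f}x the tokens, {discount:.0%} off")
```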
        
           | [deleted]
        
           | noja wrote:
           | The problem is the word "cheaper".
           | 
           | For the opposite word, "expensive", it all depends on the
           | word before it: "as expensive" or "more expensive".
           | 
           | "100 widgets cost $100. Unit price is $1." They are now twice
           | _as_ expensive -> they now cost $200. Unit price is now $2.
           | 
           | "100 widgets cost $100. Unit price is $1." They are now two
           | times _more_ expensive -> they now cost $300. Unit price is
           | now $3.
           | 
              | But how do you do this with the word "cheap"? Does "3x
              | cheaper" clearly mean "3x as cheap" or "3x more cheap"
              | (which even sounds a bit wrong)? I guess "more cheap"
              | means "cheaper".
           | 
              | Better to avoid the problem and say "they are now two
              | times the cost" or "half the cost", which is much clearer.
        
             | rfiat wrote:
             | > For the opposite word, "expensive", it all depends on the
             | word before it: "as expensive" or "more expensive".
             | 
             | Frustratingly, I find this isn't always true in practice.
             | Lots of people use "x times as y" and "x times more y"
             | interchangeably. To avoid ambiguity I try to only use the
             | former in any context where precision is useful.
        
               | noja wrote:
                | True. But people also say "percentage" when they mean
                | "percentage points". Once they notice the difference,
                | they tend to get it right.
        
           | synu wrote:
           | What would 1x cheaper mean? 100% of the original price?
        
             | [deleted]
        
             | mattnewton wrote:
             | Yes?
        
             | shard wrote:
                | That reminds me of the sign "Buy one, get one". Does
                | that mean you pay for an item and you get that item, or
                | you pay for an item and get another item for free? I
                | expect the meaning to be the second one, but my brain
                | parses it as the first.
        
               | mattnewton wrote:
               | That's because it they have left off the word "free" -
               | it's just an odd shortening of the less ambiguous "buy
               | one get one free", probably made by someone who is very
               | familiar with seeing "buy one get one free"
        
         | squeaky-clean wrote:
          | To OpenAI's credit, I don't see that text anywhere on the page
          | or in the page title. The only info I see on the page about
          | the price change is:
         | 
         | > We're making our API more affordable on September 1, thanks
         | to progress in making our models run more efficiently.
         | 
          | If you visit the page, the price change is pretty clear: they
          | show the before and after. 1k tokens will cost anywhere from
          | 1/2 to 1/3 of the current price depending on which model
          | you're using.
        
         | solarkraft wrote:
         | Yep, every time.
        
         | papandada wrote:
         | From my copy editing days, it's drilled into me to never write
         | "n times less" -- I get that it means "1/n" and that's how I'd
         | rewrite it, but yes, it's notoriously confusing, or at least
         | imprecise.
        
           | copperx wrote:
           | I think it's very precise. Just multiply the last cheapness
           | value times three. If last month there was a 10% discount,
           | today's discount is 30%.
        
             | papandada wrote:
             | How does that apply here?
        
         | epicide wrote:
         | For me, I tend to think of "3x" and "cheaper" as separate
         | operations. So "3x cheaper" sounds like you're saying
         | _new_price = price - 3*price_.
         | 
         | That (usually) makes no sense, so I assume people mean
         | something else when they say it, but I never have any clue
          | what. Is it a third of the original price? Is it two thirds
          | (_new_price = price - price/3_)?
         | 
         | Percentages also work this way. If I say "10% cheaper", most
         | people understand that I mean the price is 90% of what it was
         | previously. Although percentages get weird when increasing the
         | price.
         | 
          | e.g. if it was a flat doubling, I would say that as "200% of
          | the price" (_new_price = price*(200/100)_).
         | 
          | However, I take "200% _more_ expensive" to mean _new_price =
          | price + price*(200/100)_. To me, the "more" implies addition
          | specifically.
        
           | Judgmentality wrote:
           | https://xkcd.com/985/
        
           | Jensson wrote:
            | People mix up these two:
            | 
            | "It's cheaper, you get 3 times as much for the same price"
            | 
            | "It's cheaper, it costs 66% less"
            | 
            | Most people aren't well versed in math and just say
            | something that kinda makes sense to them based on the
            | numbers they know.
        
         | JRandomHacker42 wrote:
         | One of my former colleagues who was always in charge of
         | publishing our team's performance testing results was a
         | complete hardass on the rest of us for using proper language
         | around multipliers and superlatives. It's well worth the
         | clarity to avoid precisely this situation.
        
         | kredd wrote:
         | I usually read that as "new price = old price / X" where X =
         | "3" in this scenario. In my head, "1x cheaper" would mean "new
         | price = old price / 1", so no reduction, I guess.
        
           | cbm-vic-20 wrote:
           | 3x cheaper = (1/3)x more expensive. I suppose that also means
           | when tripling the price of something we can say it's just
           | (1/3)x cheaper.
        
         | dr_dshiv wrote:
         | I know what you mean. 1x cheaper would be same price. 3x
         | cheaper means "the new price times 3 equals the old price."
         | However this is easier than percents or fractions. 100% cheaper
         | means free. Do you want something 67% cheaper, 2/3s cheaper or
         | 3x cheaper?
        
           | floxy wrote:
           | >1x cheaper would be same price.
           | 
           | Let's say you have a product for sale at $32. Then you have a
            | sale to boost name recognition, and have a coupon for $32
            | off (limited time offer). Or equivalently, that would
           | be $32 cheaper. So the net price is free ($0). $32 cheaper =
           | 100% cheaper = 1x cheaper. Likewise, 25% cheaper = 25% less
           | expensive, or doing the math $32 - 0.25*$32 = 0.75*$32 = $24.
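The arithmetic above, spelled out in Python with the same hypothetical $32 price:

```python
price = 32.0

# "$32 cheaper" (a $32 coupon): subtract the full price -- the item is
# free, i.e. 100% cheaper.
with_coupon = price - 32.0         # 0.0

# "25% cheaper" = "25% less expensive".
discounted = price - 0.25 * price  # 0.75 * 32 = 24.0
```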
        
             | [deleted]
        
             | dr_dshiv wrote:
             | That's precisely why it's confusing, yes. You can't equate
             | 100% cheaper with 1x cheaper. 99% cheaper is sensible. 200%
             | cheaper is not. 2x cheaper is. I'd love a cogent reason to
             | explain this convention.
        
               | Jensson wrote:
               | > I'd love a cogent reason to explain this convention.
               | 
               | "cheaper" isn't a defined operation except for "a is
               | cheaper than b".
               | 
               | So the statements "6% cheaper" and "2 times cheaper" are
               | shorts for "it is cheaper, it costs 6% less" and "it is
               | cheaper, you get 2 times more for the same amount of
               | money".
        
               | floxy wrote:
               | >2x cheaper is <sensible>.
               | 
               | No, it isn't sensible, and is certainly confusing, and
               | that is why you should commit to never using that
               | phrasing from this day forward. It isn't like there is a
               | hardship you have to overcome when using precise
               | language. There are plenty of alternative ways to state
               | the desired objective that are clear and unambiguous:
               | - The product is now half-priced!       - The item costs
               | half as much!       - The unit comes with a 50% discount!
        
               | ckosidows wrote:
               | I think 1x is baseline. If you can buy two for the
               | original price, it's 2x cheaper. 10 for the same price?
               | 10x cheaper.
               | 
               | The parent you're replying to seems to have things
               | confused. 100% off isn't 1x cheaper -- it's infinitely
               | cheaper.
        
         | osigurdson wrote:
          | It probably means that it has 3x better value. If I spent $1
          | previously to do some task, I can now do 3x as much for the
          | same $1.
         | 
         | I think this is more intuitive for most people than using
         | percent difference for the same reason using a raw ratio (speed
         | up) is better for expressing performance improvement.
        
         | throwawaycuriou wrote:
          | Contrarily, I wish people used 'Nx' more often, especially
          | '10x' in place of 'an order of magnitude'. It's more
          | welcoming and less pretentious.
        
           | bitdivision wrote:
           | I'm not sure about using 10x in place of 'an order of
           | magnitude'. An order of magnitude conveys an approximation,
           | whereas 10x is precise.
        
             | IncRnd wrote:
             | 10x is literally the difference of an order of magnitude.
             | An order of magnitude is an exponential change of plus or
             | minus 1 in the       value of a quantity or unit. The term
             | is generally used in conjunction with       power-of-10
             | scientific notation.            Order of magnitude is used
             | to make the size of numbers and measurements of
             | things more intuitive and understandable. It is generally
             | used to provide       approximate comparisons between two
             | numbers. For example, if the       circumference of the Sun
             | is compared with the circumference of the Earth,       the
             | Sun's circumference would be described as many orders of
             | magnitude       larger than the Earth's. [1]
             | An order of magnitude is an approximation of the logarithm
             | of a value relative to some       contextually understood
             | reference value, usually 10, interpreted as the base of the
             | logarithm and the representative of values of magnitude
             | one. [2]
             | 
             | [1] https://www.techtarget.com/whatis/definition/order-of-
             | magnit...
             | 
             | [2] https://en.wikipedia.org/wiki/Order_of_magnitude
        
               | bitdivision wrote:
               | Yes, an order of magnitude literally means 10x. But in my
               | experience, in common speech it's often used to convey an
               | approximation. i.e. 'Changing this will decrease
               | performance of that endpoint by an order of magnitude',
               | when it's really somewhere around 10x. If someone said
               | 'decrease performance by 10x' that seems much more
               | concrete to me.
        
               | IncRnd wrote:
               | Not exactly. An order of magnitude change doesn't always
               | mean 10x in base 10! 15 and 150 differ by an order of
               | magnitude but so do 15 and 180.
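One common convention (an assumption here, not the only one) is to round the base-10 log of the ratio, which is why 15 vs. 180 can still count as a single order of magnitude:

```python
import math

def orders_of_magnitude(a, b):
    """Rounded base-10 logarithm of the ratio between two values."""
    return round(math.log10(b / a))

# 15 -> 150 is exactly 10x; 15 -> 180 is 12x, but log10(12) ~ 1.08,
# which still rounds to one order of magnitude.
```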
        
               | sebzim4500 wrote:
               | Yeah but it's like the difference between someone saying
               | that their height is 1.8m and saying it is 1.800m. They
               | imply different levels of precision.
        
               | [deleted]
        
             | gausswho wrote:
              | To a mathematician the trailing 0 signals the author
              | intends more precision, but this is not colloquial use.
        
         | umanwizard wrote:
         | Consider "cheapness" to be the reciprocal of price; thus,
         | something that costs $30 has a cheapness score of 1/30
         | antidollars.
         | 
         | For this item to become 3x cheaper would mean for its cheapness
         | score to be multiplied by 3, thus 1/10 antidollars. Thus "3x
         | cheaper" means the same thing as "price multipled by one
         | third".
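The reciprocal reading above, as a short sketch (the $30 price is from the comment):

```python
price = 30.0
cheapness = 1 / price          # 1/30 "antidollars"

# "3x cheaper": multiply the cheapness score by 3...
new_cheapness = 3 * cheapness  # 1/10 antidollars

# ...which is the same as dividing the price by 3.
new_price = 1 / new_cheapness  # 10.0
```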
        
           | copperx wrote:
            | I like this hypothesis; however, if that was indeed the
            | original meaning, it would make more sense to advertise it
            | as a 33% discount, or as slashing the price by 1/3. "3x
            | cheaper" seems to be aimed at the average Joe shopping for
            | shoes, not at a developer who knows arithmetic.
        
             | umanwizard wrote:
             | It's not the same thing as a 33% discount, nor slashing the
             | price by 1/3. It's slashing the price by 2/3.
             | 
             | A 33% discount would be "1.5x cheaper" by my analysis.
        
         | IncRnd wrote:
         | No, it's not just you. When you "times" something it is
         | multiplication, not division. The phrasing is a logical
         | impossibility, where the reader has to guess the intent of the
         | author - of which there are several interpretations. The key
         | here is that this is a language "thing" only and certainly not
         | a construct of logic or math.
         | 
         | In the headline of this page, it wouldn't have taken but a
         | moment to write a phrasing that would be understandable to all
         | readers of the page. It could read, "GPT-3 will be Two Thirds
         | Less," or, "GPT-3 will cost One Third the Price."
         | 
         | This example from a non-native English speaker shows the
         | difficulties of having several possible interpretations, at
         | least three! [1]
         | 
         | [1] https://forum.wordreference.com/threads/%C2%ABx-times-
         | less%C...
        
           | hiccuphippo wrote:
            | I wonder how GPT-3 understands that phrase.
        
           | sebzim4500 wrote:
           | Ironically GPT-3 probably understands this headline fine.
           | (For a suitably generous definition of understands).
        
         | copperx wrote:
         | It's hard to understand because "cheaper" only makes sense as a
         | relative comparison.
         | 
          | Say the service was 25% cheaper than the nominal price last
          | month; it would now be 0.25 * 3 = 75% cheaper than the
          | nominal price. However, it would make more sense to
          | advertise that as a 75% discount off the nominal price, so
          | probably that's NOT what they meant.
         | 
         | That's the only explanation that makes sense arithmetically, I
         | believe. There cannot be another interpretation of "3x cheaper"
         | unless you reinterpret the word "cheaper."
        
       | miohtama wrote:
       | Is this free markets in action, subvention by VC money or both?
        
         | aabhay wrote:
         | It's an indication that OpenAI picked the wrong place to
         | monetize. Picks and shovels are rapidly falling to zero margin.
        
         | itake wrote:
         | My guess is lower pricing makes the product more accessible to
         | more people (and business models).
        
         | thorum wrote:
         | I'm sure those are factors, but it's mainly (1) GPU prices have
         | dropped significantly, and (2) a lot of progress is happening
         | in optimizing large language models to be more efficient while
         | maintaining performance.
        
       | imdsm wrote:
       | Great to see. GPT-3 has been very useful to me, though looking
       | forward to whatever the next version is.
        
       | londons_explore wrote:
       | I'm really surprised they don't charge a very high price with a
       | generous free tier, and then have an army of salespeople who will
       | offer you whatever discount is necessary to make your business
       | model work.
       | 
       | If you want to use GPT-3 to build an AI-lawyer, it'll be very
       | expensive... but to build a tool for customer support for walmart
       | will be cheap, etc.
        
         | gkoberger wrote:
         | Do they need an army of salespeople, or just a finely tuned
         | GPT-3 bot?
        
         | choppaface wrote:
         | The OpenAI CEO Sama is primarily an investor, not an operator,
         | so the model is tuned towards giving small companies an
          | opportunity versus enticing enterprise customers. Sama wants
          | somebody to figure out how to run & productize his business
          | rather than serving actual customers. In particular, because
          | of efforts like Stable Diffusion, it's going to be a lot
          | easier for enterprise to build off tech that's _actually_
          | Open.
        
         | lee101 wrote:
          | If you're looking for a free tier that's always there, check
          | out https://text-generator.io which has 100 free requests a
          | month. The free tier really helps people get started with it
          | or run demos etc. without needing to commit to anything.
        
         | thdxr wrote:
         | making things cheap encourages experimentation and
         | paradoxically increases total spend
        
       | rvz wrote:
       | Or how about opening everything up and releasing all the models
       | as open source?
       | 
        | Since even with these price decreases, lots of open source
        | competitors are out there with more parameters, already match
        | or are just as good as GPT-3, and are generally running
        | OpenAI's pricing into the ground.
       | 
       | Might as well open everything up since you can't compete with
       | free anyway.
        
         | colejohnson66 wrote:
          | The free alternatives to OpenAI are not as good as OpenAI. For
          | example, Craiyon (formerly DALL-E Mini) is good, but it's
          | nothing compared to the real DALL-E.
        
           | simonw wrote:
           | That's true for Craiyon, but Stable Diffusion came out today
           | and looks like it may be able to beat DALL-E in terms of
           | quality some of the time:
           | https://github.com/huggingface/diffusers/releases/tag/v0.2.3
        
           | rvz wrote:
            | > For example, Craiyon (formerly DALL-E Mini) is good, but
           | it's nothing compared to the real DALL-E.
           | 
           | Yet Stable Diffusion is already getting open sourced and is
           | just as good, if not better than what I am seeing with DALL-E
           | 2.
        
       | RcouF1uZ4gsC wrote:
       | Just in time for the 2022 Midterms!
        
         | jon_adler wrote:
          | Are any plagiarism detectors capable of recognising generated
          | content? I'm not even sure how they would ever go about it.
        
       | dane-pgp wrote:
       | Is this because GPT-4 is about to be released and they want to
       | charge full price for that?
        
         | layer8 wrote:
         | Maybe demand has slowed and they want to keep it up.
        
           | teaearlgraycold wrote:
           | GPT-J was about the same cost to self-host as GPT-3 (ignoring
           | maintenance costs). They're getting ahead of competitors IMO.
        
         | whoisburbansky wrote:
         | Why wouldn't they just charge more for the better, improved
         | GPT-4?
        
       | lee101 wrote:
        | Also check out https://text-generator.io which is already 10x
        | cheaper... I think they know text generation is no longer a
        | monopoly, hence having to move pricing. The OpenAI margins are
        | still really high, though, honestly.
        
       | RobotToaster wrote:
        | It still bugs me that a closed source company calls its
        | product "Open"AI.
        
         | [deleted]
        
         | Mimmy wrote:
         | Same. What they're doing is cool but the name is sacrilegious.
        
       | cbm-vic-20 wrote:
       | I lack imagination. What is GPT-3 (and generative text models in
       | general) useful for, other than toys? It can write essays, can we
       | expect it to replace writers for newspapers and magazines? Can it
       | write coherent technical documentation? Replace speechwriters?
       | 
       | Edit: looks like the comment thread below is similar
       | https://news.ycombinator.com/item?id=32554955
        
         | dan_mctree wrote:
         | I've used it a bit to help with creativity. For example, I
         | wanted to brainstorm some unit designs and gameplay mechanics
          | for an RTS game I wanted to build. I prompted GPT-3 with an
         | interview with a game designer where unit designs and mechanics
         | were discussed. For a prompt I used a few questions and answers
         | and then a single question I wanted GPT-3 to answer for me.
         | After a few attempts with slight variations it spat out a bunch
         | of ideas for me. While few of the designs were fully worked out
         | and immediately usable, a couple of suggestions by GPT-3 were
         | able to spark enough creativity to create some cool things.
         | 
         | My favorite reply was:
         | 
         | "A: I think the best example of this is the ability to swap
         | positions with a friendly unit. This ability can be used in so
         | many different ways. You can use it to support an ally, or you
         | can use it to escape a tricky situation. You can use it to
         | surprise an enemy by swapping with a unit they didn't expect,
         | or you can use it to put yourself in a better tactical
         | position. It's a simple idea, but it actually turns the whole
         | game on its head."
        
         | dr_dshiv wrote:
         | It's incredibly good at generating scholarly text. Put in a
         | title and abstract and it will come up with a strong intro that
         | is clear and compelling. It helps me write academic papers more
         | clearly, as a result. Also, great for early stages of grant
         | writing or proposals. It came up with a fantastic conference
         | proposal for me that I spent 20 minutes editing and then just
         | sent, unedited.
         | 
         | It will aid speechwriters or help more people write their own
         | speeches, I guess.
         | 
         | Assuming that AI tech is a person replacement is, IMO, the
         | wrong mental model. It helps talented people be more efficient.
         | And will help a lot of hacks "fake it till they make it." Power
         | to the people, if they can handle it.
        
           | dr_dshiv wrote:
           | I also asked it to help me write a letter to my homesick
           | daughter at summer camp. It was really helpful. I feel that
           | GPT3 has an enviable degree of emotional intelligence, nuance
           | and conceptual understanding. Writing and communicating
           | effectively with people is very hard. Our own emotions get in
           | the way. Now we have technology that can help people be
           | better communicators with each other. It's a big deal.
        
             | slavik81 wrote:
             | When you say you're an empath, do you mean to claim that
             | you have psychic powers? An empath is a being from science
             | fiction with the psionic ability to sense emotions, like
             | Counselor Troi on Star Trek. The term originates from the
             | short story "Empath" by J. T. McIntosh, published in New
             | Worlds Science Fiction #50, August 1956.
        
               | dr_dshiv wrote:
               | Yes, that is correct. I have psychic powers, specifically
               | the psionic ability to sense emotions. AMA
        
               | itsyaboi wrote:
               | When did you first become aware of your powers?
        
               | dr_dshiv wrote:
               | It has always seemed natural (to me) to be able to
               | directly feel what other people are feeling. Most people
               | can. Sometimes, however, it can feel shocking when I
               | realize I'm talking to someone who doesn't feel what I'm
               | feeling. Or, that doesn't feel that I'm feeling what they
               | are feeling.
        
             | donkarma wrote:
             | wtf is an empath?
        
           | matthewmacleod wrote:
           | _It came up with a fantastic conference proposal for me that
           | I spent 20 minutes editing and then just sent, unedited._
           | 
           | Was this comment written by GPT-3?
           | 
           | (That said - are _all_ comments written by GPT-3?)
        
             | layer8 wrote:
             | > Was this comment written by GPT-3?
             | 
             | You should submit that to GPT-3 as a prompt.
        
             | [deleted]
        
             | DaSHacka wrote:
             | I assume what was meant was that they spent 20 minutes
             | editing it, before realizing it was perfect as it was
             | originally and sent the unedited copy.
        
           | blagie wrote:
           | I've started asking GPT-3 for relationship advice. It's
           | often... insightful.
           | 
           | There is a bit of prompt engineering. If you start your
           | question with things like "As a qualified therapist, what
           | would you recommend to" .... There is also a bit of
           | engineering with the question. I often ask about an analogous
           | situation, rather than my own.
           | 
           | I'm not batting 100%, but I often get strategies and things
           | to think about I wouldn't have come up with myself.
        
             | dr_dshiv wrote:
             | > I often get strategies and things to think about I
             | wouldn't have come up with myself
             | 
             | Yes. It's a dialectic. Very powerful for personal
             | reflection.
        
         | Traubenfuchs wrote:
         | It's been out 2 years already and the non-toy application with
         | the biggest mass appeal is the glorified autocomplete / google
         | shortcut Copilot.
         | 
         | Anyone being honest should admit it (was) more hot air and hype
         | than reality.
        
           | dr_dshiv wrote:
            | I think GPT3 is just as amazing as DALLE, but gets less
            | public awareness because the things it creates require
            | critical reading.
           | 
           | Which makes me think: everyone needs to teach kids to speed
           | read. And develop their bullshit detectors.
        
         | mg wrote:
         | GPT-3 is useful for a lot of things. It can write essays, it
         | can write technical documentation, it can write speeches, it
         | can write fiction, it can write poetry, it can write jokes, it
         | can write jokes about jokes, it can write jokes about jokes
         | about jokes, it can write jokes about jokes about jokes about
         | jokes, it can write jokes about jokes about jokes about jokes
         | about jokes, it can write jokes about jokes about jokes about
         | jokes about jokes about jokes, it can write jokes about jokes
         | about jokes about jokes about jokes about jokes about jokes.
         | 
         | This answer was written by GPT-J when I gave it the following
         | prompt:
         | 
         | question: I lack imagination. What is GPT-3 (and generative
         | text models in general) useful for, other than toys? It can
         | write essays, can we expect it to replace writers for
         | newspapers and magazines? Can it write coherent technical
         | documentation? Replace speechwriters?
         | 
         | answer:
        
           | layer8 wrote:
           | At least it didn't go into an infinite loop. Maybe there's
           | explicit logic to prevent that.
        
             | keepquestioning wrote:
                | What thresholds are used in generative text to stop
                | generating characters?
        
               | layer8 wrote:
               | Must be a bit more sophisticated than that, as it forms a
               | complete sentence.
        
               | stavros wrote:
               | There's a penalty for generating text it's generated
               | before (two penalties, actually, one for words and one
               | for phrases).
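A toy sketch of the idea (not OpenAI's actual implementation; the token names and penalty values are made up): each token's score is reduced per occurrence so far, plus a flat penalty for having appeared at all.

```python
from collections import Counter

def penalize(scores, generated, freq_penalty=0.5, presence_penalty=0.5):
    """Lower the scores of tokens that were already generated."""
    counts = Counter(generated)
    return {
        tok: s
        - counts[tok] * freq_penalty                  # per-occurrence penalty
        - (presence_penalty if counts[tok] else 0.0)  # flat "seen at all" penalty
        for tok, s in scores.items()
    }

# "the" was emitted twice, so its score drops; "cat" is untouched.
adjusted = penalize({"the": 1.0, "cat": 1.0}, generated=["the", "the"])
```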
        
               | ml_basics wrote:
               | I'm not sure about GPT-3 specifically, but often these
               | models are trained with an explicit "stop token", which
               | the model can output when generating text to indicate
               | that no more text should be produced.
               | 
               | That said, there's no reason it couldn't go into an
               | infinite loop, just that the models are pretty
               | sophisticated and therefore are less likely to do this
               | than earlier generations of models (like predictive text
               | on smart phones in the past)
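A minimal sketch of such a decoding loop (the stop-token string and the stand-in "model" are hypothetical): generation ends either when the model emits the stop token or when a hard length cap is reached.

```python
def generate(next_token, max_tokens=50, stop="<|endoftext|>"):
    """Toy decoding loop: `next_token` stands in for a language model
    that returns one token given everything generated so far."""
    out = []
    for _ in range(max_tokens):
        tok = next_token(out)
        if tok == stop:  # the model chose to stop
            break
        out.append(tok)
    return out

# A fake "model" that stops itself after three tokens.
fake_model = lambda so_far: "<|endoftext|>" if len(so_far) == 3 else "word"
sample = generate(fake_model)  # ["word", "word", "word"]
```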
        
           | jahewson wrote:
           | I had no idea that there were so many jokes about Church
           | encoding.
        
           | moonchrome wrote:
           | >it can write technical documentation
           | 
           | For the love of god please NO.
        
           | nonethewiser wrote:
           | This is hilarious.
        
         | renewiltord wrote:
         | My friend used it to "explain like I'm 5" a medical report he
         | had. I sent that to my dad (a surgeon in that specialty) and he
         | said it was all accurate. I thought that was interesting.
         | 
         | But I think the principal use for me is CoPilot.
        
         | tedsanders wrote:
         | As someone who's used the models a fair bit (I work at OpenAI),
         | I mentally sort applications into four buckets:
         | 
         | - Generative use cases, where you give the model the kernel of
         | an idea and then you curate its output (e.g., blog writing,
         | code completion, etc.)
         | 
         | - Extractive use cases, where you give the model some big piece
         | of text, and then process it in some way (e.g., extract names
         | and addresses, classify it, ask a question about the text)
         | 
         | - Transformational use cases, where you need to fix/adjust a
         | piece of text, or translate from one domain to another (e.g.,
         | sometimes I'll use GPT-3 for little tasks like copying and
         | pasting a table from a presentation and then asking the model
         | to translate it to markdown; saves me a visit to Google and
         | finding some table generator website)
         | 
         | - Comparisons, where you use embeddings to do
         | search/clustering/recommendations over any set of strings
         | (e.g., can combo nicely with the Q&A use case above, where you
         | search over a knowledge base)
         | 
         | I started a repo here with some barebones examples of each:
         | https://github.com/openai/openai-cookbook/
         | 
         | If you're looking for examples of commercial applications,
         | OpenAI published two blog posts highlighting a few:
         | 
         | - GPT-3 use cases (2021): https://openai.com/blog/gpt-3-apps/
         | 
         | - Codex use cases (2022): https://openai.com/blog/codex-apps/
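The "comparisons" bucket boils down to nearest-neighbour search over embedding vectors, typically by cosine similarity. A minimal sketch with made-up 3-dimensional vectors (a real application would get much higher-dimensional vectors from an embeddings API):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for a query and two knowledge-base entries.
query = [0.1, 0.9, 0.2]
docs = {
    "refund policy": [0.1, 0.8, 0.3],
    "gpu pricing":   [0.9, 0.1, 0.1],
}
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
```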
        
           | nicksrose7224 wrote:
           | This is a really nice breakdown -- I made a little side
           | project that falls into your "Extractive" and "Comparisons"
           | bucket: https://www.tripwise.me/
           | 
            | My tool uses GPT-3 to dig through Airbnb reviews to find the
           | "cons" of staying at a particular place and summarize them
           | neatly. You can also ask any question at all in natural
           | language about all the listings you're evaluating.
           | 
           | I'm bookmarking your repo for future reference, this is very
           | useful
        
         | patientplatypus wrote:
        
         | leesec wrote:
          | Right now it's a great writing-assist tool. Eventually all
          | the rest.
        
         | alexb_ wrote:
         | In a few years, when a generative text model that's actually
         | open is made, erotic text will 100% be a main use. But of
         | course people don't like thinking about that...
        
           | SamPatt wrote:
           | I don't think porn is as bad as some people say, but there
           | are downsides, and good quality erotica seems like a
           | healthier alternative.
           | 
           | Whether a human or a computer writes it seems unimportant.
        
         | alecco wrote:
         | Besides all the ideas of well meaning people in HN, I fear
         | GPT-3 (and its competitors) will just bring almost impossible
         | to spot fake Amazon reviews, SEO garbage sites, fake blogs,
         | fake podcasts, etc.
         | 
         | And even worse, now corporations and governments don't even
         | need to have troll farms. Just rent some AI, give it basic
         | directions, and let it loose.
         | 
         | The Dead Internet Theory will become real.
        
           | nicksrose7224 wrote:
            | I think this is a bit pessimistic. You can also use GPT-3 or
           | similar technologies to much more easily train a fine-tuned
           | "real vs fake" review classifier with fewer training examples
           | than was previously necessary.
           | 
           | GPT-3 like all tech is double-edged I think. Has good & bad
           | things about it
        
             | moonchrome wrote:
             | I doubt this will happen - at some point the generated
             | content should be indistinguishable from human written text
             | - and AI can produce it in droves and learn to exploit
             | existing algorithms.
        
           | visarga wrote:
           | Until Google starts to expect better garbage from SEO
           | providers. The bot can simply ask a language model if the
           | page looks well written and truthful. So SEOs will have to
           | spam us with well written useful articles about topics we
           | like.
           | 
           | Maybe Google should require webmasters to mark the automated
           | content with a special HTML tag or attribute. Failure to
           | comply leading to deranking. This would be necessary for the
           | future - to know what data was human generated when training
           | the next model, and to have official reason for penalising
           | spammers who try to pass artificial text as human written.
        
         | kleiba wrote:
         | Language models are few-shot learners.
        
         | [deleted]
        
         | floxy wrote:
         | Maybe someone should ask GPT-3 what it is good for?
        
         | machineleaning wrote:
         | Marketing copy - https://www.copy.ai/ Coding help -
         | https://github.com/features/copilot
        
         | simonw wrote:
         | I wrote a few notes in this comment:
         | https://news.ycombinator.com/item?id=32554955
         | 
         | Here's one that I find personally really interesting:
         | 
         | Imagine you have a disability, and the government agencies
         | responsible for helping you have denied your claim. You need to
         | write them a letter. You don't have much (if any) experience
         | writing letters to government agencies, so you're finding that
         | task challenging.
         | 
          | GPT-3 can write that letter for you, and it will do a really
          | good job of it.
         | 
         | Of course, you need access to GPT-3 and you need to learn how
         | to drive it with prompts, so this isn't necessarily an easy
         | thing. But I think it helps lean in the direction of the kind
         | of benefits people can get from this class of technology
         | outside of just playing with the AI.
        
           | aabhay wrote:
           | How is this different or more useful than providing a
           | template with madlibs-style fill in the blanks?
        
             | dr_dshiv wrote:
             | Because you can tell GPT3 what the concept is -- ie,
             | vaguely what you are going for and it will produce. Waaay
             | more flexible than a template.
        
               | aabhay wrote:
               | Still requires you to have an understanding of the
               | concept, an understanding of how to prompt GPT-3, and how
               | to verify the accuracy/veracity of the claims the AI
               | model makes.
        
               | [deleted]
        
             | simonw wrote:
             | Because providing a template requires someone with
             | knowledge of that particular problem space (in this case
             | "writing letters to government agencies to inquire about
             | disability benefits") to put in the work of designing the
             | template, coming up with likely placeholders and predicting
             | how people might need to use it.
             | 
             | GPT-3 can do a good job of this all today, without any
             | additional training.
             | 
             | An expert could help out by providing a few example
             | prompts, but the result would be a lot more powerful and
             | require a lot less development work to put together than
             | trying to build some kind of placeholder wizard from
             | scratch.
        
               | aabhay wrote:
                | A madlib is more predictable, requires only one
                | example, is easy to understand for people with language
                | issues or disabilities, and is more regulation-safe
                | because it doesn't spit out unpredictable things.
                | 
                | And the madlib is free to use, for everyone.
               | 
                | All things considered, GPT-3 is more of a shibboleth of
                | the AI revolution than an actual one. Much better to give
               | models like GPT-NeoX to talented development teams who
               | can really optimize and fine tune it, attach great
               | sampling strategies to it, and create great products from
               | there. The API is just too limited for me to want to
               | build a company atop it.
        
               | simonw wrote:
               | Right, but someone still has to build it.
               | 
               | If no-one has built the madlib template for "a letter to
               | the council asking about my disability benefits" then
               | madlibs being more predictable isn't going to help you.
        
             | [deleted]
        
           | [deleted]
        
       | unixhero wrote:
        | They got their hands on the latest Nvidia GTX cards, aha I
        | see. I hear new cards are on their way.
        
       | dvfjsdhgfv wrote:
        | The so-called "open" in their name is an oxymoron. The whole
        | thing is a trap. We need to work on our own, possibly publicly
        | funded but genuinely open solutions rather than depend on
        | proprietary ones.
        
         | naillo wrote:
          | We really gotta come up with a different go-to diss than
          | this one. I'm all for putting pressure on them to release
          | weights etc., but they are a lot more open than e.g. Google
          | or Meta, and the whole "ClosedAI" thing is so over-repeated
          | at this point.
        
           | TigeriusKirk wrote:
           | They brought it on themselves. They chose the name OpenAI. It
           | was a terrible choice on their part, and the repeated
           | criticism is a direct result of that choice.
        
             | person3 wrote:
             | I definitely agree, "open" in the name is pretty annoying
             | and wrong. I'm a bit torn though - I tend to think GPT-3 is
             | approaching a level where ML is becoming dangerous. Spam
             | bots + GPT-3 is not my idea of a fun time. So the
             | restrictions they put on usage do seem important.
             | 
             | Regardless of that moral question, I don't think a fully
             | open model on the level of GPT-3 is even possible. Given
             | the required cost to train and the expertise involved, big
             | tech will always be a few years ahead. And it's unlikely
             | they would give it away with how much they invest in
             | creating it. Unless capitalism suddenly ends, I don't see
             | any of the major tech companies parting with state of the
             | art ML.
        
         | naniwaduni wrote:
         | It's not an oxymoron, it's just a lie.
        
         | rvz wrote:
         | It is a known scam executed by the VCs.
         | 
         | Thankfully we have free alternatives to GPT-3 (BLOOM [0]) and
         | DALL-E 2 (Stable Diffusion [1])
         | 
         | [0] https://huggingface.co/bigscience/bloom
         | 
         | [1]
         | https://github.com/huggingface/diffusers/releases/tag/v0.2.3
        
       ___________________________________________________________________
       (page generated 2022-08-22 23:01 UTC)