[HN Gopher] OpenAI and Microsoft extend partnership
___________________________________________________________________
OpenAI and Microsoft extend partnership
Author : hmate9
Score : 390 points
Date : 2023-01-23 14:04 UTC (8 hours ago)
(HTM) web link (openai.com)
(TXT) w3m dump (openai.com)
| rvz wrote:
| 'Extend'
|
| Even capitalized and straight from the horse's mouth, but
| unsurprisingly, as predicted in [0]:
|
| > * OpenAI will gain further investment...
|
| We already know about ChatGPT Pro since that is a paid version of
| ChatGPT coming soon, meaning that the second prediction in [0] has
| also turned out to be true:
|
| > ...ChatGPT by then will become a paid service
|
| Now we wait until the startups depending on the service start to
| raise their prices.
|
| [0] https://news.ycombinator.com/item?id=34201706
| [deleted]
| ocdtrekkie wrote:
| It's been over 25 years. It's really time to let this one go.
| rvz wrote:
| Yes. 25 years, and Microsoft has gotten much smarter with EEE.
|
| Making the best developer tools for free or at least close to
| free is another form of extinguish and they have just done
| that with OpenAI and GitHub.
| ocdtrekkie wrote:
| Nothing in tech is true for a quarter of a century.
| Especially not when the guy who said this left the company
| twenty years ago. It's time to put this one to rest.
| Spivak wrote:
| > Making the best developer tools for free or at least
| close to free
|
| The bastards! Making and releasing high quality developer
| tools. We have to stop them! Has MS actually EEE'd anything
| in the last 25 years? I legit can't think of a single
| formerly-open thing that's been Microsofted via their own
| extensions.
| rvz wrote:
| The methods are different but the strategy is the same.
| They have used this strategy with Windows Subsystem for
| Linux (WSL), VSCode + extensions, Winget, Microsoft
| Teams, GitHub, and now OpenAI.
|
| Microsoft can afford to maintain all these offerings for
| free whilst competitors suffocate and struggle to compete
| with their paid offerings.
|
| You can't compete with free, especially when there is a
| company who can afford to run and host the best tools for
| free for a long time.
| linhns wrote:
| Nothing is free. If it's free, you're the product
| stale2002 wrote:
| You are still missing the point.
|
| A free, quality product is great. We all benefit from
| that.
|
| The concern is the last "extinguish" part at the end.
|
| But there is no reason for Microsoft to shut down GitHub,
| or VSCode or whatever.
|
| Consumers just get to have a great free product.
| labrador wrote:
| Nah, tribalism and talking smack about the other tribe is
| part of human nature
| grogenaut wrote:
| Somewhat off topic: does anyone know how Bing might integrate
| ChatGPT into search? Is it to understand the prompt and filter
| results, taking the question and summarizing it to search the
| index? Is it to summarize all the documents into an index and
| search that? Or to just be like ChatGPT is now and use it to
| generate new results from its knowledge base? I'm trying to
| connect the dots between a generative model like these and how it
| would influence search in the future. Or is Lucene-style index
| search on its way out in a generative world?
| mirker wrote:
| Easiest way would be to classify the query to go to either Bing
| proper or ChatGPT. Example: "What is today's date?" -> Bing;
| "Write a rap song about hippos" -> ChatGPT.
| willyyr wrote:
| Related MS announcement with some more details:
| https://blogs.microsoft.com/blog/2023/01/23/microsoftandopen...
| layer8 wrote:
| "Microsoft will deploy OpenAI's models across our consumer and
| enterprise products and introduce new categories of digital
| experiences built on OpenAI's technology. This includes
| Microsoft's Azure OpenAI Service [0], which empowers developers
| to build cutting-edge AI applications through direct access to
| OpenAI models backed by Azure's trusted, enterprise-grade
| capabilities and AI-optimized infrastructure and tools."
|
| [0] https://azure.microsoft.com/en-us/blog/general-
| availability-...
| raajg wrote:
| I asked ChatGPT to write a poem based on this post. Here's what
| it wrote:
|
| ---
|
| Two hearts, two minds, working side by side
|
| In pursuit of a mission to make AI safe and wide
|
| Microsoft's investment allows us to stay true
|
| To our capped-profit company, our mission, and you
|
| From cloud computing to real-world use
|
| We collaborate to ensure our systems are not abused
|
| OpenAI API, Azure's OpenAI Service, and Designer too
|
| We're working side by side to make our visions come true
|
| Together we review, together we synthesize
|
| Together we build, together we realize
|
| The potential of advanced AI that's powerful and safe
|
| Ensuring its benefits can be shared far and wide
| unity1001 wrote:
| > Microsoft's investment allows us to stay true
| > To our capped-profit company, our mission, and you
|
| Priceless. Maybe companies should get AI to write PR releases.
| They say the same thing, except more honestly and funnier.
| scifibestfi wrote:
| Google recently said they are recalibrating the level of risk
| they're willing to take with releasing this technology. OpenAI's
| CEO responded, saying:
|
| > "recalibrate" means "increase" obviously.
|
| > disappointing to see this six-week development. openai will
| continually decrease the level of risk we are comfortable taking
| with new models as they get more powerful, not the other way
| around.
|
| How does this resolve?
|
| OpenAI not releasing something isn't going to stop Google or
| another player from releasing it. Won't they have to recalibrate
| too or be overtaken? Pandora's box is already open.
| marban wrote:
| Prompt: Create a PR smokescreen in 3 paragraphs with as few facts
| as possible.
| IncRnd wrote:
| Paragraph 1: Our company has recently been at the center of
| some unfortunate rumors and false accusations. However, we want
| to assure our valued customers and shareholders that these
| claims are completely unfounded and without any merit.
|
| Paragraph 2: We are proud of the work we do and the products we
| provide to our customers. We have always maintained the highest
| standards of integrity and transparency in our business
| practices.
|
| Paragraph 3: We ask that the public not be swayed by the
| misinformation being spread by our detractors. We are confident
| that the truth will come out and our reputation will be
| cleared. In the meantime, we will continue to focus on
| delivering the best possible products and services to our
| customers.
| speedgoose wrote:
| You can also do the reverse, taking the long article to extract
| the facts for a toddler:
|
| OpenAI and Microsoft are working together again, Microsoft is
| giving them a lot of money to help them make AI that is safe
| and helpful and they are excited to keep working together.
| rpmisms wrote:
| This feels like an inflection point. Microsoft obviously sees
| huge upside, they're forking over a lot of value for this, and
| they must be expecting huge growth. Maybe Bing will be an amazing
| search engine in a couple years...
| Tempest1981 wrote:
| > Azure's unique architecture design has been crucial in
| delivering best-in-class performance and scale
|
| Is there something unique about Azure, which makes it well suited
| for AI?
| pm90 wrote:
| No. It's all spin. The only uniqueness is the discount and
| preferential treatment they would get. Azure gets a nice
| callout in return.
| MicrosoftShill wrote:
| Azure/O365 admin here. No, nothing special from a technical
| perspective.
|
| The value to OpenAI of going Azure(/Microsoft) exclusive is:
|
| - Money & general financial security forever
|
| - Integration with the largest software suite in the world
| (Windows, O365, Active Directory, etc)
|
| - Rapid push to worldwide enterprise sector
|
| - Rapid push to worldwide gov/mil sector (ChatGPT, coming soon
| to a war near you!)
|
| - Rapid push to education sector
| bfeynman wrote:
| Other than just marketing BS, probably a lack of customers who
| use GPUs; both AWS and GCP have capacity issues. The biggest
| factor is also that Google is a direct competitor (and even
| invented the technology), so obviously OpenAI will not cannibalize
| itself by giving Google more money to train its own models.
| vxNsr wrote:
| That's funny; as the other commenter said, Azure has had some
| crazy capacity issues, but maybe this is their benefit: they
| get first dibs.
| rospaya wrote:
| Don't use GCP but I've only had capacity issues on Azure
| since covid hit.
| whimsicalism wrote:
| Interesting. My pre-COVID experience was that AWS was much harder
| to get GPUs on.
| dharmab wrote:
| You can get really good discounts if you're a major Microsoft
| partner.
| jiggawatts wrote:
| Most of the responses to this are just preconceptions or
| assumptions.
|
| Azure _does_ have unique offerings, such as higher-spec compute
| nodes than AWS and RDMA-capable "HPC" sizes that can be used to
| build mini supercomputers.
|
| AFAIK, no other public cloud has this type of capability.
| tester756 wrote:
| Indeed, it's crazy how people can come on here and claim crazy
| stuff as if they knew the internals of Azure.
| jiggawatts wrote:
| "I like Linux and Amazon, hence Azure must be bad. I
| assume, I've never seen it." is something I've heard
| commonly.
|
| I have a customer that uses both clouds, and they will spend
| weeks spinning up Rube Goldberg machines in AWS even though
| there is a turnkey solution available in Azure.
|
| For example, Elastic File System (EFS) in AWS is only
| compatible with UNIX-like operating systems, because it is
| NFS v4. Azure has both NFS and SMB file shares as a service
| -- the latter is a unique offering. It's not magic, but it
| does eliminate quite a bit of complexity around managing
| large, scalable file server clusters for Windows clients if
| you need that kind of thing.
| frusky wrote:
| [flagged]
| coliveira wrote:
| It will quickly chain you to Microsoft products, that's the
| uniqueness.
| alex_duf wrote:
| In fairness that's also true for AWS and GCP
| sidcool wrote:
| Yep, the discounts.
| wangii wrote:
| In tech history, one party always gets screwed in such a
| partnership. The question is: who?
| ActionHank wrote:
| Microsoft is.
| TekMol wrote:
| Related (?): Is ChatGPT down at the moment?
|
| When I visit it, I am getting a poem about how it is over
| capacity. And that's it.
| dagw wrote:
| _Is ChatGPT down at the moment?_
|
| No, but ChatGPT is at full capacity, which means that it will
| 'randomly' block new connections until load drops. Keep trying
| every 5-10 minutes and you'll eventually get in.
| drexlspivey wrote:
| You can also pay for the new premium subscription for
| $42/month
| codeisawesome wrote:
| Can't seem to do that while the thing is down... wonder why
| they didn't choose to separate the payment UX from the
| actual product.
| ricopags wrote:
| Frustrated with the same question. The service doesn't
| need to be up to have a "buy pro and get in now" option.
| Seems like they're missing out on ready conversions.
| return_to_monke wrote:
| It seems to be constantly that way nowadays. I think they might
| give priority to paying API users, and of course by now there
| has been enough time for applications to implement ChatGPT
| stuff.
| Tempest1981 wrote:
| > Azure's unique architecture design has been crucial in
| delivering best-in-class performance and scale
|
| Is there something unique about Azure, which makes it well suited
| for AI? (vs AWS or GCP)
| aabhay wrote:
| They have ten billion dollars
| ThrowawayTestr wrote:
| 2023 is the year of Bing, calling it now.
| davidy123 wrote:
| That really depends on how much the public accepts results
| without sources, aka no credit, aka rampant breaking of social
| norms and copyright, as well as destroying the easy ability to
| verify something. In many ways, OpenAI and anyone who supports
| it are trying to pull an Uber here, but shift the Overton window
| on something indescribably larger than transportation licenses.
| They want to Borg global intelligence (though, of course, they
| will be in control).
|
| Say what you will about Google, they generally credit their
| sources. Yes, it's part of their advertising model, but it's
| still a Very Good Thing.
|
| I hope that Google's plan is to release something that
| continues this model. If it's anywhere near as good as ChatGPT
| and strongly includes sources, it is the right future.
| spyder wrote:
| OpenAI already has a model to improve factual accuracy and
| provide citations:
|
| https://openai.com/blog/webgpt/
|
| It's probably not too hard for them to tune ChatGPT and the
| upcoming GPT-4 that way, and I think it's very likely they
| will do something like that in Bing.
| MerManMaid wrote:
| >In many ways, OpenAI and anyone who supports it are trying
| to pull an Uber
|
| Can you elaborate here? (Honestly asking since I'm not seeing
| the similarity)
| themacguffinman wrote:
| My understanding of that phrase is that "pull an Uber"
| means "break laws and social norms to more quickly deliver
| a product that beats the status quo". Uber broke laws in
| some regions that required special taxi licenses, and it
| broke social norms by blurring the lines between contractor
| and full-time employees when it took away certain employee
| freedoms common for contractors while not giving them full-
| time employment benefits.
| Spivak wrote:
| I think you're assuming that Bing is just going to let GPT
| blindly answer queries; that's not at all how you build a
| system like that.
|
| How it actually works is more like:
|
| 1. User asks "What's the tallest building in the world?"
|
| 2. MS, rightfully, assumes that GPT has no idea what the
| answer to this is. And even if you trusted it to know, it
| will always lag behind and new buildings could have been
| built since then.
|
| 3. MS searches their index for the most relevant document
| snippets related to this query and feeds it to GPT as
| context.
|
| 4. MS asks GPT to answer the question in the context of those
| document snippets.
|
| 5. MS returns the result from GPT along with references to
| the documents it sourced the information from.
|
| This is how the OpenAI /search endpoint used to work.
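|
| A rough sketch of steps 3-5 in Python, assuming a hypothetical
| search_index() helper that returns (url, snippet) pairs from the
| search index; the prompt format and model name are illustrative:
|
|   import openai  # reads OPENAI_API_KEY from the environment
|
|   def search_index(query):
|       """Hypothetical: return (url, snippet) pairs for the query."""
|       raise NotImplementedError
|
|   def answer_with_sources(query):
|       docs = search_index(query)[:3]  # step 3: top snippets as context
|       context = "\n".join(
|           f"[{i + 1}] {snippet}" for i, (_, snippet) in enumerate(docs)
|       )
|       prompt = (
|           "Answer the question using only the numbered snippets "
|           "below, citing them like [1].\n\n"
|           f"{context}\n\nQuestion: {query}\nAnswer:"
|       )
|       resp = openai.Completion.create(  # step 4: answer in context
|           model="text-davinci-003",
|           prompt=prompt,
|           max_tokens=200,
|           temperature=0,
|       )
|       answer = resp["choices"][0]["text"].strip()
|       return answer, [url for url, _ in docs]  # step 5: answer + sources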
| davidy123 wrote:
| If it does that, and properly highlights the sources, I
| have nothing to criticize. Though, I think the results
| won't be as good if it doesn't use its entire breadth (if
| it does, the problem reappears).
| SomeDog76 wrote:
| I have the feeling Bing is on the way to overtaking Google even
| without OpenAI; recently Google results seem to struggle more
| and more. Whenever I am trying to remember a word, if I type
| something similar-sounding Google will give me no results at
| all. Bing (I use DDG, which serves Bing results) will most of
| the time figure out what I was trying to find.
| mejutoco wrote:
| In my experience, if anything, Google tries too hard to guess
| what I mean, not the opposite.
| lancesells wrote:
| I'm a DDG user as well and I feel it's getting loaded with
| spam links much like Google. Usually my most confident
| searches are watching a YouTube video on what I'm looking
| for. I would prefer text to video but it seems that's where
| we're headed until video becomes cheaper than text to produce
| (unlikely).
| Fostewrs wrote:
| [flagged]
| chabes wrote:
| "Learn to code."
|
| That's what folks were crassly saying to truck drivers, baristas,
| and cashiers.
|
| The writing was/is on the wall: AI is only going to get more
| powerful and able to be applied to more and more complex tasks.
|
| The thought was that "unskilled" labor would be the lowest
| hanging fruit, that automated AI - with some (but minimal) human
| oversight - would replace serious chunks of the workforce in
| various minimum-wage and "blue collar" sectors.
|
| Machines don't necessarily need to sleep; they don't have labor
| unions or laws that require healthcare or overtime pay. They
| don't get upset, take things personally, or seek revenge or
| reciprocity like a person does.
|
| Sounds like that could be a threat to many kinds of jobs, many of
| them "bullshit jobs" (in the words of Graeber), but others as
| well.
|
| It seems (to me, at least) that the more imminent threat is text-
| based AI - with some human oversight - replacing large swaths of
| the tech workforce (many of whom were leading the narrative about
| the truck drivers).
|
| The incentives for companies like Microsoft, Amazon, Google, and
| others, to go this route are obvious, beyond the stated reasons
| why "low-skilled" labor is at risk. They already have enormous
| investments, acquisitions, projects, established platforms, and
| infrastructure related to AI.
|
| I expect to see more partnerships like the one between Microsoft
| and OpenAI, from all of the major tech companies.
|
| I also believe the connection between the
| acquisitions/partnerships and the mass layoffs will become more
| and more obvious as these acquisitions/partnerships continue to
| happen.
|
| I can't be the only person that is noticing this...
| Der_Einzige wrote:
| A lot of very smart people, both within NLP research, and here,
| are in complete denial about what the proliferation of high
| quality LLMs means for their jobs and earning potential.
|
| The only thing which makes me less sad is that I'm pretty sure
| Moravec's paradox is actually not all that real, but is more due
| to the relative lack of engineering interest in solving
| continuous control problems. Apparently reinforcement learning
| on transformers works now (RLHF in ChatGPT). This implies that
| we should see highly effective continuous control models very
| soon. Robots are coming for physical labor, it'll just take a
| bit longer.
|
| Shit man, when I was last in South Korea, I felt like I was
| living in the future. They had many 24/7 drink cafes where it's
| literally just a robot arm that makes the drink for you for a
| few dollars.
|
| It is painfully ironic to knowledge workers that they are
| destroying their own earning potential, but physical labor is
| not safe. Nothing is safe.
| kranke155 wrote:
| It will take 10 years for folks to realize the
| layoffs/unemployment are due to AI, and 20-30 years for a
| coherent political movement to show up that lays out a
| post-AI-labour society.
|
| In the meantime, rocky road.
| scifibestfi wrote:
| Here's the weird thing though. Bullshit Jobs could already be
| eliminated, yet they are not. Why not? Why would companies wait
| until AI to eliminate what we already know are Bullshit Jobs?
| ilaksh wrote:
| If I'm understanding this, the Codex models like code-davinci-002,
| limited to 10-20 requests per minute on OpenAI, are now
| "generally" available through the "Azure OpenAI Service", BUT
|
| - you still have to apply to
| https://customervoice.microsoft.com/Pages/ResponsePage.aspx?...
|
| - you will be rejected if you use a personal email address
|
| - you will be rejected if you are not a "managed customer"
|
| - you will be rejected if.. Microsoft thinks you are a real
| competitor to any of their products?
|
| So my site aidev.codes will be dead in the water if I can't
| navigate these things or afford the "managed" part. What is that?
| I don't know what a "managed customer" is but I assume it's very
| expensive.
|
| I mean, this is really the opposite of Open.
|
| Or am I missing something? Is it really generally available and
| they took all of that stuff off?
|
| BTW I am looking for an investor:
| https://news.ycombinator.com/item?id=34494480
| jacooper wrote:
| > I mean, this is really the opposite of Open.
|
| Cocks gun, _Always has been_
| https://en.meming.world/wiki/Always_has_been
|
| OpenAI was never really open in the first place.
| ilaksh wrote:
| Right.. I know.. have been telling people that.. there just
| doesn't seem to be anything equivalent as an alternative.
| code-davinci-002 is amazing. Way better than Tabnine, for
| example (in my limited testing), and also much better than
| text-davinci-003 for code. The open source programming LLMs
| I saw were kind of a joke compared to OpenAI's models.
| ilaksh wrote:
| FYI I asked their sales chat what a "managed customer" is.. I
| am not sure if it was really a person or not. Possibly a bot.
| They never really defined it, but took down my info and said a
| "relevant partner would contact me ASAP".
| bogomipz wrote:
| The post states:
|
| >"Azure's unique architecture design has been crucial in
| delivering best-in-class performance and scale for our AI
| training and inference workloads."
|
| Can someone say what is unique about the Azure cloud architecture
| here vs other cloud vendors?
| evan_evan wrote:
| Right after firing 11,000 employees...
| mritchie712 wrote:
| > Azure's unique architecture design has been crucial in
| delivering best-in-class performance and scale for our AI
| training and inference workloads
|
| translation: Microsoft is not Google (who wishes we'd explode).
| hmate9 wrote:
| This blog post offers no specific detail about the partnership
| (dollars involved and terms). Are those details public somewhere
| else?
| ffhhj wrote:
| They are running at capacity now. Can they scale fast enough?
| ddmma wrote:
| They create the need by building limitations on the free tier.
| bfeynman wrote:
| They aren't running out of capacity, they just can't afford
| 100k+ a day in compute. The number of users who will pay will be
| much smaller, so I doubt they will have problems.
| lopis wrote:
| They are running at capacity for free users. They will be able
| to afford much higher capacity with MS money, and probably stop
| providing the service for free soon.
| cactusplant7374 wrote:
| One request costs about 1 cent to process? Is that what
| Altman is saying?
| [deleted]
| dragonwriter wrote:
| They have as many free users as they are willing to pay for to
| present a free demo; rest assured that the fee for paid users
| is more than sufficient to pay for whatever capacity they need.
| sheerun wrote:
| Your data belongs to us
| londons_explore wrote:
| Let me translate this blog post out of corpspeak...:
|
| OpenAI has agreed to use Azure exclusively and for free, in
| return for Azure massively scaling up its GPU capacity (whose
| limits OpenAI was hitting before, hence threatening to move). In
| return, OpenAI gives an extensive license to Microsoft to use
| ChatGPT in Bing (and other products).
|
| Now why couldn't the blog post have just said that rather than
| hiding all the details?
| diogopublio wrote:
| I am 100% for clear and direct communication; in fact, this is
| what ChatGPT does haha
| MuffinFlavored wrote:
| How many GPUs does OpenAI exclusively use daily/how many did it
| take to train?
| posharma wrote:
| OpenAI threatened to move? Do you have evidence to support this?
| naillo wrote:
| Also as part of this exchange Microsoft will own 49% of the
| company.
| bossyTeacher wrote:
| source?
| mimotomo wrote:
| https://www.semafor.com/article/01/09/2023/microsoft-
| eyes-10...
|
| (over German Golem.de - https://www.golem.de/news/gpt-dall-
| e-copilot-microsoft-soll-...)
| kenjackson wrote:
| What happens to the non-profit aspect of OpenAI as a result of
| this deal? How much money are we talking about, that is, what's
| the scale of the deal? Do you have any proof that building on
| Azure will work for OpenAI?
| funnymony wrote:
| Accounting tricks will make it look like zero profit.
| kranke155 wrote:
| I don't even understand how there IS a non-profit aspect to
| OpenAI at this point?
|
| Honestly DeepMind seems to be doing far more important work.
| Health etc.
| dang wrote:
| If someone finds a third party article that reports on the
| significant new information directly, we can switch the URL to
| that.
|
| https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
| bayindirh wrote:
| Because they didn't want to openly say "Microsoft gives us GPUs
| and allows us to use them for free, hence practically buys us.
| In return we almost exclusively license what we produce to
| them, for free."
|
| Because it looks bad. Microsoft wants to be seen in a different
| light, and OpenAI wants to be perceived as an open, academic,
| capped-profit for a noble reason collective.
|
| It's all smoke and mirrors.
| OnlineGladiator wrote:
| > OpenAI wants to be perceived as an open, academic, capped-
| profit for a noble reason collective.
|
| Does anybody actually believe this? Their name has been a
| running joke for years already.
| pkdpic wrote:
| Not to diminish anyone's appreciation for OpenAI's
| increasingly performance-art level Orwellian name but I
| have found myself believing that their preference would
| still be to secure funding that would allow them to live up
| to the implied openness if it was ever an option. Sam
| Altman has stated over and over again that they tried to
| solicit interest / investment from public sources like
| governments before resorting to private capital. Maybe I
| just want to believe that their hearts are in the right
| place but that is the impression my advanced youtube and
| wikipedia research has left me with.
| ShamelessC wrote:
| Sam Altman is also deeply entrenched in the land of
| venture capital, where a CEO's ability to bs is as
| important as their level of media experience.
| rvnx wrote:
| Brother of OpenDNS
| shagie wrote:
| Cousin to OpenVMS, child of The Open Group and TOGAF
| which doesn't mean what you first think it means (The
| Open Group Architecture Framework).
| logicallee wrote:
| You ask "does anyone believe this"? I decided to think
| about whether I believe OpenAI is "open, academic, capped-
| profit for a noble reason collective".
|
| I'll list some pros/contras in no particular order:
|
| CONTRA: I can't download ChatGPT and run it locally. Why
| not? Clearly they are hoarding it.
|
| PRO: They genuinely overtrain toward responses that the
| model judges benefit humanity, to the point of failing at
| the task. I can't think of a specific example but everyone
| knows what I mean. It is a frustration to users and doesn't
| help their image. They seem to be overtraining this way out
| of the goodness of their hearts.
|
| PRO: its non-profit or capped-profit structure would let it
| behave in the way it claims it is behaving. (By contrast, a
| for-profit public company in some sense has a fiduciary
| responsibility to maximize value for shareholders rather
| than benefit for humanity.)
|
| PRO: ChatGPT is available for free and Dall-E gives some
| free credits.
|
| CONTRA: Dall-E doesn't give enough free credits. It might
| have a longer wait time for free users but it shouldn't
| hard cap them at a certain number of free credits per
| month. For example perhaps free users could not be subject
| to any reasonable usage limits (a few hundred images per
| day should be fine) but could have to wait 1 extra minute
| for each one.
|
| CON: its architecture isn't open, it doesn't share how it
| trained its models, it doesn't publish its datasets and
| parameters that would let other companies achieve the same
| results (reproduce their work and have their own version),
| its research is closed off and unavailable.
|
| CON: the company is not directly applying AI to proactively
| solving social problems.
|
| CON: (debatable) some users who are not sources of revenue have
| been banned for experimenting with the chatbot - nothing near
| hard legal limits, just things like exploring their sexuality
| with it. It's just a chatbot; it would make sense to exclude
| such conversations from further training data or feedback, keep
| researchers at OpenAI from having to review unsavory
| conversations, or even warn or at most temporarily ban the user.
| An account closure is super extreme for natural forms of
| interaction that don't go near legal limits. It does not seem
| like an open company. (Alternative viewpoint: on the other hand,
| we keep certain flamebait subjects out of HN. Perhaps having
| certain subjects bannable is necessary for their open mission at
| a larger level - specifically, users will stop going there for
| sex since they associate it purely with productivity, similar to
| how we associate HN with intellectual curiosity, which is a
| result of very heavy moderation here and without which this
| place would not exist in its form?)
|
| Overall based on the above considerations, I think that
| OpenAI is not "open" but if it started publishing
| downloadable models for anyone to run, I would feel
| differently.
| drdaeman wrote:
| It doesn't have to publish the models, but everything
| necessary to do so (training toolset, code that runs the
| model itself, etc).
|
| I think over 99.999% of people would be totally cool if
| it'd be "Hey, we ran this on a crazy number of GPUs
| feeding it half of the Internet and then some, and got a
| damn fine model you can try on our site. It cost us an arm and a
| leg so we can't just share it with everyone for
| free. But if you want your own chatbot and have a spare
| fortune - here's our research and tooling, have fun with
| it. Oh, and here's what we fed it to prevent it from
| spewing some hatred, misinformation and other bullshit -
| a socially responsible thing to do. Good luck."
| amitport wrote:
| The openAI brand is now recognized much beyond the circles
| that know anything about it.
|
| For most people it is "the chatGPT organization" and
| nothing more.
| [deleted]
| linhns wrote:
| I believe ChatGPT spewed this out. No one seems to know what the
| cap is, so "capped-profit" is crap.
| jefftk wrote:
| The profit cap started out as 100x investment:
|
| _economic returns for investors and employees are capped
| (with the cap negotiated in advance on a per-limited
| partner basis). Any excess returns go to OpenAI
| Nonprofit. Our goal is to ensure that most of the value
| (monetary or otherwise) we create if successful benefits
| everyone, so we think this is an important first step.
| Returns for our first round of investors are capped at
| 100x their investment (commensurate with the risks in
| front of us), and we expect this multiple to be lower for
| future rounds as we make further progress._
|
| https://openai.com/blog/openai-lp/
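|
| (To illustrate that 100x cap with made-up numbers: a $10 million
| first-round investment would be capped at $1 billion in returns,
| with anything above that going to the OpenAI Nonprofit.)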
| atmosx wrote:
| Absolutely. In a single week I heard from 6 different people,
| some technical, others not so much, saying that OpenAI is
| producing open-source software for free. The use of the word
| "open" fooled lots and lots of people who believe their software
| is open source and free.
| ben_w wrote:
| I guess this is going to be Java/JavaScript confusion all
| over again.
| EGreg wrote:
| To be fair, if they open sourced ChatGPT, they'd accelerate
| the utter demise of all internet forums and articles by a
| couple years. As it is we must wait for the likes of
| StabilityAI / LAION to copy them before ushering in the
| dystopian future!
|
| Update: OK maybe Chinese AI companies will do it, they
| don't seem to cooperate with any nonproliferation
| agreements either
| ben_w wrote:
| If GPT family translations are similar quality to a
| Google Translate round trip from English to Chinese and
| back, I don't think it's going to make a massive
| difference to forum quality.
|
| (I'd try it now, but servers are at capacity for me).
| achow wrote:
| Sam Altman addressed that when he was asked that question on the
| 'How I Built This' podcast. Essentially: OpenAI did not realize
| the massive scale they needed to be successful. When they
| realized this, they could not raise any funding as a
| 'non-profit'. They asked the government, which did not want to
| fund it, and other sources; in the end they did not have any
| other recourse.
|
| Sam Altman says 90% of the funding was needed for compute power,
| but money was also needed for things like buying datasets and
| paying employees, so that they could compete with the likes of
| Google to retain them. If they had not done this, they would
| very soon have become irrelevant.
|
| So to retain the earlier intent (for the greater good) they put
| in a bunch of 'safety features' around the funding - e.g. the
| 'Profit Cap': after 100X, profits would be distributed to the
| world (in some way). Similarly, there were a few others he
| talked about.
|
| The relevant portion starts at 32:39 mark in the following
| podcast: https://open.spotify.com/episode/3oOX1QHLPw9uvLL5L
| mBk28?si=s...
| OnlineGladiator wrote:
| It sounds like you agree with me that OpenAI is poorly
| named.
|
| "We wanted to be open, but we couldn't - we kept the name
| anyway."
| meowkit wrote:
| The argument then becomes how to interpret "Open", not
| that you both agree.
|
| If they keep publishing their research, then OpenAI seems
| like a fine name to me.
| DrBenCarson wrote:
| It's not even open source lol
|
| I get the need to make money but at least contribute OSS
| to the community they're building a product on....
| OnlineGladiator wrote:
| And Russia never invaded Ukraine. You can stretch the
| meaning of a word all you want, it doesn't change the
| fact that the people misrepresenting the word are acting
| in bad faith.
| cardine wrote:
| They've largely stopped publishing.
| make3 wrote:
| As much shit as we can give them for something as trivial as
| their name, consider their contributions to AI: GPT-1 (which
| beat all benchmarks, led to BERT, and was basically the "real"
| BERT itself), GPT-2, which was really impressive back then,
| GPT-3, which was an incredible improvement that people had not
| seen coming, PPO in reinforcement learning, reinforcement
| learning from human feedback in NLP, InstructGPT (the real
| ChatGPT before ChatGPT), and now ChatGPT.
| mcbuilder wrote:
| It works. I was watching some "normie" YouTube creator using
| ChatGPT in a face-off with him over some domain-specific
| questions and answers. You could tell he was new to the LLM
| thing, but he said it was open source right in the intro.
| Probably a common mistake for a company with "Open" in the name.
| ShamelessC wrote:
| Next level gaslighting.
| joe_the_user wrote:
| I'd just add that Microsoft already owned a significant
| portion of OpenAI through its various earlier investments
| ($1 billion in 2019).
|
| see: https://openai.com/blog/microsoft/
| [deleted]
| zxienin wrote:
| Just TL;DR such corporate news releases with ChatGPT.
| impulser_ wrote:
| Microsoft being late to the party is going to cost them a lot of
| money.
|
| Google and Facebook are the clear leaders in AI, and they have
| been spending a lot of money building out their AI
| infrastructure over the last decade. You haven't heard much
| about Microsoft's investments other than them buying rights to
| OpenAI models. Seems like they could have saved a lot of money
| if they had started investing years ago.
|
| Google bought DeepMind for 400m 8 years ago. You could argue
| DeepMind is on the same level or above OpenAI.
| [deleted]
| rejectfinite wrote:
| With OpenAI + Office 365 search, Bing is actually going to be
| sick. I already like the layout better than Google.
| T3RMINATED wrote:
| [dead]
| bogomipz wrote:
| The WSJ article on this announcement states:
|
| >"OpenAI was in talks this month to sell existing shares in a
| tender offer that would value the company at around $29 billion,
| The Wall Street Journal reported, making it one of the most
| valuable U.S. startups on paper despite generating little
| revenue."
|
| Can anyone say why they wouldn't have gone that route? Is that
| valuation possibly based on market conditions from a year ago?
|
| [1] https://archive.is/slxcE
| rmac wrote:
| author: no one
|
| quotes in article: none
|
| dates / timelines for partnership deliverables: none
|
| was this written by chatgpt?
| jay-barronville wrote:
| The "author" appears to be "OpenAI" ( _i.e._ , posting on
| behalf of the company) and they've posted like that numerous
| times before [1].
|
| [1]: https://openai.com/blog/authors/openai
| moffkalast wrote:
| I'll take that as a yes.
| IncRnd wrote:
| I asked a question of chatgpt.
|
| ---
|
| Create a PR release in 10 paragraphs about Microsoft extending
| their partnership with OpenAI, by investing another $3.2
| billion.
|
| Paragraph 1: Microsoft is excited to announce that it is
| extending its partnership with OpenAI by investing an
| additional $3.2 billion. This partnership has been a key driver
| of innovation and progress in the field of artificial
| intelligence, and we are thrilled to continue our work
| together.
|
| Paragraph 2: The investment will be used to support OpenAI's
| ongoing research and development efforts, as well as to expand
| the company's capabilities in areas such as machine learning,
| natural language processing, and computer vision.
|
| Paragraph 3: With this new investment, Microsoft and OpenAI
| will continue to push the boundaries of what is possible with
| AI, driving new breakthroughs and advancements in the field.
|
| Paragraph 4: This extended partnership will also enable
| Microsoft to bring the benefits of OpenAI's technology to its
| customers and partners, helping to drive new business
| opportunities and enable organizations to take advantage of the
| latest AI advances.
|
| Paragraph 5: In addition, Microsoft and OpenAI will be working
| together to promote responsible AI practices and ethical
| guidelines, ensuring that the technology is used in a way that
| is safe, fair, and transparent.
|
| Paragraph 6: The partnership between Microsoft and OpenAI has
| already resulted in some significant accomplishments, such as
| building a language model that can generate human-like text,
| and creating a new system for natural language understanding.
|
| Paragraph 7: With this new investment, the two companies will
| be able to take their collaboration to the next level, creating
| new possibilities for AI applications across various
| industries.
|
| Paragraph 8: Microsoft CEO Satya Nadella said, "We are excited
| to continue our partnership with OpenAI to drive innovation and
| progress in the field of artificial intelligence. This new
| investment will enable us to bring the benefits of AI to more
| people and organizations around the world."
|
| Paragraph 9: OpenAI CEO Sam Altman said, "We are thrilled to be
| continuing our partnership with Microsoft. Their support and
| investment will allow us to continue to make significant
| advances in AI research, and bring the benefits of this
| technology to more people and organizations."
|
| Paragraph 10: The extended partnership between Microsoft and
| OpenAI is a major step forward for the field of artificial
| intelligence, and we are excited to see what the future holds.
| Together, we will continue to push the boundaries of what is
| possible and drive progress in this rapidly evolving field.
| koolba wrote:
| > I asked a question of chatgpt.
|
| > Paragraph 8: Microsoft CEO Satya Nadella said, "We are
| excited to continue our partnership with OpenAI to drive
| innovation and progress in the field of artificial
| intelligence. This new investment will enable us to bring the
| benefits of AI to more people and organizations around the
| world."
|
| > Paragraph 9: OpenAI CEO Sam Altman said, "We are thrilled
| to be continuing our partnership with Microsoft. Their
| support and investment will allow us to continue to make
| significant advances in AI research, and bring the benefits
| of this technology to more people and organizations."
|
| There's something particularly off-putting about machine
| generated quotes attributed to an actual person. It's
| completely different than a person generating the content
| themselves.
| mft_ wrote:
| In companies of Microsoft's size, the quotes are almost
| certainly layers away from (in this case) Satya having any
| input other than final signoff. The communications team
| will have written the text, including the quotes, and it
| will likely have gone through the hierarchy for review and
| approvals.
|
| So genuinely, does it really matter whether the original
| much-reviewed PR-blah quote is written by a junior comms
| human, or an AI algorithm?
| Miraste wrote:
| This is why ChatGPT is so good at corporate language. It
| takes no understanding because it was already 95%
| content-free.
| s1k3s wrote:
| > This multi-year, multi-billion dollar investment from Microsoft
| follows their previous investments in 2019 and 2021, and will
| allow us to continue our independent research and develop AI that
| is increasingly safe, useful, and powerful.
|
| I laughed at independent research. Why do they even bother..
| Kelteseth wrote:
| > We've also worked together to build OpenAI's technology into
| apps like GitHub Copilot and Microsoft Designer.
|
| Good. I get better results from ChatGPT than from Copilot. For
| example, Copilot would often invoke a bash command for copying a
| file in my Python script, instead of using Python's built-in
| shutil.
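|
| For reference, a minimal sketch of the difference (file names
| are just placeholders):
|
|   import shutil
|   import subprocess
|
|   # What Copilot tends to suggest: shelling out to cp
|   subprocess.run(["cp", "source.txt", "dest.txt"], check=True)
|
|   # What I actually want: Python's built-in shutil
|   shutil.copy("source.txt", "dest.txt")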
| kixxauth wrote:
| Copilot may be one of the most valuable early use cases for
| chat AI.
|
| I've been brushing off AI as overhyped, but this is very
| compelling. I believe the real crux of software engineering is
| thinking about the problem and organizing solutions today which
| can be changed/improved/iterated in the future. Programmers too
| often overweight the time it takes to type things (using short
| variable & function names or creating terse 1 liners). But if
| our objective is to make code easy to change/improve/iterate in
| the future, then it needs to be readable now.
|
| The nice autocomplete features in most IDEs have been a huge
| win to productivity along with Google search. I think chat AI
| could be an order of magnitude improvement.
| revskill wrote:
| Agree. At least it could help me read library code and automate
| some tasks for me, like how to get some things from a base
| class in a library.
| brookst wrote:
| Copilot may be amazing, but I am having a hard time adapting.
| Writing code I'm in flow, and reviewing/debugging I'm in
| flow, but the constant appearance of suggestions that I need
| to evaluate for keep / fix / discard while writing is messing
| with my focus.
|
| This might just be the equivalent of the old guy complaining
| that typewriters are better than word processors.
| oersted wrote:
| Is it public knowledge yet how much the new "multibillion"
| investment is?
| ddmma wrote:
| GPT-3 can generate some good press releases.
|
| I presume that with the integration of the OpenAI services into
| Bing and MS Office, Microsoft's mission (empower every person
| and every organization on the planet to achieve more) makes a
| lot of sense.
| adpirz wrote:
| For people more well versed in this space: does GPT / OpenAI more
| broadly have a meaningful moat? It seems like there will be a
| number of these models out there and this isn't as great as say
| Google's up-till-now advantage in search relevancy.
| dandiep wrote:
| A couple points which I don't see elsewhere:
|
| 1) They have the best quality model. Better quality means more
| users. More users means more data. Which means higher
| quality...
|
| 2) operationalizing & scaling these these models is non-
| trivial. I'm not sure what the state of distillation/pruning is
| for GPT-3, but I imagine they have figured out some proprietary
| techniques.
|
| 3) It's not just publishing a single model, but making it so
| people can fine tune and push their own. Because they've gotten
| good at 2, now anyone can create their own version of GPT
| customized for their use case.
|
| Will Google or others be able to do the same eventually?
| Definitely.
|
| The point I'm more making is that it's not just training the
| model and running it.
| echelon wrote:
| > but making it so people can fine tune and push their own
|
| How are they making it easy for people to fine tune their
| own?
| Spivak wrote:
| https://beta.openai.com/docs/guides/fine-tuning
|
| You can build your own model based on GPT in a way that
| users don't have to be in the weeds of AI research to do.
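|
| For context, a minimal sketch of that workflow with the openai
| Python package of that era (file name and base model are
| illustrative; see the guide above for the real details):
|
|   import openai  # reads OPENAI_API_KEY from the environment
|
|   # Training data: a JSONL file of {"prompt": ..., "completion": ...} pairs.
|   upload = openai.File.create(
|       file=open("my_examples.jsonl", "rb"),  # illustrative file name
|       purpose="fine-tune",
|   )
|
|   # Kick off a fine-tune job on top of a base model.
|   job = openai.FineTune.create(
|       training_file=upload["id"],
|       model="davinci",  # illustrative base model choice
|   )
|
|   # Once the job finishes, the resulting model is used like any other:
|   # openai.Completion.create(model=job["fine_tuned_model"], prompt="...")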
| whimsicalism wrote:
| I don't view any of those things as a meaningful moat against
| the other companies with AI labs.
|
| Specifically, training data is not primarily coming from
| interactions with the model. While with RLHF this data might
| become more important, it is still a very small portion.
| ben_w wrote:
| I don't know either way, but by way of example that it
| might be, the Google PageRank patent has expired, yet
| Google remains valuable because their personalisation of
| results became a moat.
| bamboozled wrote:
| If it was "open" it should not need a moat, nor have one.
| thethimble wrote:
| It seems like the fine tune dataset to go from GPT -> ChatGPT
| is pretty valuable, particularly because it is proprietary.
|
| Still, I agree with your characterization that we should see
| many similar models over time. As an example, see Deepmind's
| Sparrow: https://www.deepmind.com/blog/building-safer-dialogue-
| agents
| lmeyerov wrote:
| Yes & No.
|
| GPT <> ChatGPT: probably not. It's not hard for other big
| players to enter this space. It's mostly egg-on-face for Google
| that they haven't, given that Google basically invented the
| model that OpenAI uses and has big versions internally. There's
| nothing fundamental stopping Google Docs from adding ChatGPT to
| their UI and getting way more consumer training data than
| OpenAI can get without a similar play, or Apple from doing
| something. Similar to what happened with mapping software,
| Google/Microsoft/Azure & Chinese equivalents will all offer
| something with similar competitiveness, and then complements
| like Facebook/Salesforce will do more OSS to compete against
| them. That's already begun.
|
| Copilot: The interesting proprietary advantage IMO is program
| synthesis. It's really enabled by Microsoft VSCode <> GitHub
| <> OpenAI. Without even doing any AI investments, the winner
| of this fight might be GitLab, as Google/AWS/Salesforce/etc
| decide what to do. Before, GitLab might have been a nice
| vehicle for shift-left sales (cloud hosting, security scans,
| ..), but program synthesis UIs can make Software 2.0 real.
| mejutoco wrote:
| > There's nothing fundamental stopping Google Docs from
| adding ChatGPT to their UI and getting way more consumer
| training data than OpenAI can get without a similar play.
|
| OpenAI could get exactly the same (or more, idk) data by
| integrating into Teams, considering the Microsoft
| partnership.
| lmeyerov wrote:
| Totally!
|
| My point is ChatGPT _isn't_ a high-moat advantage for text/Q&A
| for Microsoft. Their top competitors here have a similarly huge
| UI footprint. In contrast, program synthesis has a much higher
| data moat.
| whimsicalism wrote:
| There are definitely more people using Docs than Teams.
|
| I doubt that Microsoft will allow OpenAI to train on
| teams data from other businesses.
| mejutoco wrote:
| You might be right, do you have a source?
|
| They are fine with tons of telemetry and Candy Crush ads on the
| Start bar. There were also other instances where Microsoft
| shared data before Google.
|
| In addition to that, one could argue they already share data
| from businesses' source code with Copilot.
| whimsicalism wrote:
| They don't share private GitHub data with copilot. Teams
| data is default private.
|
| Teams has 270 million monthly users (you can Google it,
| I'm looking at a geekwire post) and Google has 2 billion
| monthly g suite users (business insider)
| ianbutler wrote:
| No I don't believe they do, productwise. We'll see soon enough
| I imagine. The thing is even though I don't think they have a
| moat in terms of model/product. They have a moat in terms of
| talent and capital. Only a few teams operate at their scale and
| sophistication, and it's hard to get there.
|
| I view this as Microsoft paying for talent the same way
| DeepMind was initially integrated into Google, and at the same
| time making the bet that this space will continue to be
| immensely valuable and relevant going forward.
|
| Pretty exciting times all things considered!
| whimsicalism wrote:
| There are a lot of finicky things that go into training a model
| as large as this.
|
| But that knowledge will disperse and is already held in many
| competitor companies. I do not think that OpenAI has a
| substantial moat here.
| muskmusk wrote:
| I think that if you could conclusively answer that question you
| would be sipping drinks on a beach somewhere. The people who
| are investing seem to think so. Also, the applications of this
| tech are broader than search, but still include it. A company
| that has a serious chance of eating into Google's search revenue
| while also generating new revenue streams - what is that worth?
| What if you already have 1000 products that would benefit from
| the new capabilities? This is probably an easy investment
| decision even if Microsoft gains nothing from the actual
| investment itself.
| lonelyasacloud wrote:
| Training and the guard rails.
|
| Beyond that, if it becomes built into the (MS) tools that
| people are using then convenience is going to be a very hard
| barrier for Google (or anyone else) to overcome.
| jasondigitized wrote:
| G-suite?
| ttul wrote:
| Google will continue to integrate their own LLMs into their
| office suite. Microsoft needs OpenAI because their own LLM
| research hasn't been as fruitful. I don't see a huge moat
| here for Microsoft.
|
| Then again, Microsoft's office software is the "gold
| standard" (however poorly deserved) and even with amazing AI
| features, Google's stuff lacks in important ways that will
| keep Microsoft in a strong position with or without AI
| features.
| ruune wrote:
| Google still has its own platforms. If we take a look at the
| last generation of consumer AIs, voice assistants, Google
| definitely beat Microsoft, and not only because Cortana
| sucked.
|
| Microsoft may dominate the AI market for office stuff soon,
| but for general purpose language models Google still has a
| great shot, especially when it comes to mobile platforms
| partiallypro wrote:
| The problem is that Google's model relies almost entirely
| on advertising...and AI will simply be almost impossible to
| wrap into that model. Microsoft doesn't really have that
| handicap.
| [deleted]
| theropost wrote:
| I recently wondered if one of the reasons for Google shutting
| down Stadia was to quickly free up their GPU server stockpile
| and redirect the resources at GPT modelling, to help catch up.
| whimsicalism wrote:
| Google is not constrained by GPUs here and likely will train
| on TPU pods anyways.
| theropost wrote:
| Good point, I imagine they would be using those as well.
| Know of any resources for speed comparisons on similar
| models?
| echelon wrote:
| Great observation.
|
| If it wasn't prescient, it was incredible dumb luck.
| eternalban wrote:
| This technology is powerful and disruptive enough to merit social
| oversight. We get caught up in the wonder of the novel mechanisms
| and philosophical wanderings, whereas those who have the means to
| field this technology are effectively creating _facts on the
| ground_ as we muse about it.
|
| As just one dimension, let's briefly review education limited to
| testing. Options here range from (paid) cooperation between
| institutions and companies, to marketplaces for detection
| technology and testing systems, to complete rethink of testing at
| the institutional level, to regulatory intervention.
|
| The dynamics and outcomes resulting from considering the above
| _before_ versus _after_ widely available and improving LLMs
| become a fact of life are possibly very different. We haven't
| yet crossed the Rubicon here, and not sitting as captive
| spectators in a theater of the ultra rich remains an option.
| jacooper wrote:
| Microsoft is handling the AI revolution extremely well.
|
| They see the potential brand damage that comes with such a
| product, so they probably just hold less than 50% of the
| company, to be able to say "this is not us".
|
| This then allows OpenAI to experiment and wow people, while
| Google is anxious about showing even 10% of its crazy AI
| capabilities to avoid potential PR disasters.
| cableshaft wrote:
| What are you talking about? All you have to do is make an A.I.
| robot dance and crack wise, and you're golden. At least that's
| what the game Borderlands taught me.
| moffkalast wrote:
| Ah yes Borderlands, patenting the practical method of keeping
| superhuman AI in check.... stairs.
| capableweb wrote:
| I was 100% sure you were talking about Boston Dynamics before
| that last sentence.
| consumer451 wrote:
| I somewhat recall their CEO saying something like "my job
| is to get YouTube views" in an interview.
|
| edit: To clarify, I _think_ he was only half-serious.
| moffkalast wrote:
| Now I'm curious what the hell is the CMO's job then.
| jacooper wrote:
| Make it possible
| moffkalast wrote:
| Nah that's the CTO.
| jillesvangurp wrote:
| Agreed. MS has managed this extremely well. They've gone from
| Tay bot a few years ago to this.
|
| 1) they've recognized that none of this could emerge out of a
| corporate research department of a major trillion $ company.
| Just too rigid and constrained by day to day business concerns
| (e.g. Tay bot)
|
| 2) they recognized the strategic value to their business
| regardless of that risk (can't afford to let some more nimble
| player to leap frog them)
|
| 3) they identified a key startup partner that can get results
| here quickly and bank rolled them (OpenAI)
|
| 4) At the first hint of success, they consolidated early
| investments and are now building out a productive partnership
| with that partner.
|
| Doing so, they minimize their risk while keeping their options
| fully open and getting a decent ROI in the process.
|
| Contrast that with Google, which identified the same
| opportunity many years ago, declined to partner and insisted on
| doing everything in house, is also a trillion $ company, and is
| indeed not getting much of note done so far. Google is failing
| hard here. They are executing the strategy that Microsoft
| abandoned after Ballmer that wasn't working at all for them.
|
| Which was to be highly secretive, ignore outside innovation,
| and keep on betting on results from internal R&D instead, all
| while repeatedly failing at that. So, Google is now saying that
| they could have done this or that, better, faster, cooler, etc.
| Except they didn't and haven't. Could have; would have; should
| have; doesn't count here.
|
| This might just be the wake up call that Google has needed for
| a few years. Or not. We'll see. Time to put up or shut up for
| them. My guess is that they will still be figuring out how to
| respond to this for some time and are completely paralyzed by
| current events. The prudent thing for MS would be to not wait
| for that and move full steam ahead. Opportunity of the decade
| for them to grab some market share. This announcement is a good
| sign that they are doing that.
| mirker wrote:
| Agreed. GitHub and OpenAI are the current branding. Though they
| did have some twitter bots go bad years ago and maybe they
| learned from that?
| epups wrote:
| In theory, Google has an even better spin-off for AI
| (Deepmind).
|
| I think slowly Google is becoming more and more like IBM, a
| slow-moving giant that doesn't like risks.
| jacooper wrote:
| But Google outright bought Deepmind, that's the difference.
|
| And its well known, many news titles mention Deepmind
| alongside google, if not as "Google ai scientists".
| la64710 wrote:
| > So, we've partnered with Microsoft to deploy our technology
| through our API and the Azure OpenAI Service -- enabling
| enterprise and developers to build on top of GPT, DALL*E, and
| Codex.
|
| I wonder how much choice enterprises or large customers will
| have in deciding what data not to send to Microsoft to train
| their models. If OpenAI is tightly integrated into all Microsoft
| products this may be a real concern.
| alberth wrote:
| Model Governance?
|
| Correct me if I'm wrong, but it seems like some type of 3rd party
| Model Governance should be happening. Is it?
| ckastner wrote:
| > _In pursuit of our mission to ensure advanced AI benefits all
| of humanity, OpenAI remains a capped-profit company and is
| governed by the OpenAI non-profit. This structure allows us to
| raise the capital we need to fulfill our mission without
| sacrificing our core beliefs about broadly sharing benefits and
| the need to prioritize safety._
|
| All this time, I was entirely unaware of this.
| PedroBatista wrote:
| I get that these things cost a huge amount of money and there's
| a "lot of opportunity" (aka make money and influence), and I
| don't have many problems with that, except when the
| signal-to-scumbaggery ratio gets too skewed.
|
| But what I really hate about this whole OpenAI thing is their
| chosen path to have their cake and eat it too. Sam Altman seems
| to be something like the love child of Musk and Zuckerberg, and
| one of the main inherited traits is their lack of honesty.
|
| Satya Nadella is... Satya Nadella, there's a reason he was
| chosen to be the CEO of Microsoft, and while I enjoy seeing the
| Google demi-gods squirm, this whole OpenAI non/capped/profit
| thing stinks, and I really don't see anyone involved as capable
| of, or having the character for, being something better than the
| current tech oligarchy.
| ProAm wrote:
| I take this in the same vein as "Patagonia Founder Donates
| Company to Charity" and view it as a clever shell game.Mostly
| because Im cynical and have watched SV/VC game way too long to
| be healthy.
| jrochkind1 wrote:
| Oh, I think it's way more of a marketing scam than the
| Patagonia thing, which I think was kind of legit. This one
| doesn't even _sound_ legit, even if they do exactly what they
| say... which is almost nothing. They aren't even really
| saying they'll do anything different with regard to profit.
| With Patagonia, I think the founder and his heirs really have
| given up lots of profit they could have had, immediately, to
| dedicate it to other causes. (I think?) Nobody's given up
| anything here.
| mxkopy wrote:
| The Patagonia thing was not legit at all according to Adam
| Hates Everything:
| https://www.youtube.com/watch?v=0Cu6EbELZ6I
|
| EDIT by "not legit" I mean "not authentic"
| mardifoufs wrote:
| That's usually a pretty unreliable source. Especially
| since he doesn't seem to know a lot about fiscal/tax
| laws, in a country he doesn't even live in.
| ProAm wrote:
| https://fortune.com/2022/09/16/patagonia-founder-legal-
| tax-l...
| jefftk wrote:
| Adam's argument seems to be:
|
| 1. If instead of donating the company he had left it to
| his kids he would have paid a lot in taxes (2:58)
|
| 2. He donated the voting shares of the company to a 501c4
| that will remain controlled by his family and is allowed
| to lobby the government (4:10)
|
| 3. Normally when you make a donation you're giving up
| influence over what happens after that (4:25).
|
| 4. Other billionaires do other things (rest of video)
|
| But #3 isn't actually true: any of us can donate to a
| donor-advised fund, which will let us later choose what
| charity we want the money to go to. This is a good idea
| if you want to donate but haven't decided where to donate
| yet, or want to fund opportunities that aren't available
| yet. They did it through new organizations instead of
| opening an account at Fidelity, but it's the same thing
| other than the scale. I wouldn't call your donations "not
| legit" for using a DAF.
|
| Similarly, Adam sort of implies that #2 was tax-
| deductible, but donations to a 501c4 aren't. They had to
| pay tax on those shares based on their fair market value.
|
| Overall, I don't see how this makes the donation no
| longer "legit" or "authentic"? By making the donation he
| has given up almost all of the benefit of having that
| money: he can't spend it for the benefit of himself or
| his descendants anymore. It can't buy them yachts, fancy
| houses, etc. Instead, they have to use the money to
| benefit others, which is why we give a tax break for it.
| mxkopy wrote:
| All I can say is that once you get to a certain level of
| wealth, money really is just a means to an end. The fact
| that they have less doesn't matter if they get their
| ends, one of which in this case is substantial influence
| over whatever organization they set up with this money.
| jrochkind1 wrote:
| Yeah, ok. I mean I agree with his overall point that we
| shouldn't be like "oh hooray for the kind-hearted
| billionairres", there is something in it for them in how
| they have chosen to donate it of course. And they are
| still pretty darn wealthy already -- his kids probably
| still won't need to work, from existing already extracted
| profits.
|
| But that does seem a lot more real than the OpenAI
| shenanigans; they have actually done _something_, and
| they have given up being even more fabulously wealthy
| than they are already, even if they still have direction
| over how the money is used, including lobbying -- both
| for climate change, but ok, let's say also for things
| that benefit them.
|
| They've still done _something_, unlike the OpenAI thing,
| which seems like giving up some hypothetical future profits
| that probably wouldn't have happened anyway, and making no
| difference at all for the foreseeable future -- no
| difference but PR advantage.
| ProAm wrote:
| > Overall, I don't see how this makes the donation no
| longer "legit"?
|
| It's not necessarily not legit, but he managed to keep a $3B
| business under family control and bypass paying around
| $700M in taxes in doing so. So the altruistic messaging
| that it was donated to save the world is mildly two-
| faced. There is nothing wrong with it, but the news
| stories did leave out a few of the details. I only bring
| this up because the message about OpenAI being structured
| in such a way doesn't pass the smell test knowing SV/VC
| and the key players involved. Again, I admit I'm cynical
| and can very well be wrong, and it also doesn't affect my
| life, so why do I care? But I bring it up for conversation
| on HN because I feel it's fair to discuss it and the
| possibilities. [1]
|
| [1] https://fortune.com/2022/09/16/patagonia-founder-
| legal-tax-l...
| jrochkind1 wrote:
| Thank you, I'm interested in this!
|
| If anyone has a good written text account critical of the
| Patagonia thing, I'm interested in that too; video isn't
| my preferred consumption format.
| unicornmama wrote:
| Right, like all those cryptocurrency companies structured as
| non-profit foundations in Switzerland.
| gen220 wrote:
| If you want to develop a healthy counterpoint to that
| cynicism, you should consider reading more about Patagonia.
| Speaking as somebody who's generally cynical about these
| moves as well.
|
| I'm less read-up on OpenAI. It does feel to me like they've
| diluted the original non-profit/openness mission to the point
| of it being an interesting historical quirk, rather than an
| ongoing, guiding focus.
| ProAm wrote:
| > you should consider reading more about Patagonia
|
| I'd be open to reading more. I think they are largely a
| good company, but feel like this move was more of a tax
| dodge and a way to ensure generational wealth than an act of
| altruism.
| But I am open to being wrong about it.
|
| But everyone involved in OpenAI doesn't give me warm and
| fuzzy feelings at all. I admit it's largely cynicism until
| I know more. But there isn't enough time in the day to do
| real research on every subject and topic that comes up on
| HN and every other discussion board I participate in, so
| it's difficult to be knowledgeable about everything and
| still maintain a life. And even if I were read up on OpenAI,
| I have zero ability to do anything about it, and it likely
| won't affect my life in a meaningful way either (and this
| is true of most everything I read about, so I'm not singling
| out OpenAI as uniquely not worth my time). So it is a little
| pointless, or more of a time waste, I admit.
| ethanbond wrote:
| Company with decades of going way, way outside the norm to
| operate as an ethical organization at virtually every
| level does an additional virtuous task: oh ya that's a
| tax dodge
| ProAm wrote:
| It is, I'm not saying they are 100% evil like some
| companies we discuss on HN. But this was a tax dodge. [1]
|
| [1] https://fortune.com/2022/09/16/patagonia-founder-
| legal-tax-l...
| ethanbond wrote:
| What do you think that proves? Yes, it does lower your
| tax bill to give away gains before they're realized.
| That's because you do not realize your gains. That's
| called simply "choosing to earn less money."
|
| A tax dodge would imply the _purpose_ is to reduce your tax
| bill and still see the upside. There's literally no
| evidence of that. All the upside he/his family sees will
| continue to be taxed at the normal rate.
| gen220 wrote:
| If you're Yvon Chouinard, whose goal is to keep Patagonia
| going in perpetuity as a funding vehicle for
| environmental activism, what else could you do?
|
| For the record: he divided up the shares into voting and
| non-voting. He "donated" the dividend-earning shares into
| a 501c4 foundation whose mission is to invest in grass-
| roots environmental activism, and "donated" the voting-
| power shares into a separate trust, whose objectives are
| to ensure that Patagonia continues on the path he set as
| an example over the previous decades and to hold the 501c4
| accountable.
|
| The boards of these organizations are composed of the
| people whom he most trusts to fulfill his vision, a group
| of people that includes his children.
|
| Knowing the full context of their lives, it's hard to see
| it as anything other than one of the simpler
| solutions to a complicated problem.
| Closi wrote:
| The cap is 100x, so assuming Microsoft is investing billions at
| a current valuation of $29 billion as rumoured, the cap will
| only really come into play once OpenAI becomes the most
| valuable company in the world.
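|
| A rough back-of-the-envelope sketch of why (the ~$10B Microsoft
| investment figure below is a press rumour treated here as an
| assumption, and OpenAI has said it expects the multiple to be
| lower for later rounds, so these numbers are illustrative only):
|
|     # Illustrative only: the $10B figure is a rumour, not confirmed.
|     investment = 10e9        # assumed Microsoft investment, USD
|     cap_multiple = 100       # OpenAI LP's first-round profit cap
|     capped_return = investment * cap_multiple
|     print(capped_return / 1e12)   # -> 1.0, i.e. roughly $1 trillion
|
| A capped return of roughly $1 trillion is on the order of the
| market caps of the largest public companies, so the cap only
| binds in scenarios where OpenAI's success is already extreme.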
| alkjsdlkjasd wrote:
| Article about it here:
| https://techcrunch.com/2019/03/11/openai-shifts-from-
| nonprof...
|
| > Profits emerging from the LP in excess of the 100x
| multiplier go to the nonprofit, which will use it to run
| educational programs and advocacy work.
|
| > The board [of the non-profit] is limited to a minority of
| financially interested parties, and only non-interested
| members can vote on "decisions where the interests of limited
| partners and OpenAI Nonprofit's mission may conflict"
| lonelyasacloud wrote:
| Google apparently has a market cap of around $1.2 trillion [0],
| based largely on 2021 revenue of 256.7 billion U.S. dollars, of
| which 209.49 billion U.S. dollars came from advertising [1]. It's
| apparently fourth on the list of most valuable companies [2].
|
| If OpenAI takes a good chunk of Google's ad revenues then it
| doesn't seem that fanciful that it'll be up toward the top of
| market caps.
|
| [0] https://companiesmarketcap.com/alphabet-google/marketcap/
| [1] https://www.statista.com/statistics/266206/googles-annual-gl...
| [2] https://businessplus.ie/news/most-valuable-companies/
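|
| Purely as an illustration of the scale those figures imply (the
| 80% capture rate below is a made-up hypothetical, not a
| prediction):
|
|     google_mkt_cap = 1.2e12       # approx. market cap [0]
|     google_revenue = 256.7e9      # 2021 revenue [1]
|     ad_revenue = 209.49e9         # 2021 ad revenue [1]
|     revenue_multiple = google_mkt_cap / google_revenue   # ~4.7x
|     # If OpenAI hypothetically captured most of that ad revenue
|     # and were valued at a similar revenue multiple:
|     implied_valuation = 0.8 * ad_revenue * revenue_multiple
|     print(implied_valuation / 1e12)   # ~0.78, i.e. ~$0.8 trillion
|
| That would indeed put it toward the top of the market-cap tables,
| though whether any of Google's ad revenue actually moves is the
| contested question in the replies below.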
| tyre wrote:
| How would OpenAI take a majority of AdWords inventory?
| Maybe it could write the ads but you're paying for
| placement.
| llampx wrote:
| Placement on what? Search results that no one is using
| anymore?
| whimsicalism wrote:
| > If OpenAI takes a good chunk of Google's ad revenues then
| it doesn't seem that fanciful that it'll be up toward the
| top of market caps.
|
| OpenAI taking a large chunk of Google's ad revenue seems
| fanciful to me
| lonelyasacloud wrote:
| Google's CEO signaling a code red and inviting the
| founders back is not about Chrome's market share.
| napier wrote:
| More likely they bork a large chunk of Google's ad
| revenue by making information search and retrieval usable
| again under a UBI-style rationed free tier plus
| fast-but-not-cheap paid tiers. That's before you consider the
| potential use cases in information generation, greasing
| process management, and problem solving.
| whimsicalism wrote:
| They have no moat and people don't like to wait for
| results.
|
| People like ad-subsidized things; that is why they have
| ads rather than people paying for things.
| drexlspivey wrote:
| How does this cap work in practice? If I bought shares at $1
| and someone wants to buy them from me at $200 what happens?
| dorkwood wrote:
| You'd keep your return up to the cap; anything beyond it goes
| to the non-profit arm of their business.
|
| "But any returns beyond that amount... are owned by the
| original OpenAI Nonprofit entity."
|
| https://openai.com/blog/openai-lp/
| drexlspivey wrote:
| > The fundamental idea of OpenAI LP is that investors and
| employees can get a capped return if we succeed at our
| mission
|
| Sorry I still don't get it. If a private equity investor
| has shares and another investor wants to buy them off of
| him at 200x they can do that right? Are they obliged to
| give any excess returns to the non profit? Can't they
| just sell the shares at 50x and then buy them back
| (perhaps through some other entity) to get around that
| trivially?
|
| Or does this refer to returns from dividends?
| whimsicalism wrote:
| My guess is it likely has to do with dividends.
|
| But if your returns from the stocks are capped at 100x
| your share value, an efficient market would mean your
| share value never grows 200x.
| Throwaway045814 wrote:
| It's not just about that. Perhaps there are benefits in
| having control of the company that make the shares more
| valuable than the profits alone would suggest.
| Perhaps there's prestige in owning these shares.
| [deleted]
| tnel77 wrote:
| Since OpenAI isn't publicly traded, I don't think it's an
| issue.
|
| If they were to go public, rather than being purchased by
| Microsoft, I'd guess that this cap would go away. Wall
| Street isn't known for caring about poor people.
| charcircuit wrote:
| > Any excess returns go to OpenAI Nonprofit.
|
| https://openai.com/blog/openai-lp/
| brookst wrote:
| Market cap is not a measure of past profits.
| RegW wrote:
| or future?
| hsavit1 wrote:
| so pretty much another marketing stunt / scam
| timf34 wrote:
| 100x on profits* just for clarity
| ckastner wrote:
| It must have been in the first round [1], but that leaves open
| the question of whether this is still the case:
|
| > _Returns for our first round of investors are capped at
| 100x their investment (commensurate with the risks in front
| of us), and we expect this multiple to be lower for future
| rounds as we make further progress._
|
| [1] https://openai.com/blog/openai-lp/
| roflyear wrote:
| Isn't this a wild cap? I'm not an expert but I am aware of
| private deals that are less than 10x for similar
| structures.
| Kuinox wrote:
| If they reach AGI, or more simply replace a chunk of
| workers with AIs, it isn't far-fetched to reach these
| numbers.
| belter wrote:
| Oh please no...not the Tesla AutoPilot story again.
|
| These are basic language models, easy to reproduce, where
| the only barrier to entry is the massive computational
| capacity required. What is OpenAI doing that Google and
| others can't reproduce?
| mattnewton wrote:
| Apparently shipping without fear - Google had a lot of
| the fundamental research happen at Google Brain and
| developed an LLM to rival GPT and a generative model that
| looks better than DALL·E in papers, but decided to show no
| one and keep them in house because they haven't figured
| out a business around them. Or something; maybe it's fear
| of brand damage. I don't know what is keeping them
| from productionizing the tech. As soon as someone does
| figure out a business consumers are okay with they'll
| probably follow with ridiculous compute capacity and
| engineering resources, but right now they are just losing
| the narrative war because they won't ship anything they
| have been working on.
| napier wrote:
| Except unlike self-driving cars, they're repeatedly
| delivering desirable, interesting, and increasingly mind-
| blowing things that they weren't designed to do and that
| surprise everyone, including their makers: i.e. zero-shot
| generalised task performance. Public awareness
| of what unfiltered large models beyond a
| certain size and quality are capable of when properly
| prompted is obscured in part by the RLHF-jacketed
| restrictions limiting models like ChatGPT. There's
| relatively little hype around the coolest things LLMs can
| already achieve, and only a minute fraction of the surface
| potential has so far been scratched.
| roflyear wrote:
| I think a lot of people are misunderstanding what I
| meant. I meant that it is really high for a business that
| is marketing itself as a non-profit. I have seen
| similar structures that are like 10x profit caps, which
| seems reasonable. 100x is a lot of ceiling.
| logicchains wrote:
| If they reach AGI, the AGI isn't necessarily going to be
| happy to work for free.
| lma21 wrote:
| Do you think AGI will care about wealth at all (whenever
| this happens)?
| flir wrote:
| Wealth buys compute cycles (also paperclips).
| freejazz wrote:
| why wouldn't it?
| ben_w wrote:
| Wealth isn't the same thing to all people; wealth as
| humans define it isn't necessarily going to be what a
| superintelligence values.
|
| The speed difference between transistors and synapses is
| the difference between marathon runners and continental
| drift; why would an ASI care about dollars or
| statues or shares or apartments any more than we care
| about changes to individual peaks in the mid-Atlantic
| ridge or how much sand covers those in the Sahara?
| freejazz wrote:
| Wealth doesn't have to be the same thing for everyone for
| someone to care about it. That's evident already because
| some people care about wealth and others don't.
|
| What does the speed difference of transistors have to do
| with anything? Transistors pale in comparison to the
| interconnection density of synapses, yet it has nothing
| to do with wealth either...
| ben_w wrote:
| Everything you and I consider valuable is a fixed background
| from the point of view of a mind whose sole difference
| from ours is the speedup.
|
| I only see them valuing that if they're also extremely
| neophobic in a way that, for example, would look like a
| human thinking that "fire" and "talking" are dangerously
| modern.
|
| > Transistors pale in comparison to the interconnection
| density of synapses
|
| Not so. Transistors are also smaller than synapses by
| about the degree to which marathon runners are smaller
| than hills.
|
| Even allowing extra space for interconnections and
| cheating in favour of biology by assuming an M1 chip is a
| full millimetre thick rather than just however many
| nanometers it is for the transistors alone, it still has a
| better volumetric density than we do.
|
| (Sucks for power and cost relative to us when used to
| mimic brains, but that's why it hasn't _already_ taken
| over).
| knodi123 wrote:
| Depends on how it's grown. If it's a black box that keeps
| improving but not by any means the developer understands,
| then maybe so. If we manage to decode the concepts of
| motivation as they pertain to this hypothetical AGI, and are
| in control of it, then maybe no.
|
| There's nothing that says a mind needs an ego as an
| essential element, or an id, or any of the other parts of
| a human mind. That's just how our brains evolved, living
| in a society over millions of years.
| fragmede wrote:
| Depends on how opaque the box that holds it is. If we
| feed the AGI digital heroin and methamphetamine, it'd be
| controllable like actual humans are with those. Or I've
| been watching too much sci-fi lately.
| gnramires wrote:
| This is an interesting point. Motivation (and
| consciousness) is a complex topic, but for example we can
| see that drugs are essentially spurious (not 'desired' in
| a sense) motivators. They are a kind of reward given for
| no particular activity, that can become highly addictive
| (because in a way it seems we are programmed to seek
| rewards).
|
| Disclaimer: Somewhat speculative.
|
| I don't think aligning the motivation of an AGI, for
| example, with the tasks that are useful for us (and for
| them as well) is unethical. Humans basically have this as
| well -- we like working (to an extent, or at least we
| like being productive/useful), we seek things like food
| and sex (because they're important for our survival). It
| seems alright to make AIs like their work as well. I
| think depending on the AI, it also seems fair to give
| them a fair share of self-determination so they can not
| only serve our interests (ideally, the interests of all
| beings) but safeguard their own wellbeing, as systems with
| varying amounts of consciousness. This is little touched
| upon (I guess Philip K. Dick was a pioneer in the
| wellbeing of non-humans with 'Do Androids Dream of
| Electric Sheep'?), even in fiction. The goal should be to
| ensure a good existence for everyone :)
| Keyframe wrote:
| This company will not reach AGI. Let's be real here for
| a moment. This company doesn't even have a decent shot at
| Google's lunch if Google comes to its senses soon, which
| it will.
| yellow_postit wrote:
| _startup has no shot once incumbent comes to their
| senses_ is a claim that I think HackerNews of all places
| would be cautious about believing too fully.
|
| Is it likely Google or others with large research wings
| can compete with OpenAI? Very probably so, but I'm
| assigning a non-trivial risk that the proverbial emperor
| has no clothes and incumbents like Google cannot
| effectively respond to OpenAI given the unique
| constraints of being a large conglomerate.
|
| Regardless, it seems time will provide the answer in a
| couple of months.
| fragmede wrote:
| Yeah, especially since there's a Stripe Amazon
| partnership piece on the front page right now, and Amazon
| Pay's right there.
| Keyframe wrote:
| You _do_ understand that everything we've seen from OpenAI,
| Google has already shown us they have? Not to mention the OG
| research and being the primary R&D force behind the vast
| majority of the AI you're seeing. They haven't put it in the
| hands of users as directly yet, though, for reasons to be
| speculated upon.
| [deleted]
| Firmwarrior wrote:
| Sounds a lot like Xerox and GUIs, Microsoft and Web 2.0,
| Microsoft and smartphones, etc
| lostmsu wrote:
| I must say that both your and the parent's points are very
| enlightening.
|
| Yours, in that it follows that there's still quite a bit of
| room for smaller players to get ahead of OpenAI.
|
| The parent's, in that in order to achieve that, one can just
| leverage the public papers produced by bigger research
| labs.
| ben_w wrote:
| Depends on the timescale.
|
| I have the feeling that smaller players are about as
| likely to get past the GPT-n family in the next 2-3 years as
| I am to turn a Farnsworth Fusor into a useful power
| source.
|
| Both involve major technical challenges that might be solvable
| by a lone wolf: in the former case, reducing the data/training
| requirements; in the latter, stopping ions from wastefully
| hitting a grid.
|
| But in 10 years the costs should be down about 99%, which
| turns the AI training costs from "major investment by
| mega corp or super-rich" into "lottery winner might buy
| one".
| robotresearcher wrote:
| This tech is capital-intensive even when you know how to
| do it.
| lostmsu wrote:
| I heard estimates in the tens of millions of dollars. That's
| relatively attainable.
| alexeichemenda wrote:
| 100x is a great return even by YC standards, but the
| best returns that business angels, VCs and YC have had are
| on the order of 10000x (yes, ten thousand).
| So capping at 100x still makes it attractive for
| investors, yet leaves a lot of potential capital for the
| non-profit.
|
| As one example, Sequoia invested in Airbnb at $0.01 per
| share, and Airbnb's current stock price is $102, almost
| exactly a 10000x return. This happens more often than you
| think if you're not in the early stage & top VC world.
| roflyear wrote:
| When you invest in a company with this structure you're
| not doing it to make money; you're doing it b/c you
| believe in the product. That's why those structures
| exist, from my understanding.
| whimsicalism wrote:
| if you believe in the product so much, just donate the
| money.
|
| what you are saying is not true.
| roflyear wrote:
| I can only speak to what I'm familiar with; in my
| experience this has been the case. These
| individuals do donate to charity, but OpenAI is not a
| charity.
| CBarkleyU wrote:
| > but OpenAI is not a charity.
|
| Something doesn't have to be a charity to be donated money
| towards. The question was: Why aren't they donating the
| money? The answer is: Because they want to make money.
|
| Just FYI: OpenAI tried going the non-profit route. It
| didn't work because, surprise, surprise, in the grand scheme
| of VC things nobody wants to donate 10 billion dollars to
| anything.
| tomp wrote:
| _> but the best returns that business angels, VCs and YC
| have had is in the order of magnitude of 10000x_
|
| Aren't those equity returns? i.e. when you sell (your
| shares of) the company to the public... the reason people
| still value the company is discounted _future_ returns.
|
| So if you want to generate such returns with _cash_ (i.e.
| profit) it can take quite a bit longer.
| diogopublio wrote:
| Maybe that is a bit too much
|
| $0.01 per share would mean a roughly 6.5M USD valuation
| (current mkt cap is 65Bn). Accounting for dilution in
| investment rounds, let's say 4 x 20% dilution, that is
| around a 52% penalty in valuation. Roughly, their entry
| price would be around a 3-4M USD valuation. I am not saying
| in any way that this is a low return. Also, I may be wrong
| on my calculation, so please feel free to correct me! ; )
| JumpCrisscross wrote:
| > _dilution in investment rounds_
|
| Using share prices sidesteps dilution, which is a
| problem when one linearly scales valuation increases to
| wealth gains.
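|
| A toy example of that point (the share prices echo the Airbnb
| figures above, but the share count and the four-rounds-of-20%
| dilution are made-up assumptions, not the actual cap table):
|
|     entry_price = 0.01        # price paid per share
|     exit_price = 102.0        # later price per share
|     shares_held = 1_000_000   # the shares you bought don't change
|     per_share_multiple = exit_price / entry_price     # 10200x
|     gain = shares_held * (exit_price - entry_price)   # ~$102M
|     # Dilution issues new shares to later investors; your share
|     # count and per-share multiple are unaffected, but the
|     # fraction of the company you own shrinks, e.g. four
|     # hypothetical rounds of 20% dilution each leave you with:
|     ownership_retained = 0.8 ** 4                     # ~0.41
|     print(per_share_multiple, gain, ownership_retained)
|
| So quoting the $0.01 -> $102 per-share move already bakes
| dilution in; what dilution changes is how much of the company's
| headline valuation growth maps to any one early investor's stake.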
| WastingMyTime89 wrote:
| Yes, it's pure marketing and totally disingenuous. It's
| like being called OpenAI while nothing is open. It's
| interesting research done by terrible people.
| jefftk wrote:
| > being called OpenAI while nothing is open
|
| https://github.com/openai/whisper is open
| thethimble wrote:
| These "terrible people" seem to have catapulted the world
| into a generative AI world.
|
| They genuinely believe they will build AGI and therefore
| becoming the world's most valuable company is a natural
| consequence.
|
| Whether this is possible/probable is a different story,
| but I think a capped profit structure makes logical sense
| for the company that is aiming to create AGI. Would you
| want this technology instead in the hands of a for-profit
| company?
| WastingMyTime89 wrote:
| It is a for-profit company in everything but name. That's
| my main complaint. It has Musk, Thiel and Livingston
| amongst its initial investors, all known as the greatest
| philanthropists of our time. /s
|
| I don't understand why they put this thin veneer on top
| of what they are doing. Maybe Thiel was burnt by the
| bad press surrounding Palantir and this is preventive
| damage control.
| awestroke wrote:
| It's literally a nonprofit
| ivalm wrote:
| It is not literally a non-profit. As far as legally
| recognized organizations go, it is a limited partnership with
| a now-minority ownership stake held by a non-profit.
| eternalban wrote:
| That means absolutely nothing to billionaire control
| freaks.
|
| If OpenAI's products become the next Googlie thing (and
| here I was worried about Cloudflare <smack head>) then
| these are the future _influencers_. This is society
| mainlined on TikTok levels of manipulation.
|
| Surely you have adapted to ChatGPT's requirements for
| interacting, have you not? There is a name for this:
| _social engineering_.
| adastra22 wrote:
| No, it is profit-capped. And even then only in the same
| sense that the US government is debt-limited.
| Der_Einzige wrote:
| Huggingface and maybe Stability catapulted us into that
| world. Not OpenAI
| ben_w wrote:
| Normal people[0] don't mention huggingface, they talk
| about Midjourney, Stable Diffusion, and ChatGPT by name,
| or the ideas generically.
|
| [0] Well, non-programmers at least: webcomic
| creators[1][2][3], news anchors[4], opinion piece
| columnist[5], and stand-up comedians[6]. Programmers also
| know about GitHub Copilot.
|
| [1] https://www.smbc-comics.com/comic/mountweazel
|
| [2]
| https://www.collectedcurios.com/sequentialart.php?s=1226
|
| [3] https://www.reddit.com/r/StableDiffusion/comments/10b
| j8jm/cl...
|
| [4] https://youtu.be/GYeJC31JcM0
|
| [5] https://mobile.twitter.com/CraigGrannell/status/16146
| 0352687...
|
| [6] Russell Howard, but I can't find the clip on youtube
___________________________________________________________________
(page generated 2023-01-23 23:01 UTC)