[HN Gopher] Nearly half of Nvidia's revenue comes from four myst...
___________________________________________________________________
Nearly half of Nvidia's revenue comes from four mystery whales each
buying $3B+
Author : mgh2
Score : 202 points
Date : 2024-08-31 17:42 UTC (5 hours ago)
(HTM) web link (fortune.com)
(TXT) w3m dump (fortune.com)
| jmclnx wrote:
| Going to be ugly when the AI bubble bursts
| kergonath wrote:
| It feels suspiciously like it's 1999 all over again.
| olderthandang wrote:
| No it doesn't. The economy then was actually good.
| deepfriedchokes wrote:
| And housing was cheap and plentiful.
| kergonath wrote:
| Not really compared to 1974, though. And I am sure in 25
| years time we will be complaining about how good we had
| it in 2024.
| ThunderSizzle wrote:
| If 2024 is the high point for a quarter century, we
| really are going to hit rock bottom as a world.
| azinman2 wrote:
| Everything is relative.
| kibwen wrote:
| Don't think of this as the worst year of your life. Think
| of it as the best year of the rest of your life. :)
| mbreese wrote:
| More like the AI winter from the late 80s.
| GaggiX wrote:
| If by "AI winter" you mean a period where AI will continue
| to be used for semantic search, moderation, translation,
| captioning, TTS, STT, context-aware grammar checking, LLM,
| and audio/image classification, then yes, it would be an
| "AI winter" where AI is used everywhere.
| mbreese wrote:
| I meant specifically the time in the late 80s when
| investment in AI collapsed because it was overhyped and
| caused the downfall of Lisp Machines. The AI field itself
| kept moving forward, but investment and grant funding was
| cut to almost nothing for a long time. It took a long
| time for the field to get to where it is now, but the
| hype cycle has been going back and forth for decades in
| AI.
|
| https://en.m.wikipedia.org/wiki/AI_winter
| GaggiX wrote:
| I know the '80s AI winter well; if the next one happens, it
| will be a winter in which AI is already used everywhere.
| hn_throwaway_99 wrote:
| I totally agree, but I feel like some comments are making the
| mistake of stating "this bubble will pop, which means it was
| all smoke and mirrors to begin with".
|
| The dot com bubble popped, but it's not like the Internet
| technologies that were launched then (and companies like
| Amazon and Google) weren't hugely impactful on all of society
| since then.
|
| I think the AI bubble will pop, and while I think there is a
| lot of nonsense hype about AI I still think AI's societal
| impact will only grow.
| philistine wrote:
| No one who is saying the bubble will pop thinks there is
| nothing behind it. That's the definition of a bubble: you
| always need soap and water to make it, but soap and water
| are commodities, not this special unicorn that will change
| the world.
| gitfan86 wrote:
| Beanie Babies, Tulips, NFTs, Web3 tokens were all
| obviously not going to change the world. The bubble was
| pure emotion and greed. All the cash inflows were
| speculation.
|
| Nvidia made 18 billion in profit last quarter, and
| expects to make 20 next quarter. That isn't speculation.
| stoperaticless wrote:
| It's going to be nice to get cheap GPUs.
| Ekaros wrote:
| They might even put a reasonable amount of RAM on reasonably
| priced models... Why can I not get 16GB on some 700 EUR
| gaming GPU? I can get a CPU + mobo + 32GB of RAM for around
| the same... I just hope this intentional kneecapping ends so
| I can get something that can be used for a few years.
| HDThoreaun wrote:
| Margins on datacenter GPUs will probably always be better
| than consumer ones. As long as that's the case, they need to
| segment to stop datacenters from using consumer products, so
| I've got a feeling that you will never be able to buy a
| consumer Nvidia product with a reasonable amount of RAM.
| Maybe Intel will release one to get some hype for their GPU
| line?
| loa_in_ wrote:
| Does that mean that datacenter hardware might be a
| cost-effective option for making a home lab PC soon?
| HDThoreaun wrote:
| Probably better to just throw your workload on the cloud
| unless you're using it close to 24/7.
| philistine wrote:
| A GPU is an accessory to the real product you're buying:
| a driver to interface with your software. Datacenter GPUs
| have drivers that are woefully inadequate for gaming.
| gopher_space wrote:
| I'm just thinking that maybe trading a decent used sedan
| for a slightly shinier ARPG isn't the wisest move I could
| be making.
| eropple wrote:
| If you can't use a "reasonably priced model" GPU for "a few
| years", I'm really confused as to what you're doing. I know
| people still using 1080's and 1080Ti's and playing pretty
| much anything they want to, and I only just upgraded from a
| 2070 Super to a 7800 XT (with 16GB of RAM on it, even) this
| summer.
| 0cf8612b2e1e wrote:
| I assume the consumer GPU and data center products have
| minimal overlap. If NVidia never sold another server product,
| would that really impact consumers all that much?
| keyringlight wrote:
| There isn't infinite production/packaging capability, and
| they're going to prioritize the customers willing to pay
| more for the chips they get out of a wafer. Another aspect
| is that the chips are different between compute and
| consumer, as opposed to something like a Zen chip where it
| can be used in either Epyc or Ryzen.
| Devasta wrote:
| I don't know; when bitcoin crashed there was a flood of
| clapped-out, wrecked GPUs on the market, but nothing that
| I'd risk buying.
|
| Same'll happen here.
| joezydeco wrote:
| Is it possible to use an H100 as a gaming GPU? That would be
| neat to see.
|
| Oh. It's been tried: https://www.pcgamer.com/nvidias-ultra-
| expensive-h100-hopper-...
| porphyra wrote:
| I have no idea what I would do if cheap GH200s started
| showing up on eBay. They would probably need some crazy
| cooling and interconnect to get working. I guess it would
| be the ultimate "localllama" machine.
| eli_gottlieb wrote:
| It's going to rock for gamers.
| marcosdumay wrote:
| I wouldn't bet on bulk computation not being in high demand
| for the foreseeable future.
|
| If the AI bubble bursts, people will use the available GPUs for
| something else.
| hn_throwaway_99 wrote:
| > If the AI bubble bursts, people will use the available GPUs
| for something else.
|
| Yes, of course, but that just means that this bubble would be
| basically identical to previous capital intensive bubbles.
| For example, there was a railroad bubble in the 1800s, and a
| massive telecom bubble in the late 90s. These bubbles popped,
| resulting in massive corporate bankruptcies and failed
| companies. But the infrastructure they built (miles and miles
| of railroad and dark fiber, which has since been lit up) laid
| the foundation for huge economic development shortly
| thereafter.
| philistine wrote:
| Something like 90% of the railroad built during the 1800s
| bubble has been decommissioned. It provided no sustainable
| economic benefit, as most of it was last-mile railroad that
| quickly got consolidated into trunks.
|
| If the US had maintained and kept the rail it built, it
| wouldn't have the poor infrastructure it has right now.
| marcosdumay wrote:
| The relevant part is that no, the steel industry didn't
| break when the railroad bubble burst.
|
| Nvidia is not the train company in that scenario.
| bogwog wrote:
| I hope the bankruptcies start soon so I can buy me some
| H200s for cheap on eBay
| 2OEH8eoCRo0 wrote:
| It's a shame all this compute is being built and none will
| trickle down. It would be fun to hack on this stuff as a
| hobbyist once it's sold for peanuts.
| wmf wrote:
| I assume used V100/P100s are on eBay. Go buy them and report
| back.
| Zamicol wrote:
| I'm confused by this sentiment I've seen repeated by some.
|
| AI/LLMs are radically expanding my abilities, and as I adapt to
| this new power, I'm using it more frequently in everyday life.
|
| Sure, Nvidia stock may be overpriced, but AI is empowering. I
| can't imagine not continuing to expand its use. As its
| abilities expand, I'll use it even more, and I'll get even
| more use out of it as the remaining bugs are fixed and
| integrations become more frictionless.
| tail_exchange wrote:
| Probably because not everybody is feeling this productivity
| boost. AI made me a bit more productive, yes, but not by
| that much. Seeing you call it a "new power" is not
| relatable, and that may reinforce the idea that it is a
| bubble.
| VirusNewbie wrote:
| Right but I feel similarly about excel/sheets and
| powerpoint. But they make almost every office worker a bit
| more productive, so it's a good market.
| tail_exchange wrote:
| I think these can both be true. If AI makes people 3%
| more productive overall, cumulatively that's a huge
| improvement, but on an individual level it may feel
| undeserving of the hype.
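The aggregation argument above can be made concrete with some back-of-the-envelope arithmetic. This is illustrative only; the workforce size and average output figures are rough assumptions, not numbers from the thread:

```python
# Illustrative arithmetic: how a 3% per-worker productivity gain
# aggregates across a large workforce. Workforce size and average
# annual output are rough assumptions for the sake of the sketch.

gain = 0.03                  # 3% individual productivity improvement
workers = 100_000_000        # assumed number of affected workers
output_per_worker = 100_000  # assumed annual output per worker, USD

aggregate = gain * workers * output_per_worker
print(f"Aggregate annual gain: ${aggregate / 1e9:.0f}B")  # $300B
```

A small per-person gain that "feels undeserving of the hype" still sums to hundreds of billions per year under these assumptions.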
| m463 wrote:
| That seems like saying "web advertising never shows me what
| I want to buy"
|
| Maybe it is not for you. Maybe it is for people asking AI
| questions about you. (or chemistry or gold prospecting or
| legal documents or ...)
|
| The Eric Schmidt talk made it seem like better hardware led
| to better results and that there was a race.
| pastaguy1 wrote:
| I suppose people might be wondering what happens when this
| small number of big players finish their buildouts.
| nerdponx wrote:
| This. AI itself might be here to stay, but how much
| _revenue growth_ specifically is left?
| gitfan86 wrote:
| People got burned by crypto: they were promised it would
| replace fiat money, but all that happened is that they lost
| all their money investing in NFTs or Web3. So now they are
| jaded against any new hyped technology, have no interest in
| investing, and are actively hoping it fails for FOMO
| reasons.
| oceanplexian wrote:
| You can't throw a stone very far without running into an
| IRL business or ATM that takes crypto (in my small town
| there are many); Congress is writing laws to legalize it, a
| presidential candidate is running on it, and the Fed is
| creating a "coin".
|
| When businesses stop accepting dollars and your employer
| starts compensating you in crypto will it stop being a
| "scam" or will the goalposts move again?
| AgentOrange1234 wrote:
| Curious: what year do you imagine that will be?
|
| (I don't expect to see it in my lifetime.)
| delfinom wrote:
| Eh, anything AI is clearly in a massive bubble.
| IshKebab wrote:
| How is it going to burst exactly? Are ChatGPT and Google
| Assistant going to suddenly stop working?
| cdelsolar wrote:
| There's no such thing as an AI bubble. That's like saying
| "going to be ugly when the car bubble bursts" back in the
| early 1900s.
| saurik wrote:
| So, I feel like your argument is "AI is useful, like cars,
| so there won't be a bubble"; but like, I think we must all
| agree that the Internet is useful, and yet there certainly
| was the ".com bubble". We've occasionally had real estate
| bubbles, and I do in fact believe there was a car bubble in
| the 1920s.
| m463 wrote:
| is it a bubble? or is it the singularity? (only half joking)
| worstspotgain wrote:
| https://archive.is/zHMO5
| worstspotgain wrote:
| > Although the names of the mystery AI whales are not known, they
| are likely to include Amazon, Meta, Microsoft, Alphabet, OpenAI,
| or Tesla.
|
| I hope they're not saving their best chips for the likes of
| Tesla/Grok. That'd be a PR nightmare if and when it leaks.
| transcriptase wrote:
| Yeah, can you imagine a business selling to the bad space man.
| Don't they know they're supposed to hate him?
| kergonath wrote:
| Well, he _is_ known for stiffing his contractors and
| creditors.
| IncreasePosts wrote:
| No, he isn't.
| albumen wrote:
| Did he end up paying all these guys?
|
| https://arstechnica.com/tech-policy/2023/09/musks-unpaid-
| bil...
| IncreasePosts wrote:
| Whether he did or not is not the same as saying he is
| "known for it".
|
| If you asked 1000 random people to say what they know
| about Elon musk, what percent do you think will say "oh.
| You mean the guy that doesn't pay vendors!"
|
| Also. What is the base rate for contract disputes with
| vendors among large companies? He runs 3 large companies,
| surely with tens of thousands of contracts for services.
| There will always be disputes there - is his rate higher
| than average? Does he lose every dispute in court?
| asadotzler wrote:
| There are plenty of articles calling his companies out
| specifically and few calling other similar companies out
| so you can take your what ifs somewhere else.
| IncreasePosts wrote:
| Maybe because anything "Elon musk", including what he
| randomly tweets on the toilet, is news, but a random
| contract dispute with GE and a vendor is not news.
| slater wrote:
| He is.
| asadotzler wrote:
| He is. Read the web (and the room) friend.
| BadHumans wrote:
| No, it would not. Nvidia has never been the good guy in the
| eyes of the public, and most people buy Nvidia because they
| are better than the competition. Getting in bed with Elon
| would just be seen as a capitalist company doing capitalist
| things.
| mrweasel wrote:
| Yeah, I don't really see what you'd buy in place of Nvidia.
| Either you're huge and have the funds to do your own chips,
| or you're stuck buying Nvidia, or maybe you do both.
| marginalia_nu wrote:
| Given their near monopoly on the products they're selling,
| anti-competitive behavior like that would actually be a
| pretty big liability.
| rsynnott wrote:
| Okay, I mean I feel like they're not that mysterious. Like, there
| are probably only five or six candidates.
| kergonath wrote:
| And regardless of who the 4 are, the two others in that list of
| six candidates are most likely not too far behind.
| gpm wrote:
| The most interesting thing to discover would be if one of
| them _is_ that far behind, because they're succeeding on
| their own (or someone else's non-Nvidia) silicon.
|
| The public in-house projects that I'm aware of (but as far as
| I know haven't fully replaced demand for Nvidia GPUs)
| include:
|
| - Google's "TPU" (in production, publicly rentable)
|
| - Amazon/AWS's "Trainium" (in production, publicly rentable)
|
| - Meta's "MTIA" (in production)
|
| - Microsoft's "Maia 100" (I'm unclear on their status)
|
| - Tesla's "D1" (I'm unclear on their status)
| HarHarVeryFunny wrote:
| xAI 100K H100 cluster would be ~$2B
|
| Is this trailing-year NVIDIA sales, or the order book?
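The cluster estimate above is simple multiplication; as a hedged sketch, where the per-unit H100 price is an assumption (reported 2024 prices ranged very roughly $20k-$40k depending on volume), not a figure from the thread:

```python
# Back-of-the-envelope cluster cost, GPUs only -- no networking,
# power, or facilities. The per-unit price is an assumption.

def cluster_cost(num_gpus: int, price_per_gpu: float) -> float:
    """Hardware-only estimate: number of GPUs times unit price."""
    return num_gpus * price_per_gpu

# The comment's ~$2B for a 100K-H100 cluster implies roughly a
# $20k unit price; higher unit prices push it toward $4B.
print(cluster_cost(100_000, 20_000) / 1e9)  # 2.0 (billions USD)
print(cluster_cost(100_000, 40_000) / 1e9)  # 4.0 (billions USD)
```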
| jsnell wrote:
| The numbers in the article? They're neither. They're the
| revenue for just Q2, not the full trailing year.
| jsheard wrote:
| Are these distinct architectures, or is it an ARM situation
| where nearly everyone is gluing the same IP cores together
| in slightly different configurations?
| taktoa wrote:
| Definitely not an ARM situation.
| zaphar wrote:
| I'm not sure about all of them but Google's TPU is custom
| to them and not shared architecture.
| pclmulqdq wrote:
| They are distinct architectures, but they mostly do the
| same thing. Pretty much all of them have a few small
| control cores driving matrix-multiply and vector-reduction
| units. The instruction set on each of them is different,
| but the broad strokes of the architecture are the same.
| rajman187 wrote:
| MTIA will be for inference initially. Another to add to the
| list is wafer-scale chip maker Cerebras
|
| https://www.forbes.com/sites/craigsmith/2024/08/27/cerebras
| -...
| solidasparagus wrote:
| It's not just the hardware, it's the software stack too and
| my understanding is that they aren't very good. Even TPUs
| aren't great if you aren't either (1) doing something
| extremely standard and a little bit old (e.g. not forefront
| of research and the stack has already been optimized for
| your model) or (2) in Google with access to the people who
| build the stack.
|
| Maybe it is working for Meta or Tesla where things can be
| vertically integrated, but for the public clouds, they have
| to buy NVIDIA for their customers.
| bradleyjg wrote:
| Does TSM have the capacity to scale up anything new right
| now?
| ninetyninenine wrote:
| Let's name them, and why?
| jmathai wrote:
| Anyone building the largest of LLMs including Alphabet, Meta,
| OpenAI, Anthropic
| spwa4 wrote:
| I think you should probably add all large cloud providers.
| Amazon and Microsoft should be in the list.
| azinman2 wrote:
| And we know OpenAI uses Microsoft for GPUs. My guess is
| Anthropic similarly doesn't own its own data centers;
| didn't they get a bunch of money from Google? It's
| probably being spent there.
| alephnerd wrote:
| They use AWS.
| VirusNewbie wrote:
| No they don't.
| qwertox wrote:
| I doubt Anthropic builds their own GPU datacenter.
|
| They might buy some, but I think that Google, Meta,
| Microsoft and Amazon will be the ones buying in large
| batches to enable companies like Anthropic (and themselves)
| to scale up to worldwide inferencing demand, as well as
| generally offering the most efficient GPUs to their
| customers.
| jmathai wrote:
| Think they're renting GPUs from the cloud providers?
|
| Very plausible. I'm not sure at which point it makes
| economic sense to buy the GPUs and build out the
| infrastructure to continually be training something like
| Claude.
| alephnerd wrote:
| > Think they're renting GPUs from the cloud providers
|
| It's a major reason why they raised with Amazon [0]
|
| There are actually a LOT of other large companies that
| participated in Anthropic's round but haven't announced
| it publicly.
|
| [0] - https://www.aboutamazon.com/news/company-
| news/amazon-anthrop...
| VirusNewbie wrote:
| They train on GCP now.
| bigyikes wrote:
| They also raised with Google, interestingly.
|
| https://www.wsj.com/tech/ai/google-commits-2-billion-in-
| fund...
| danjl wrote:
| Oh, I would not forget governments. The NSA basically paid
| for Kepler with a single purchase...
| marcosdumay wrote:
| I believe the GP refers to Google, Facebook, Amazon and
| Microsoft.
|
| And while yeah, that's probably them, I'd put a non-trivial
| chance on some government intelligence organization making
| it into the top 3.
| ghshephard wrote:
| I'm guessing a massive amount is for inference for
| WhatsApp, and the original goal was making Instagram
| recommendations relevant - and of course the massive Llama
| model training - my guess is Facebook is a relatively small
| component of Meta's overall use of GPUs. Feed
| recommendations? (Unless you were using Facebook as a
| holder for Meta?)
| vineyardmike wrote:
| It's absolutely not WhatsApp. It's their recommendation
| engines. They've publicly stated they're buying enough
| GPUs to have the spare capacity to train another "reels"
| sized product for when the opportunity emerges.
|
| (They absolutely use it as a holder for Meta)
| miki123211 wrote:
| ANd possibly some less-known companies, as fronts for the
| Chinese.
| alephnerd wrote:
| > less-known companies, as fronts for the Chinese.
|
| Not at that size. That is VERY on the nose sanctions
| evasion.
|
| Such sanctions evasions tend to use multiple smaller
| parties doing purchases and then reselling.
| alephnerd wrote:
| > I'd put a non-trivial chance of some government
| intelligence organization to make it into the top 3
|
| Most likely DoE. TLAs purchase indirectly (or use other
| federal agencies in the DoD as a front)
| 7thpower wrote:
| Companies like CoreWeave, which lease accelerators, may make
| this analysis less straightforward than it appears.
| jeffbee wrote:
| Even less straightforward, since Nvidia funds CoreWeave.
| nosefurhairdo wrote:
| > Although the names of the mystery AI whales are not known,
| they are likely to include Amazon, Meta, Microsoft, Alphabet,
| OpenAI, or Tesla.
| terafo wrote:
| Why mention Microsoft twice?
| mensetmanusman wrote:
| The departments aren't talking, so they accidentally made
| two orders.
| fnordpiglet wrote:
| I'd note Microsoft needs OpenAI a hell of a lot more than
| OpenAI needs Microsoft. I'd actually pivot that to be why
| mention OpenAI twice.
| FridgeSeal wrote:
| OpenAI doesn't operate without the enormous amounts of
| funding MS gives it.
| adwi wrote:
| I think a lot of institutions and people would love the
| chance to give them money.
| noirbot wrote:
| But how many of them have hot data centers to offer?
| Google is a direct competitor, so Oracle or Amazon are
| kinda the only other two big options to offer them what
| MS is right now.
|
| If MS drops OpenAI, it's not like they can just
| seamlessly pivot to running their own data centers with
| no downtime, even with pretty high investment.
| amluto wrote:
| How so? As far as I can tell, Microsoft has a large
| equity interest in OpenAI, and OpenAI has a lot of cloud
| credits usable on Microsoft's cloud. I don't think those
| credits are transferable to other providers.
| jmathai wrote:
| It will be interesting to see how AI opportunities evolve and if
| open source models will play the same role as the public
| infrastructure of the dotcom boom did.
|
| Or if closed models will dominate. For example, by the largest
| companies leveraging their existing distribution channels and/or
| acquiring promising startups.
| kuon wrote:
| I have a few customers using AI, and they are asking me to
| build self-owned AI servers running open-source models. With
| about $20k you can have your own little AI beast and do a
| lot with it.
|
| They do this because proprietary AI models are not flexible
| enough and lack a lot of APIs.
|
| For example, one app I wrote was to analyze scans of old maps
| and use generative AI to extrapolate and create animations.
|
| I don't know where the market will go. But my feeling is that
| large proprietary models are very good at a very limited type
| of work and that open source will provide diversity.
| djaouen wrote:
| The consolidation of progress into the hands of a few should be
| fought against at (almost any) cost. Run Linux!!
| ant6n wrote:
| But then how will I run MS Teams, Office 365, and OneDrive?
|
| (I'm only partially kidding, sigh...)
| dgfitz wrote:
| If you're serious, they all have web apps. I use them on my
| linux box all the time. Ripped the certs off my corp. laptop.
| tomrod wrote:
| Teams: several options
|
| Office 365: several options
|
| OneDrive: several options
|
| Check out https://github.com/awesome-selfhosted/awesome-
| selfhosted
| matrix2003 wrote:
| 95% of corporate environments: 1-2 options. Windows or
| macOS for the chosen few.
|
| At least the overlords I have worked for don't allow us
| access, and the machine is locked down so hard that it's
| borderline unusable.
| softwaredoug wrote:
| Those whales - probably major cloud vendors - likely have the
| resources to develop their own hardware at some point.
|
| Right now it's "buy GPUs at any cost". If things slow, there will
| be a chance for these customers to consider how to optimize this
| cost. NVIDIA can't rest on its laurels like Intel did with x86.
| HDThoreaun wrote:
| Google did this a decade ago. Amazon has a CPU but I think
| no GPU yet, although I'm sure it's being worked on. The
| problem for them is the CUDA moat: their hardware is mostly
| used for inference because no one trains on non-Nvidia
| hardware.
| JoshTriplett wrote:
| > The problem for them is the CUDA moat.
|
| Compared to the problem of developing a new cutting-edge GPU,
| building a CUDA compatibility layer is a much smaller
| problem. Hire the author of ZLUDA, throw a small team at it,
| and have a legal department on standby. And separately,
| there'd also be value in some source-translation projects to
| help people migrate to some better native framework.
| vidarh wrote:
| Both in terms of hardware and software, they need to be
| able to support the small set of operations that their
| training or inference uses, so the problem on both sides is
| much smaller than both a full GPU and a full CUDA
| replacement.
| codedokode wrote:
| Why support proprietary libraries? Isn't it better to make
| an open-source library?
|
| Also, as for AMD: as I understand it, they are unwilling to
| make ML libraries for consumer-grade GPUs and the GPUs
| built into CPUs.
| JoshTriplett wrote:
| An Open Source compatibility layer for CUDA to allow
| people to run on non-NVIDIA GPUs is a first step; it
| removes the lock-in between GPU and library. Once people
| can run all their existing software on a different GPU,
| they can then consider adopting a better standard to
| build on top of.
|
| The reverse approach, of trying to entice people to move
| from CUDA to a different library and switch GPUs at the
| same time, has been tried repeatedly and has not yet
| succeeded. Trying something different seems warranted.
| Ekaros wrote:
| I don't really understand how CUDA is a proper moat... You
| can scale software engineers much more readily than a
| hardware supply chain. And for your own models, it should
| not be impossible to train your staff to use your own layer
| instead.
| HDThoreaun wrote:
| Im inclined to agree, I dont think it's a sustainable moat.
| But AMD and intel have been trying to break through and
| have had minimal success so far. Until someone actually
| releases a CUDA competitor that is used the moat exists.
| Ekaros wrote:
| The critical thing here, in my mind, is that it's not so
| much one general moat as a lot of individual moats. Each of
| these companies investing billions can build their own.
| Easily.
|
| So the most probable end result is that we end up with
| multiple competing alternatives, each with its own vendor
| lock-in. And the general public might be lucky to get one
| or two options.
| xnyan wrote:
| >it should not be impossible to train your staff to use
| your layer instead.
|
| First, it's the classic chicken and egg problem. Why would
| you invest in a CUDA alternative when you're going to be
| using nvidia hardware anyway?
|
| Second, something can be not impossible but still quite
| difficult. As AMD and Intel have shown, creating a GPGPU
| API for your hardware that people want to use is not a
| trivial task, and to date they have not managed to do it.
|
| Lastly, this must just be differences in our experiences
| with corporate management, because mine has been that in
| general they would always prefer to spend on stuff over
| headcount if said stuff reduces the headcount required.
| tester756 wrote:
| >You can scale software engineers much more
|
| People do not scale.
| gpm wrote:
| Amazon/AWS has Trainium: https://aws.amazon.com/machine-
| learning/trainium/
| yyyfb wrote:
| What if having the resources to develop hardware is not the
| point? This is a physical business, and supply chain is the
| bottleneck at some point. Right now it seems that all the money
| in the world can't build fabs fast enough to manufacture
| alternatives to Nvidia's chips. As long as they maintain
| dominance over the supply chain, having developed equivalent
| technology might not matter. Someone correct me if this is
| wrong, I'm mostly speculating.
| I_AM_A_SMURF wrote:
| I mean, it's all TSMC at the end of the day. They have
| finite capacity that's all tied up for a while.
| Ekaros wrote:
| The future order book is the interesting part. Does Nvidia
| have as big a share as they do now, or has someone else
| paid more and outbid them?
| stevebmark wrote:
| Pretty good indicator of the nonsense hype bubble of AI!
| dboreham wrote:
| I'm AI-positive (now), but yes this sounds like a chip bubble.
| NVIDIA seem to be good at chasing these bubbles -- first crypto
| mining, now AI. It wouldn't surprise me to find one of the
| major buyers is a speculator (hedge fund led by crypto bros,
| for example).
| findthewords wrote:
| I've seen gluts not followed by shortages, but I've never
| seen a shortage not followed by a glut.
|
| - Nassim Nicholas Taleb, Twitter, 2021-09-11
|
| https://x.com/nntaleb/status/1436776641536090117
| cdchn wrote:
| I'm crypto bearish and AI-neutral but it seems less to me
| like NVIDIA chasing bubbles and more like new and interesting
| applications for the type of compute that NVIDIA offers keep
| emerging.
| qwertox wrote:
| It's not like AI hasn't been delivering during these past 3
| years, and it's just getting started.
|
| There's no one stealing market share from Nvidia at the moment.
| Groq and Tenstorrent are extremely promising, but both are
| still private companies. Once Groq goes public, Nvidia will
| tank a bit for a while as all the "experts" announce the end
| of Nvidia. I wouldn't be surprised if Nvidia would then
| also sell specialized AI accelerators, if they find that
| segment attractive enough due to losses in general GPU demand
| created by those companies.
| philistine wrote:
| What has it been delivering? Who's making money from this
| stuff?
|
| To quote Steve Jobs when talking to Dropbox: you guys don't
| have a product, you have a feature.
| CamperBob2 wrote:
| (Shrug) Jobs is dead, and Dropbox has an $8B market cap.
| Lammy wrote:
| NSA are surely one of them.
| dgfitz wrote:
| The amount of ignorance surrounding that agency on this
| forum is truly astounding.
| Lammy wrote:
| Hoid it through the grapevine
| mepian wrote:
| They were one of the first HDTV customers, along with NGIA and
| NRO.
| jajko wrote:
| Don't they outsource work to places like Palantir? Although I
| can easily imagine bosses of these 3 letter agencies scrambling
| over each other in another glorious fit of FOMO in their
| internal race of 'who can model every single human on earth
| better'
| alephnerd wrote:
| TLAs procure indirectly for obfuscation and regulatory reasons.
| They wouldn't directly make purchases at this size.
| deepsquirrelnet wrote:
| I think Meta has been pretty transparent about their GPU
| purchases[1]. 350k H100s should go pretty far into the billions.
|
| https://blogs.nvidia.com/blog/meta-llama3-inference-accelera...
| RiverCrochet wrote:
| Let's play "Guess the whales!"
|
| N _ _
|
| _ B _
|
| _ _ A
|
| _ H _
| asah wrote:
| low quality discussion, lots of ignorant speculation... I wonder
| if there's a way to analyze HN discussions to measure "quality"?
| jmilloy wrote:
| One indicator that I find reliable is simply whether the
| comments exceed the upvotes.
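The heuristic above is easy to mechanize. This is just a sketch of the commenter's rule of thumb (comment count exceeding the score), not HN's actual ranking logic, and the example counts are hypothetical:

```python
# Sketch of the suggested quality heuristic: flag a thread as likely
# low-quality when its comment count exceeds its score.

def looks_low_quality(score: int, num_comments: int) -> bool:
    """True when comments outnumber upvotes."""
    return num_comments > score

# Hypothetical examples (comment counts not taken from a real thread):
print(looks_low_quality(score=202, num_comments=260))  # True
print(looks_low_quality(score=202, num_comments=150))  # False
```

The intuition behind the rule: heated arguments generate many replies without attracting proportionate upvotes.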
| mepian wrote:
| Apparently HN downranks posts using the same indicator.
| 2OEH8eoCRo0 wrote:
| Would it be legal for Jensen to be one of them?
| yieldcrv wrote:
| Fun note: you can structure options and "RSUs" with
| allocations of products - RSU in quotes because the S
| stands for stock and you won't be giving shares
|
| One benefit of non-securities underlying assets is that you can
| play with their pricing a lot more. like, you can have your
| friends vesting on some shoes you control the issuance of - or
| GPUs in this case - at a 99% discount and there's no reporting
| or regulation to a government over this. Big problem to do that
| with shares.
| wmf wrote:
| There's zero evidence of this. He's cashing out $1B per year
| but round-tripping that money back into Nvidia would just cause
| problems for him.
| potatoman22 wrote:
| I'm curious, why would he be one of them?
| brcmthrowaway wrote:
| It'd be crazy if one of them is RenTech
| linotype wrote:
| I'm kind of shocked that we're not seeing any new consumer GPU
| products from them. It's like they're content to just give that
| market away to AMD.
| jsheard wrote:
| Rumor is that AMD's RDNA4 will only span the low-to-mid
| range, with no new flagship until RDNA5, so if anything they
| are the ones ceding the (high-end) market to Nvidia.
| wmf wrote:
| Nvidia is on a two-year schedule and the 5090 should launch
| later this year.
| linotype wrote:
| What about more affordable cards? Early 2025?
| danjl wrote:
| Standard rollout process over the rest of the year, same as
| the last generations
| ruune wrote:
| Looking at the release dates from RTX 4000 [0] it's likely,
| yes
|
| [0] https://en.wikipedia.org/wiki/GeForce_40_series
| wmf wrote:
| Honestly, I predict none of the cards will be affordable.
| xboxnolifes wrote:
| Define affordable. What minimum specs and what price point?
| nabla9 wrote:
| Nvidia's grasp of desktop GPU market balloons to 88% -- AMD has
| just 12%, Intel negligible, says JPR
| https://www.tomshardware.com/pc-components/gpus/nvidias-gras...
| mosquitobiten wrote:
| Shocked? It's pretty clear they dominate the market, they set
| the price/perf and release windows. AMD gave up being a
| disruptor and just follows them.
| greenthrow wrote:
| Very sustainable, very cool. Awesome market economy we have guys.
| andsoitis wrote:
| According to Observer they are: Microsoft, Meta, Google, and
| Amazon.
|
| Other big buyers are: Oracle, CoreWeave, Lambda, Tencent, Baidu,
| Alibaba, ByteDance, Tesla, xAI.
|
| https://observer.com/2024/06/nvidia-largest-ai-chip-customer...
| purplerabbit wrote:
| "Who could it be... Hmm... Such a tough nut to crack..."
|
| (Even without a report on this it would be obvious)
| paulddraper wrote:
| Utterly surprising
| darth_avocado wrote:
| Meta can be confirmed as one since they've literally mentioned
| their infra investments and Billions in capex increases until
| the end of 2025 in every earnings call this year.
| bhawks wrote:
| I guess Apple is using their custom silicon?
|
| That was a major payoff for Apple - I wonder if any of the
| other fangs will actually be able to follow suit.
| xuancanh wrote:
| Apple uses TPUs on Google Cloud Platform.
| https://www.cnbc.com/2024/07/29/apple-says-its-ai-models-
| wer...
| ceejayoz wrote:
| And a weird deal with OpenAI (which I think would show up
| as Microsoft for the actual physical hardware).
| https://openai.com/index/openai-and-apple-announce-
| partnersh...
| SSLy wrote:
| for training, but for inference they apparently use their own
| chips
| bigyikes wrote:
| Is there evidence that Apple is training a model large enough
| to require a huge amount of compute?
| kklisura wrote:
| Where is Apple in all of this?
| seaal wrote:
| Apple historically dislikes NVIDIA and would likely rather use
| their own in-house chip team. They also rely on NVIDIA
| indirectly by virtue of using OpenAI in the upcoming iOS
| release.
| zigzag312 wrote:
| They don't like the high margins? :P
| m463 wrote:
| I wonder if the split happened with jobs or after jobs? I
| thought jobs was good at relationships with everyone else
| in silicon valley (intel, ati, nvidia, even microsoft)
| bayindirh wrote:
| IIRC it was with Jobs. Apple wanted to develop their own
| drivers for their chips from ground up, and NVIDIA was
| very secretive of their tech, so things went south.
| delfinom wrote:
| Apple dropped Nvidia after a few years of Nvidia
| falsifying thermal specifications on GPU chips.
|
| It drove Apple crazy, both with the high failure rate of
| MacBooks where the GPU was desoldering itself and the general
| problem of a hot as fuck bottom. Nvidia also refused to pay
| out for damages to Apple, from what I recall.
| jayd16 wrote:
| They're shipping queries off to ChatGPT so I guess this ends
| up as nVidia cards on Azure?
| thereisnospork wrote:
| Kind of embarrassing for Google to be on that list, no?
| Shouldn't their in-house TPUs be cost-advantageous for them?
| bayindirh wrote:
| No, because GPUs are not only for AI. They are MATMUL
| machines, and MATMUL is useful way beyond AI and tensor
| applications.
|
| Some of us use them in double-precision mode.
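To sketch what "MATMUL machine in double precision" means in practice: the same GEMM operation simply runs on float64 operands. The example below uses NumPy on the CPU for portability; on an Nvidia GPU the equivalent call maps to a cuBLAS DGEMM (e.g. via CuPy with the same dtype), and FP64 throughput is the spec HPC buyers care about:

```python
import numpy as np

# Double-precision GEMM: C = A @ B with float64 operands.
# Consumer GPUs run FP64 at a small fraction of their FP32 rate,
# which is one reason non-AI HPC workloads also chase the
# datacenter parts.
rng = np.random.default_rng(0)
A = rng.standard_normal((512, 512))  # float64 by default
B = rng.standard_normal((512, 512))
C = A @ B
print(C.dtype, C.shape)  # float64 (512, 512)
```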
| nomad_horse wrote:
| Those are likely for Cloud, used by clients.
| bluecalm wrote:
| From what I remember, public companies have to disclose any
| customer responsible for more than 10% of their revenue on
| their 10-K, so those won't be "mystery whales" for long.
| ballenf wrote:
| Surely an intelligence agency or two would be big buyers. They
| could lease but that might have security implications they're not
| comfortable with.
| bfung wrote:
| Clickbait title. Even the article cites the number from Jensen's
| interview.
|
| https://youtu.be/NC5NZPrxbHk?si=8uQ4zdMU02f4X1Hc (at 1:41)
|
| Hyperscalers & Meta.
|
| (Corp speak 101: Hyperscalers = AWS, GCP, Azure)
| uptownfunk wrote:
| What happens when the new models come out and the data centers
| are full of old models to be decommissioned. Would love to buy a
| huge amount of h200 once they have become "obsolete"
| Havoc wrote:
| Yeah planning to get out of NVIDIA shares after Blackwell.
|
| There is also an alarming (as a shareholder) rise in custom
| silicon: Groq, SambaNova, Cerebras, etc.
| gradus_ad wrote:
| Two questions that need answering:
|
| 1. Are chatbots going to get much more effective than they
| already are? It seems like all the major players are plateauing
| and the different models are becoming commoditized. That doesn't
| bode well for sustainable GPU sales. Also if the hallucination
| problem can't be solved, it's not clear that this generation of
| AI will ever be deployable at scale.
|
| 2. Are there genuine at-scale use cases for AI outside of LLMs?
| Autonomous navigation seems like a major one, but I'm not sure
| how close that is to production ready. I know drug discovery and
| other applications are talked about, but not sure how much GPU
| consumption they can realistically generate. As we leave the
| novelty phase of the adoption curve, it's clear that a lot of the
| use of the image generators was unsustainable experimentation. My
| personal experience has been, a year ago my friends were creating
| tons of images but now we hardly do at all.
| aucisson_masque wrote:
| > Nvidia's net income margins are staggeringly high, with $5.60
| out of every $10 of revenue
|
| Competition when? Are AMD, Intel, or other companies in a
| position to eat some of Nvidia's insane margin?
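The quoted figure works out to a 56% net margin. A quick back-of-the-envelope (the 20% "typical" semiconductor margin below is an illustrative assumption, not a sourced number) shows why that margin is such a fat target for competitors:

```python
# $5.60 of net income per $10 of revenue, per the article.
net_income_per_10 = 5.60
nvidia_margin = net_income_per_10 / 10.0
print(f"Nvidia net margin: {nvidia_margin:.0%}")  # 56%

# A hypothetical rival earning a more typical ~20% net margin
# would need 2.8x the revenue to book the same profit.
typical_margin = 0.20
revenue_multiple = nvidia_margin / typical_margin
print(f"Revenue multiple needed: {revenue_multiple:.1f}x")  # 2.8x
```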
| bob1029 wrote:
| Going after this exact market is potentially a fool's errand.
|
| AMD and Intel would probably be better off researching entirely
| different approaches that they can leverage their existing
| expertise for - i.e., some architecture that relies heavily on
| efficient OoO processing pipelines and free (if predicted
| correctly) control flow changes. Techniques that are
| antagonistic to GPU processing could represent a competitive
| moat.
|
| Joining an existing rabbit chase right in the middle can
| quickly evolve into a catastrophic strategic choice when the
| cost of entry is billions of dollars.
___________________________________________________________________
(page generated 2024-08-31 23:00 UTC)