[HN Gopher] AI Is Catapulting Nvidia Toward the $1 Trillion Club
___________________________________________________________________
AI Is Catapulting Nvidia Toward the $1 Trillion Club
Author : impish9208
Score : 165 points
Date : 2023-05-26 12:16 UTC (10 hours ago)
(HTM) web link (www.wsj.com)
(TXT) w3m dump (www.wsj.com)
| EddieEngineers wrote:
| What are the companies supplying the components/materials to
| Nvidia and how are their stocks currently performing?
| Ologn wrote:
| Not a direct answer, but Nvidia manufactures at TSMC. Yesterday
| morning Nvidia's stock jumped on its Wednesday evening earnings
| announcement, and TSM jumped from $90.14 to over $100 alongside
| it.
| peppermint_gum wrote:
| CUDA is a success because 1) it works on all NVIDIA GPUs made
| since 2006 2) it works on both Windows and Linux.
|
| This may seem like a very low bar to clear, but AMD continues to
| struggle with it. I don't understand it. They act as if GPU
| compute were a fad not worth investing in.
| throwaway4220 wrote:
| Seriously, knowing very little about such low level stuff, why
| is this taking so long? George Hotz is starting a company on
| this premise
| htrp wrote:
| Yet another example of how long-term platform investment pays
| dividends.
|
| And yet another reminder of how far behind OpenCL/AMD is.
| 11thEarlOfMar wrote:
| I'm getting very strong 1998 .COM vibes from AI.
|
| Replace Internet with AI in the following quote from the New York
| Times, November 11, 1996[0]:
|
| "For many people, AI could replace the functions of a broker,
| whose stock in trade is information, advice and execution of
| transactions, all of which may be cheap and easy to find on line.
| AI also is prompting some business executives to wonder whether
| they really need a high-priced Wall Street investment bank to
| find backers for their companies when they may now be able to
| reach so many potential investors directly over the Net. And
| ultimately, AI's ability to bring buyers and sellers together
| directly may change the very nature of American financial
| markets."
|
| It's a cautionary tale. Obviously, the Internet did live up to
| the hype. Just after it wiped out millions of retail investors...
|
| [0]https://www.nytimes.com/1996/11/11/business/slow-
| transition-...
| caeril wrote:
| As someone who has persistently laughed off the "it's different
| this time" idiocy from "revolutionary" technology, and as
| someone who has called 10 out of the last 4 bubbles, I would
| like to say that it really is different this time.
|
| We're on the precipice of obviating 80% of white collar work,
| and 99% of Graeber's Bullshit Jobs.
| pixl97 wrote:
| Massively decreasing pay, while not solving the asset
| inflation problem... this ain't going to go well at all.
| rayval wrote:
| Economic dislocation will lead to the rise of angry/insane
| populists/nationalists (like Trump 2.0) in multiple
| regions. Already a trend; it will get worse. One unfortunate
| but plausible outcome is catastrophic global conflict.
|
| To avoid this, countries need to plan for and mitigate the
| social effects of economic dislocation through measures such as
| UBI. Unfortunately that ain't gonna happen. Brace yourselves.
| Nesco wrote:
| Could you please detail why you think Machine Learning
| will obviate jobs that are already useless?
| tenpies wrote:
| I agree with you, especially on un-regulated white-collar
| work (e.g. no one with magic letters after their name is in
| danger just yet).
|
| But give it a few years and I'm really curious how regulatory
| and licensing bodies react because they have almost always
| moved uniformly in whichever direction is necessary to
| suppress wages. There are few exceptions to this (e.g.
| physicians). The output benefits of worker + AI could
| potentially lead to some professional services becoming dirt
| cheap, while others become ludicrously expensive.
|
| I'm also curious what this means for immigration. For the
| West, the primary justification to siphon the world's talent
| fundamentally vanishes. That's talent that potentially stays
| put and develops in non-Western countries. For countries
| where the entire country is a demographic ponzi using
| immigrants to prevent collapse, it's potentially an
| existential problem.
| Macuyiko wrote:
| You're right, though I don't think it will be allowed to play
| out that way. From the moment the 80/99% realize they're out of
| work, it's over. That's why you see idiot anti-AI spokespeople
| showing up, why Altman is invited to Bilderberg, why the EU is
| making AI laws. They're not against AI as such, but please do
| not "awaken" the working class. Keep it for "trusted parties"
| or the military only. What I am curious about is how NVIDIA
| will position itself against that background.
|
| Personally, I really did wish this would have been a new-era
| moment where society would take a step back and evaluate how
| we are organizing ourselves (and living, even), but I fear
| that AI comes too late for us, in the sense that we're so
| rusted and backwards now that we cannot accept it. Or any
| important change, in fact. It's pretty depressing.
| 01100011 wrote:
| > ...2 years ago we were selling at 10 times revenues when we
| were at $64. At 10 times revenues, to give you a 10-year
| payback, I have to pay you 100% of revenues for 10 straight
| years in dividends. That assumes I can get that by my
| shareholders. That assumes I have zero cost of goods sold,
| which is very hard for a computer company. That assumes zero
| expenses, which is really hard with 39,000 employees. That
| assumes I pay no taxes, which is very hard. And that assumes
| you pay no taxes on your dividends, which is kind of illegal.
| And that assumes with zero R&D for the next 10 years, I can
| maintain the current revenue run rate. Now, having done that,
| would any of you like to buy my stock at $64? Do you realize
| how ridiculous those basic assumptions are? You don't need any
| transparency. You don't need any footnotes. What were you
| thinking?
|
| Sun Microsystems CEO Scott McNealy in 2002 (source
| https://smeadcap.com/missives/the-mcnealy-
| problem/#:~:text=A....)
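|
| (Worth noting: the arithmetic in the quote is easy to check. A
| minimal sketch in Python, with revenue normalized to 1:)
|
|     # McNealy's point: at 10x revenues, even with 100% of revenue
|     # paid out and zero costs/taxes, payback takes 10 years.
|     price_to_sales = 10              # "selling at 10 times revenues"
|     annual_revenue = 1.0             # normalized
|     payout = 1.0 * annual_revenue    # impossibly generous payout
|     print(price_to_sales * annual_revenue / payout)  # -> 10.0 years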
| belter wrote:
| https://www.reddit.com/r/wallstreetbets/comments/13qk0vw/nvi...
| codethief wrote:
| Whether or not AI will follow the same destiny as the dot-com
| bubble doesn't really matter: in contrast to fancy AI
| startups, Nvidia is already making money (in fact it is highly
| profitable). They are basically adhering to the principle
| "during a gold rush, sell shovels."
| shrimpx wrote:
| I don't think this is just another bubble about to burst. I
| mean, the bubble bears have been talking about the imminently
| bursting bubble since 2016. The past couple years are what that
| burst bubble looks like. Hype-driven companies going out of
| business, disappearing unicorns, pullback on VC, massive
| layoffs, bank implosions, tons of tech stocks pulled back by
| 70-90%, consequences on the likes of Theranos, SBF, etc.
|
| The current AI wave is 95% hype (ultimately useless/broken crap
| invoking LLM APIs or the AI art app du jour), but some of the
| companies are clearly useful (transcription, summarization,
| categorization, code generation, next-gen search engines, etc.)
| and will disrupt traditional services and scale large.
|
| And AI infra companies (AI hardware, AI software on top of
| hardware, and generic AI model SaaS) will make tons of money as
| those app companies scale.
| 11thEarlOfMar wrote:
| You are correct that the overall economic backdrop is quite
| different from the late 90s.
|
| Nonetheless, the AI news cycle is continuous (like .COM was)
| and the attribution of NVDA's +25% romp to the prospects of
| AI grabs the attention of retail investors, who tuned in to
| see AVGO +20% and the likes of MSFT, TSLA, NFLX and GOOG add
| 5% in 2 days. The longer that goes on, the more we'll see
| investors looking for reasons that companies will benefit
| from AI and want to buy in, then, companies that don't have a
| strong AI story will need to get on the train and start
| buying all the AI startups that have materialized over the
| last couple of years. Then, we start seeing AI IPOs with
| increasingly sketchy histories. (sorry, .COM PTSD kicking
| in...)
|
| All this could happen in a weak market. In fact, strong
| returns in AI during a weak overall market will simply call
| more attention to it.
| kgwgk wrote:
| 'At 10 times revenues, to give you a 10-year payback, I have to
| pay you 100% of revenues for 10 straight years in dividends.
| That assumes I can get that by my shareholders. That assumes I
| have zero cost of goods sold, which is very hard for a computer
| company. That assumes zero expenses, which is really hard with
| 39,000 employees. That assumes I pay no taxes, which is very
| hard. And that assumes you pay no taxes on your dividends,
| which is kind of illegal. And that assumes with zero R&D for
| the next 10 years, I can maintain the current revenue run rate.
| Now, having done that, would any of you like to buy my stock at
| $64? Do you realize how ridiculous those basic assumptions are?
| You don't need any transparency. You don't need any footnotes.
| What were you thinking?'-- Scott McNealy, Business Week, 2002
|
| https://www.bloomberg.com/news/articles/2002-03-31/a-talk-wi...
| bluecalm wrote:
| I mean, it's completely wrong, though, as you still have shares
| in the company after getting paid.
|
| A very good rule of thumb: if someone mentions dividends when
| discussing valuation, they are clueless. It doesn't always hold
| (paying high dividends has implications, ranging from clueless
| management to political pressure on the company), but it's a
| very good signal that the argument is nonsense.
| enos_feedler wrote:
| I did the same s/AI/internet/ thing yesterday when I asked Bard
| to give me analyst ratings for Cisco stock before the dot-com
| crash:
|
| "The highest analyst price target for Cisco stock before the
| dot-com crash was $125 per share. This target was set by
| Merrill Lynch analyst Henry Blodget in April 2000, just as the
| dot-com bubble was beginning to burst. Blodget's target was
| based on his belief that Cisco was well-positioned to benefit
| from the continued growth of the Internet."
|
| I was looking to compare with analyst targets set for NVDA
| yesterday. Analysts now are saying the exact thing about Nvidia
| being able to capture the continued growth of AI:
|
| "JPMorgan set its price target to $500 Wednesday, double its
| previous estimate and among the highest out of the big banks.
| Analyst Harlan Sur said this is the "first massive wave of
| demand in generative AI," with more gains to follow. He
| reiterated his overweight rating on the stock."
|
| The ironic bit of course is that my own research here is
| powered by Bard which probably used an NVDA gpu to train it.
| But even those dot-com analyst calls were probably emailed
| around on equipment sold by Cisco.
|
| If I were holding that stock right now, regardless of how right
| these analysts end up being over the next year or so, I would
| sell today.
| bsilvereagle wrote:
| > The ironic bit of course is that my own research here is
| powered by Bard which probably used an NVDA gpu to train it
|
| Google uses in-house TPUs for Bard.
|
| https://www.hpcwire.com/2023/04/10/google-ai-
| supercomputer-s...
| kenjackson wrote:
| It is fairly different though in scope. NVidia clearly is
| making tons of money from AI. Probably after OpenAI they are
| the company most directly impacted by AI.
|
| The .com boom of the late 90s was different. Companies who had
| very little to do with the internet were adding ".com" to their
| name. I was a penny stock trader and that was one of the
| fastest ways companies would increase value -- add ".com" and
| issue a press release about how they plan to be "internet
| enabled" or "create a web presence".
|
| Today most companies aren't getting a bump by talking about AI.
| You don't see Spotify changing their name to Spotify.AI.
| Companies are dabbling in offering AI, e.g., SnapChat, but they
| aren't shifting their whole focus to AI.
|
| Now there is an industry of small companies building on AI, and
| I think that's healthy. A handful will find something of value.
| Going back to the early .com days -- I remember companies doing
| video/movie streaming over the web and voice chats. None of
| those early companies, AFAIK, still exist. But the market for
| this technology is bigger than it's ever been.
| fulafel wrote:
| Are there numbers available on GPU sales volumes for current ML
| applications? Is it really a big share of NV revenue? Dedicated
| ML hardware like TPUs would seem to be the logical perf/$
| competitor longer term; they're proprietary so far, but then so
| are NV's software and hardware.
| kenjackson wrote:
| I got this quote from the BBC:
|
| "Figures show its [NVidia] AI business generated around
| $15bn (PS12bn) in revenue last year, up about 40% from the
| previous year and overtaking gaming as its largest source
| of income"
|
| Oddly, in updates to the article they rewrote a lot of it,
| and that line is missing, but you can still see it if you
| search for it.
| fulafel wrote:
| Maybe someone temporarily mistook the main non-gaming (aka
| datacenter) segment for being all ML.
| 01100011 wrote:
| You seem to misunderstand the dot com bubble. Sure, there
| were companies like Pets.com and companies adding an 'e' to
| the beginning of their names. But there were also companies
| like Cisco and Sun Microsystems. Companies making profits
| selling real goods needed by the growing internet. Go look up
| those companies and their stock charts. Also, if you think
| random companies aren't mentioning AI to boost their stock
| you haven't been paying much attention.
| [deleted]
| kenjackson wrote:
| I think you misunderstood my point. My thesis was that
| these two eras were different in scope (my first sentence).
| I was pointing out how the .com boom's financial impact was so
| much larger than the current AI boom's. I wasn't trying to say
| the .com boom was smaller or
| more well-reasoned. In fact quite the opposite. I don't
| think we've seen comparable spikes to the .com boom yet,
| and you seem to agree.
| killjoywashere wrote:
| Cisco peaked at $77.00 in March 2000. It's currently
| $50.00. In the interim there have been no splits.
|
| Intel peaked at $73.94 in September 2000. It's currently
| $28.99. In the interim there have been no splits.
|
| NVidia has split 5 times (cumulative 48x) since 2000. Its
| split-adjusted close for 2000 was around $2.92. It is currently
| $389.93, a total gain of roughly 134x. If you ignore the last
| 12 months, NVidia's last peak was $315 in 2021, for a gain of
| roughly 108x. -ish.
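|
| (A quick sanity check of those figures, assuming the $2.92
| close is already split-adjusted, as finance sites usually
| report it:)
|
|     # If $2.92 already folds the splits in, multiplying by the
|     # 48x cumulative split factor again would double-count them.
|     adjusted_close_2000 = 2.92
|     price_now = 389.93
|     peak_2021 = 315.00
|     print(price_now / adjusted_close_2000)  # ~133.5x total gain
|     print(peak_2021 / adjusted_close_2000)  # ~107.9x to 2021 peak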
| ztrww wrote:
| I would compare how someone like Intel did during dot.com
| instead of Pets.com etc. Of course it's far from being the
| same, and Intel did struggle in the early 00's, but they
| still ended up dominating their market, which saw significant
| growth in the 20 years after dot.com.
|
| Did Intel ever 'grow' into their massively overvalued
| valuation? No.. their stock has never even reached its
| September 2000 peak.
|
| There is a chance that AMD, Intel, maybe Google etc. catch up
| with Nvidia in a year or two and data center GPUs become a
| commodity (clearly the entry bar should be lower than what it
| was for x86 CPUs back in the day) and what happens then?
| x3sphere wrote:
| I'm not sure AMD will catch up to Nvidia. Obviously there
| are a lot of traders betting on that right now, given that
| AMD has started to rally in response to Nvidia. However
| after all this time NV still commands like 80% share of the
| gaming GPU market despite AMD often (not always) releasing
| competitive cards. Gaming GPUs are already a commodity -
| why hasn't AMD caught up there?
|
| I mean, maybe it's not a fair comparison but I don't see
| why the datacenter/GPGPU market won't end up the same way.
| Nvidia is notorious for trying to lock in users with
| proprietary tech too, though people don't seem to mind.
| pixl97 wrote:
| >and what happens then?
|
| Most likely, all their prices go up...
|
| I mean, your first instinct is to say, "but how could all
| their prices go up, they'll steal value from each other",
| but that's not necessarily true. If AI starts solving
| useful problems, and especially if it starts requiring
| multi-modality to do so, I would expect total GPU
| processing demand to increase to 10,000-100,000x what we
| have now.
|
| Now, you're going to say "What's going to pay for this
| massive influx of GPU power by corporations". And my reply
| would be "Corporations not having to pay for your health
| insurance any longer".
| kgwgk wrote:
| > Did Intel ever 'grow' into their massively overvalued
| valuation? No.. their stock never even reached it's
| September, 2000 peak yet.
|
| If you take dividends into account it did break even a few
| years ago, at least in nominal terms.
|
| Cisco and Sun Microsystems may be even better comparables
| though.
| haldujai wrote:
| > There is a chance that AMD, Intel, maybe Google etc.
| catch up with Nvidia in a year or two and data center GPUs
| become a commodity (clearly the entry bar should be lower
| than what it was for x86 CPUs back in) and what happens
| then?
|
| Realistically, there is next to zero chance Intel
| (especially given the Arc catastrophe and foundry
| capabilities) or AMD (laundry list of reasons) catch up
| within 2 years.
|
| Safe bet Google's TPUv5 will be competitive with the H100,
| as the v4 was with the A100, but their offering clearly
| hasn't impacted market share thus far and there is no
| indication Google intends to make their chips available
| outside of GCP.
|
| With that said I also agree the current valuation seems too
| high, but I highly doubt there is a serious near-term
| competitor. I think it is more likely that current growth
| projections are too aggressive and demand will subside
| before they grow into their valuation, especially as the
| space evolves with open source foundation models and
| techniques come out (like LoRA/PEFT) that substantially
| reduce demand for the latest chips.
| vineyardmike wrote:
| > there is no indication Google intends to make their
| chips available outside of GCP.
|
| 1. You can buy mini versions of their chips through Coral
| (coral.ai). But yea, they'd never sell them externally as
| long as there exists a higher-margin advantage to selling
| software on top of them, and chips have supply
| constraints.
|
| 2. Google can sell VMs with the tensor chips attached,
| like GPUs. Most organizations with budgets that'd impact
| things will be using the cloud. If
| Apple/MSFT/AWS/Goog/Meta start seriously building their own
| chips, NVidia could be left out of the top end.
| haldujai wrote:
| > Google can sell VMs with the tensor chips attached,
| like GPUs.
|
| They have already been doing this for quite a while now
| and even when offered free via TRC barely anyone uses
| TPUs. There is nothing to suggest that Google as an
| organization is shifting focus to be the HPC cloud
| provider for the world.
|
| As it stands TPU cloud access really seems ancillary to
| their own internal needs.
|
| > If Apple/MSFT/AWS/Goog/Meta start serious building
| their own chips, NVidia could be left out of the top end.
|
| That's a big "if", especially within two years, given
| that this chip design/manufacturing isn't really a core
| business interest for any of those companies (other than
| Google which has massive internal need and potentially
| Apple who have never indicated interest in being a cloud
| provider).
|
| They certainly _could_ compete with Nvidia for the top
| end, but it would be really hard, and how much would the
| vertical integration actually benefit their bottom line?
| A 2048-GPU SuperPOD is what, like $30M?
|
| There's also the risk that the not-always-friendly DoJ
| gets anti-trusty if a cloud provider has a massive
| advantage and is locking the HW in their walled garden.
| fragmede wrote:
| > barely anyone uses TPUs
|
| What are you basing that on? I'm not aware of GCP having
| released any numbers on their usage.
| adventured wrote:
| Arc has already caught up to Nvidia. The latest Nvidia
| GPUs are a disaster (the 4060ti is being universally
| mocked for its very pathetic performance), they're
| intentionally royally screwing their customers.
|
| The A750 and A770 are tremendous GPUs and compete very
| well with anything Nvidia has in those brackets (and
| Intel is willing to hammer Nvidia on price, as witnessed
| by the latest price cuts on the A750). Drivers have
| rapidly improved in the past few quarters. It's likely
| given how Nvidia has chosen to aggressively mistreat its
| customers that Intel will surpass them on value
| proposition with Battlemage.
| haldujai wrote:
| > anything Nvidia has in those brackets
|
| This being the operative part of the statement. If we're
| talking top-end GPUs it's not even close.
|
| > Intel is willing to hammer Nvidia on price
|
| They also have no choice, Intel's spend on Arc has been
| tremendous (which is what I mean by catastrophe,
| everything I've read suggests this will be a huge loss
| for Intel). I doubt they have much taste for another
| loss-leader in datacenter-level GPUs right now, if they
| even have the manufacturing capacity.
| dekhn wrote:
| the 4060ti is an entry board, it's designed to be cheap
| not fast. I believe this pattern was also true for 3060
| and 2060.
| antiherovelo wrote:
| You're talking about consumer grade graphics, not AI
| processing, and you're talking about cheap, not
| performant.
|
| There is no significant competition to the NVIDIA A100
| and H100 for machine learning.
| Xeoncross wrote:
| NVDA is pretty hyped at this point. If you wanted to buy it, then
| fall of last year after it fell 60% was the time.
|
| NVDA has a trailing-twelve-month (TTM) price-to-earnings (P/E)
| ratio of 175x. Based on the latest quarter and forward
| guidance, the forward-looking P/E ratio is about 50x - so the
| market has already priced in a large jump in earnings.
|
| Even then, NVDA is expected to at least double its already
| strong earnings (to get to a P/E of 25x) according to the
| market. I have my doubts.
|
| You can compare this to the historical averages of the S&P 500:
| https://www.multpl.com/s-p-500-pe-ratio
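|
| A back-of-the-envelope version of that math, holding the share
| price fixed (the ~$390 close is an assumption taken from
| elsewhere in this thread):
|
|     price = 389.93             # approximate late-May 2023 close
|     ttm_eps = price / 175      # trailing P/E of ~175x
|     forward_eps = price / 50   # forward P/E of ~50x
|     print(forward_eps / ttm_eps)       # ~3.5x growth priced in
|     print((price / 25) / forward_eps)  # ~2x more for a 25x P/E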
| seydor wrote:
| Please follow this advice, people. We are all waiting for the
| dip.
| pc86 wrote:
| When I spent time playing individual stocks I actually made
| decent money waiting for big spikes like this, hopping on the
| bandwagon intraday and just taking 1-2% from the hype train.
| It's part day trading, part picking up pennies in front of a
| steamroller. The few times I really got burned were from
| getting greedy and holding overnight or over a weekend.
|
| I'm really curious to see where NVDA stands on Tuesday
| morning.
| [deleted]
| ksec wrote:
| >Based on the latest quarter and forward guidance they have a
| forward-looking P/E ratio of 50x
|
| I may have missed the news. Where did they mention they are
| going to make 3.5x the profits in their forward guidance or
| forward-looking P/E?
|
| Assuming consumer revenue stays roughly the same (crypto
| usage being the largest variable), the Data Center segment
| has to grow at least 6x in revenue.
| Xeoncross wrote:
| They don't set the forward P/E - it's literally the price the
| market bid the stock up to, divided by expected earnings. The
| market is expecting them to double or triple their income in
| the coming quarters/years.
|
| The TTM price/earnings ratio is even crazier, as the market is
| expecting them to grow earnings roughly 9x from what they made
| in the last year (to get back to a 20x P/E).
| ksec wrote:
| I know the market is hyped, but I just don't see how that is
| possible. HN, please tell me where I am wrong. The only moat
| Nvidia has is in training. I don't see that disappearing
| anytime soon. At least not in the next 5 - 8 years. However,
| I also can't see training alone bringing 10x Data Center
| revenue _every year_. It is not like older GPUs are thrown
| away after use.
| dubcanada wrote:
| I mean, the P/E is accurate, but let's also not forget that
| hype and future expectations lead to a P/E vastly exceeding
| what the market actually expects.
|
| They expect NVDA to not only dominate the GPU market, but to
| have a breakthrough in AI or contribute to one, which would
| lead to way more money.
|
| Also consider that any "AI" portfolio is going to be heavily
| weighted toward NVDA stock. And people who may be hedging
| against a rise in AI, or buying into said rise, are investing
| in AI portfolios/ETFs, and thereby a portion of that is NVDA.
|
| It's not as simple as how the people above are explaining
| it.
| WrtCdEvrydy wrote:
| I had a meme joke about how AI would come to be by making people
| mine crypto, but now we're seeing LLMs take the forefront of
| AI and cause us to reach for more and more parameters.
|
| It reminds me of when the YOLOv3 model came out and every single
| upgrade gave us more and more features and capabilities (the
| v8 has auto segmentation).
|
| AMD dropped the ball on this, just like Intel when Ryzen dropped,
| I just don't see a way for them to bring it around.
| machdiamonds wrote:
| I wonder if https://tenstorrent.com/ will be able to take some
| market share away
| abdullin wrote:
| Not any time soon, I believe.
|
| AI hardware is useless without a software ecosystem (AMD and
| Intel could tell a story about that).
|
| The latest marketing materials from Tenstorrent tell stories
| about great chips and licensing deals, but say not a single
| word about the software side of things.
|
| Compare that to how much NVidia talks about software in its
| presentations.
| belter wrote:
| NVIDIA PE ratio as of May 25, 2023 is 139.44. If you wanted to
| invest you are too late. Save your money.
| sf4lifer wrote:
| Respect to Jensen, one of the OGs of the valley and a good dude.
| But LLMs (eventually) running on iPhone hardware will crater
| this run.
| haldujai wrote:
| Not sure how much inference on the edge will impact things
| unless you think we'll hit "peak training" in the near future.
| I would safely wager that most H100 nodes will be used for
| training rather than inference.
| lubesGordi wrote:
| Genuine question to all concerned about PE ratios. Why is PE not
| subject to 'new normals'? A lot of people seem to reject stocks
| because they're 'expensive' which to me seems like a relative
| term. There are a lot more retail investors out there now.
| barumrho wrote:
| It is helpful to think of it in terms of unit economics. If a
| company sells a product, it needs to eventually make profit on
| each unit. (cost < price per unit)
|
| When it comes to owning a share, it eventually needs to make
| the investor money through dividends or price appreciation. The
| argument for high PE ratio is price appreciation (growth), but
| exponential growth is very hard to sustain, so PE ratio has to
| come down to a certain level in the long term. Also, there is
| always a risk of a company declining or even folding.
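|
| A toy sketch (hypothetical numbers) of why: counting how many
| years of cumulative earnings it takes to cover today's price at
| a given growth rate shows how much sustained growth a high P/E
| is really betting on.
|
|     # Years of cumulative earnings needed to equal the price,
|     # with the price normalized to pe * (first year's EPS).
|     def years_to_earn_back(pe, growth):
|         earned, eps, years = 0.0, 1.0, 0
|         while earned < pe:
|             earned += eps
|             eps *= 1 + growth
|             years += 1
|         return years
|
|     for g in (0.0, 0.10, 0.25):
|         print(g, years_to_earn_back(50, g))
|     # 0%: 50 years; 10%: ~19 years; 25%: ~12 years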
| JumpinJack_Cash wrote:
| Because if all that is left of the stock market is offloading
| your hand to a bigger fool, then it's way more fun to fly to
| Vegas and do it at the poker table.
|
| At least there you have to actually look the guy you are
| screwing over in the eyes.
|
| You can't buy stuff just because you think other people will
| also buy; that would mean you are buying/selling opinions, not
| companies.
| dekhn wrote:
| Good. nvidia deserves what they're getting, imho, because they
| started early and continued to invest in graphics and then GPUs,
| with support for both Windows and Linux.
| madballster wrote:
| The result of this optimism in big cap tech companies is that
| many smaller cap shares in industries such as financial services,
| insurance or industrial distribution are trading for historically
| cheap valuations. It appears there is very little investor
| interest in them. I think it's a wonderful time to be a long-term
| investor.
| lifefeed wrote:
| The four commas club.
|
| Going to have to buy a car whose doors open in a new way.
| endisneigh wrote:
| The stock price makes absolutely no sense, but the AI hype is
| real so I won't be shorting.
|
| To give you a crude metaphor - buying NVDA is like buying a
| $10 million house to collect $10,000 in rent a year. The price
| to earnings is bonkers. This valuation only makes sense if
| Nvidia is somehow using alien technology that couldn't
| possibly be reproduced in the next two decades by any other
| company.
| dpflan wrote:
| Agreed, the fundamentals are off and drunk on hype. Where else
| can investors put their money?
| peab wrote:
| Goog, Meta? They are by far the leaders in AI research, and
| they've both developed their own chips. Apple is also going to
| come out ahead with their Neural Engine - imagine ChatGPT and
| Stable Diffusion becoming part of the iOS SDK.
| haldujai wrote:
| Meta has their own chip they actually use? IIRC LLaMA was
| trained on A100s.
|
| Apple is non-viable for LLM workloads.
| amanj41 wrote:
| I would add MSFT due to their exposure to OpenAI and their
| very successful previous and upcoming integrations (GitHub
| Copilot, the Bing revamp, the upcoming Excel Copilot, etc.)
| dpflan wrote:
| Yes, I can imagine commoditization of such models. Plus
| they own their chips/silicon and have billions of devices
| deployed. I think Apple is one to watch because UX is a
| challenge for AI integration. ChatGPT made the UX for LLMs
| user-friendly. Apple's design history is superior to its
| competitors'.
| lostmsu wrote:
| TSMC's P/E is under 20. Disclaimer: I hold
| chessgecko wrote:
| TSMC is cheap because of the risk of invasion, otherwise
| they would have a pretty insane valuation already.
| dpflan wrote:
| How is their fab build in AZ going?
| HDThoreaun wrote:
| Not incredible I hear, but that could just be Morris
| trying to get more subsidies.
| YetAnotherNick wrote:
| A P/E under 20 is the norm, not the exception. Even
| companies like Meta, Apple, etc. had P/Es near 10 for a long
| time.
| lostmsu wrote:
| Yes, but also TSMC is the chip manufacturer that makes
| NVidia's GPU compute units.
| Tiktaalik wrote:
| Your last point is why it feels to me like the better
| investment right now is everyone and anyone else working very
| hard to get to where NVDA is right now. I suppose the obvious
| answer here is AMD, but surely there are other, smaller
| companies too that could see a huge amount of investment.
| rapsey wrote:
| Every tech giant is sprinting into this space. I highly doubt
| nVidia will still have a moat as big in 12 months.
| SkipperCat wrote:
| One thing that Nvidia has going for it is the stickiness of
| CUDA. Developers don't have a preference for GPUs, they have a
| preference for the programming stacks associated with them.
| Why is Google's TensorFlow not as popular? Probably because
| everyone has deep experience with CUDA and it would be a pain
| to migrate.
|
| Microsoft Office rode the same type of paradigm to dominate the
| desktop app market.
| seydor wrote:
| Frameworks can be agnostic to the underlying library. What are
| the formidable alternatives to CUDA?
| [deleted]
| singhrac wrote:
| Sorry, I have to point out: TensorFlow is not comparable to
| CUDA. TensorFlow is an (arguably) high-level library that
| links against CUDA to run on NVIDIA GPUs, as does PyTorch (the
| main competitor).
|
| Comparatively few people have "deep" experience with CUDA
| (basically Tensorflow/Pytorch maintainers, some of whom are
| NVIDIA employees, and some working in HPC/supercomputing).
|
| CUDA is indeed sticky, but the reason is probably because CUDA
| is supported on basically every NVIDIA GPU, whereas AMD's ROCm
| was until recently limited to CDNA (datacenter) cards, so you
| couldn't run it on your local AMD card. Intel is trying the
| same strategy with oneAPI, but since no one has managed to see
| a Habana Gaudi card (let alone a Gaudi2), they're totally out
| of the running for now.
|
| Separately, CUDA comes with many necessary extensions like
| cuSparse, cuDNN, etc. Those exist in other frameworks but
| there's no comparison readily available, so no one is going to
| buy an AMD CDNA card.
|
| AMD and Intel need to publish a public accounting of their
| incompatibilities with PyTorch (no one cares about Tensorflow
| anymore), even if the benchmarks show that their cards are
| worse. If you don't measure in public, no one will believe
| your vague claims about how much you're investing in the AI
| boom. Certainly I would like to buy an Intel Arc A770 with 16GB
| of VRAM for $350, but I won't, because no one will tell me that
| it works with llama.
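|
| To make the layering concrete, here's roughly what the user-
| facing side looks like (a sketch using PyTorch's public API;
| the CUDA calls all happen inside the library):
|
|     # PyTorch, not the user, talks to CUDA: this matmul is
|     # dispatched to cuBLAS/cuDNN kernels if a CUDA build of
|     # PyTorch and an NVIDIA driver are present.
|     import torch
|
|     device = "cuda" if torch.cuda.is_available() else "cpu"
|     x = torch.randn(1024, 1024, device=device)
|     y = x @ x   # runs as a GPU kernel when device == "cuda"
|     print(device, y.sum().item())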
| digitallyfree wrote:
| Theoretically the ARC should work with llama.cpp using
| OpenCL, but I haven't seen benchmarks or even a confirmation
| that it works.
| somethoughts wrote:
| With respect to the incompatibilities with PyTorch and
| TensorFlow - given that the AMD and Intel GPU drivers are
| more likely to be open sourced - do you believe the open
| source community or third-party vendors will step in to
| close the gap for AMD/Intel?
|
| It would seem a great startup idea, with the intent to get
| acqui-hired by AMD or Intel, to get into the details of these
| incompatibilities and/or performance differences.
|
| At worst it seems you could pivot into some sort of passive
| income AI benchmarking website/YT channel similar to the ones
| that exist for Gaming GPU benchmarks.
| pca006132 wrote:
| This kind of software development is hard and expensive. I do
| not think it could make you enough income from a benchmark
| website or YT channel, considering most people are not
| interested in those low-level details.
| singhrac wrote:
| I think this is what George Hotz is doing with tiny corp,
| but I have to admit I have little hope. Making asynchronous
| SIMD code fast is very difficult as a base point, let alone
| without internal view of decisions like "why does this
| cause a sync" or even "will this unnecessary copy ever get
| fixed?". Unfortunately AMD and especially Intel don't
| "develop in the open", so even if the drivers are open
| sourced, without context it'll be an uphill battle.
|
| To give some perspective, see @ngimel's comments and PRs in
| Github. That's what AMD and Intel are competing against,
| along with confidence that optimizing for ML customers will
| pay off (clearly NVIDIA can justify the investment
| already).
| eslaught wrote:
| Drivers are only the lowest level of the stack. You could
| (in principle) have a great driver ecosystem and a
| nonexistent user-level ecosystem. And indeed, the user-
| level ecosystem on AMD and Intel seems to be suffering.
|
| For example, I recently went looking into Numba for AMD
| GPUs. The answer was basically, "it doesn't exist". There
| was a version, it got deprecated (and removed), and the
| replacement never took off. AMD doesn't appear to be
| investing in it (as far as anyone can tell from an
| outsider's perspective). So now I've got a code that won't
| work on AMD GPUs, even though in principle the abstractions
| are perfectly suited to this sort of cross-GPU-vendor
| portability.
|
| NVIDIA is years ahead not just in CUDA, but in terms of all
| the other libraries built on top. Unless I'm building
| directly on the lowest levels of abstraction
| (CUDA/HIP/Kokkos/etc. and BLAS, basically), chances are the
| things I want will exist for NVIDIA but not for the others.
| Without a significant and sustained ecosystem push, that's
| just not going to change quickly.
| oktwtf wrote:
| This has always been in the back of my mind anytime AMD has
| some new GPUs with nice features. Gamers will say this will be
| where AMD will win the war. But I fear the war is already won
| on the compute that counts, and right now that's CUDA accel on
| NVIDIA.
| paulmd wrote:
| The only real remaining fronts in the war are consoles and
| smartphones, and NVIDIA just signed a deal to license GeForce
| IP to mediatek so that nut is being cracked as well, mediatek
| gives them mass-market access for CUDA tech, DLSS, and other
| stuff. Nintendo has essentially a mobile console platform and
| will be doing DLSS too on an Orin NX 8nm chip soon (very
| cheap) using that same smartphone-level DLSS (probably re-
| optimized for lower resolutions). Samsung 8nm is exactly
| Nintendo's kind of cheap, it'll happen.
|
| The "NVIDIA they might leave graphics and just do AI in the
| future!" that people sometimes do is just such a batshit take
| because it's graphics that opens the door to all these
| platforms, and it's graphics that a lot of these accelerators
| center around. What good is DLSS without a graphics platform?
| Do you sign the Mediatek deal without a graphics platform? Do
| you give up workstation graphics and OptiX and raysampling
| and all these other raytracing techs they've spent billions
| developing, or do you just choose to do all the work of
| making Quadros and all this graphics tech but then not do
| gaming drivers and give up that gaming revenue and all the
| market access that comes with it? It's faux-intellectualism
| and ayymd wish-casting at its finest, it makes zero sense
| when you consider the leverage they get from this R&D spend
| across multiple fields.
|
| CUDA is unshakeable precisely because NVIDIA is absolutely
| relentless in getting their foot in the door, then using that
| market access to build a better mousetrap with software that
| everyone else is _constantly_ rushing to catch up to. Every
| segment has some pain points and NVIDIA figures out what they
| are and where the tech is going and builds something to
| address that. AMD's approach of trying to surgically tap
| high-margin segments before they have a platform worth caring
| about is _fundamentally flawed_; they're putting the cart
| before the horse, and that's why they keep spinning their
| wheels on GPGPU adoption for the last 15 years. And that's
| what people are clamoring for NVIDIA to do with this idea of
| "abandon graphics and just do AI" and it's completely
| batshit.
|
| Intel gets it, at least. OneAPI is focused on being a viable
| product and they'll move on from there. ROCm is designed for
| supercomputers where people get paid to optimize for it -
| it's an _embedded product_, not a platform. Like you can't
| even use the binaries you compile on anything except one
| specific die (not even a generation: "this binary is for
| Navi 21, you need the Navi 23 binary"). CUDA is an ecosystem
| that people reach for because there's tons of tools and
| libraries and support, and it works seamlessly and you can
| deliver an actual product that consumers can use. ROCm is
| something that your boss tells you you're going to be using
| because it's cheap, you are paying to engineer it from
| scratch, you'll be targeting your company's one specific
| hardware config, and it'll be inside a web service so it'll
| be invisible to end-users anyway. It's an embedded processor
| inside some other product, not a product itself. That's what
| you get from the "surgically tap high-margin segments"
| strategy.
|
| But the Mediatek deal is big news. When we were discussing
| the ARM acquisition etc. people totally scoffed that NVIDIA
| would _ever_ license GeForce IP. And when that fell through,
| they went ahead and did it anyway. Because platform access
| matters; it's the foot in the door. The ARM deal was never
| about screwing licensees or selling more Tegras; that would
| instantly destroy the value of their $40b acquisition. It was
| 100% always about getting GeForce as the base-tier graphics
| IP for ARM and getting that market access to crack one of the
| few remaining segments where CUDA acceleration (and other
| NVIDIA technologies) aren't absolutely dominant.
|
| And graphics is the keystone of all of it. Market access,
| software, acceleration, all of it falls apart without the
| graphics. They'd just be ROCm 2.0 and nobody wants that, not
| even AMD wants to be ROCm. AMD is finally starting to see it
| and move away from it, it would be wildly myopic for NVIDIA
| to do that and Jensen is not an idiot.
|
| Not entirely a direct response to you but I've seen that
| sentiment a ton now that AI/enterprise revenue has passed
| graphics and it drives me nuts. Your comment about "what
| would it take to get Radeon ahead of CUDA mindshare" kinda
| nailed it, CUDA literally is winning so hard that people are
| fantasizing about "haha but what if NVIDIA got tired of
| winning and went outside to ride bikes and left AMD to
| exploit graphics in peace" and it's crazy to think that could
| _ever_ be a corporate strategy. Why would they do that when
| Jensen has spent the last 25 years building this graphics
| empire? Complete wish-casting, "so dominant that people can't
| even imagine the tech it would take to break their ubiquity"
| is exactly where Jensen wants to be, and if anything they are
| still actively pushing to be _more_ ubiquitous. That 's why
| their P/E are insane (probably overhyped even at that, but
| damn are they good).
|
| If there is a business to be made doing only AI hardware and
| not a larger platform (and I don't think there is, at that
| point you're a commodity like dozens of other startups) it
| certainly looks nothing like the way nvidia is set up. These
| are all interlocking products and segments and software, you
| can't cut any one of them away without gutting some other
| segment. And fundamentally the surgical revenue approach
| doesn't work, AMD has continuously showed that for the last
| 15 years.
|
| Being unwilling to catch a falling knife by cutting prices to
| the bone doesn't mean they don't want to be in graphics. The
| consumer GPU market is just unavoidably soft right now,
| almost regardless of actual value (see: 4070 for $600 with
| a $100 GC at microcenter still falling flat). Even $500 for a
| 4070 is probably flirting with being unsustainably low (they
| need to fund R&D for the next gen out of these margins) but
| if a de-facto $500 price doesn't spark people's
| interests/produce an increase in sales they're absolutely not
| going any lower than that this early in the cycle. They'll
| focus on margin on the sales they can actually make, rather
| than chasing the guy who is holding out for 4070 to be $329.
| People don't realize it but obstinately refusing to buy at
| any price (even a good deal) is paradoxically creating an
| incentive to just ignore them and chase margins.
|
| It doesn't mean they don't want to be in that market but
| they're not going to cut their own throat, mis-calibrate
| consumer expectations, etc.
|
| Just as AMD is finding out with the RX 7600 launch - if you
| over-cut on one generation, the next generation becomes a
| much harder sell. Which is the same lesson nvidia learned
| with the 1080 ti and 20-series. AMD is having their 20-series
| moment right now, they over-cut on the old stuff and the new
| stuff is struggling to match the value. And the expectations
| of future cuts is only going to dampen demand further,
| they're Osborne Effect'ing themselves with price cuts
| everyone knows are coming. Nvidia smartened up - if the
| market is soft and the demand just isn't there... make fewer
| gaming cards and shift to other markets in the meantime.
| Doesn't mean they don't want to be in graphics.
| AnthonyMouse wrote:
| This has been the case for a while because AMD never had the
| resources to do software well. But their market cap is 10x
| what it was 5 years ago, so now they do. That still takes
| time, and having resources isn't a guarantee of competent
| execution, but it's a lot more likely now than it used to be.
|
| On top of that, Intel is making a serious effort to get into
| this space and they have a better history of making usable
| libraries. OpenVINO is already pretty good. It's especially
| good at having implementations in both Python and not-Python,
| the latter of which is a huge advantage for open source
| development because it gets you out of Python dependency
| hell. There's a reason the thing that caught on is llama.cpp
| and not llama.py.
| dogma1138 wrote:
| AMD's problem with software goes well beyond people: they
| can't stick with anything for any significant length of
| time, and the principal design behind ROCm is doomed to fail,
| as it compiles hardware-specific binaries and offers no
| backward or forward compatibility.
|
| CUDA compiles to hardware-agnostic intermediate binaries
| (PTX) which can run on any hardware as long as the target
| feature level is compatible, and you can target multiple
| feature levels with a single binary.
|
| CUDA code compiled 10 years ago still runs just fine; ROCm
| requires recompilation every time the framework is updated
| and every time new hardware is released.
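|
| For the curious, that forward compatibility is the fat-binary
| mechanism: nvcc embeds machine code for named architectures
| plus portable PTX that the driver JIT-compiles on newer GPUs.
| A hypothetical build script (kernel.cu is a placeholder):
|
|     import subprocess
|
|     # code=sm_XX embeds machine code; code=compute_XX embeds
|     # PTX that future GPUs can JIT-compile at load time.
|     subprocess.run([
|         "nvcc", "kernel.cu", "-o", "kernel",
|         "-gencode", "arch=compute_70,code=sm_70",
|         "-gencode", "arch=compute_80,code=sm_80",
|         "-gencode", "arch=compute_80,code=compute_80",
|     ], check=True)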
| AnthonyMouse wrote:
| That's all software. There is nothing but resources
| between here and a release of ROCm that compiles existing
| code into a stable intermediate representation, if that's
| something people care about. (It's not clear if it is for
| anything with published source code; then it matters a
| lot more if the new version can compile the old code than
| if the new hardware can run the old binary, since it's
| not exactly an ordeal to hit the "compile" button once or
| even ship something that does that automatically.)
| jejeyyy77 wrote:
| Tensorflow is optimized for TPUs, which aren't really consumer-
| grade hardware.
| RobotToaster wrote:
| Isn't the coral stick a TPU?
| giobox wrote:
| Yes, although availability recently has been pretty bad
| following the chip shortage, and prices skyrocketed to ~$300.
| Not sure if the situation has returned to normal yet.
| Similar woes to the Raspberry Pi etc.
|
| I needed two for a project and ended up paying a lot more
| than I wanted for used ones.
|
| For those not familiar, consumer/hobbyist grade TPUs:
|
| https://coral.ai/products/
| joseph_grobbles wrote:
| Google's first TPU was developed a year after Tensorflow. And
| for that matter, Tensorflow works fine with CUDA, was
| originally _entirely_ built for CUDA, and it's super weird
| the way it's being referenced here.
|
| Tensorflow lost out to Pytorch because the former is grossly
| complex for the same tasks, with a mountain of dependencies,
| as is the norm for Google projects. Using it was such a
| ridiculous pain compared to Pytorch.
|
| And anyone can use a mythical TPU right now on the Google
| Cloud. It isn't magical, and is kind of junky compared to an
| H100, for instance. I mean...Google's recent AI supercomputer
| offerings are built around nvidia hardware.
|
| CUDA keeps winning because everyone else has done a
| horrendous job competing. AMD, for instance, had the rather
| horrible ROCm, and then they decided that they would gate
| their APIs to only their "business" offerings while nvidia
| was happy letting it work on almost anything.
| HellDunkel wrote:
| Best explanation so far. I am surprised OpenCL never gained
| much traction. Any idea why?
| blihp wrote:
| The same reason most of AMD's 'open' initiatives don't
| gain traction: they throw it out there and hope things
| will magically work out and that a/the community will
| embrace it as the standard. It takes more work than that.
| What AMD historically hasn't done is the real grunge work
| of addressing the limitations of their products/APIs and
| continuing to invest in them long term. See how the
| OpenCL (written by AMD) Cycles renderer for Blender
| worked out, for example.
|
| Something AMD doesn't seem to understand/accept is that
| since they are consistently lagging nVidia on both the
| hardware and software front, nVidia can get away with
| some things AMD can't. Everyone hates nVidia for it, but
| unless/until AMD wises up they're going to keep losing.
| caeril wrote:
| Unrelated question for the HN experts:
|
| My sibling commenter is shadowbanned, but if you look into
| their comment history, there are occasionally comments that
| are not dead. How does this happen?
| philipkglass wrote:
| Somebody clicked on the timestamp of that post and used the
| "vouch" link to unhide it. I sometimes do that for comments
| from new accounts that have been hidden by some overzealous
| anti-spam heuristic.
| RobotToaster wrote:
| Helpful to know, I've seen a few hidden posts that seem
| reasonable but didn't know I could do that.
| [deleted]
| traveler01 wrote:
| So NVIDIA will be to AI what Adobe is to media production: An
| absolute cancer.
| rybosworld wrote:
| I think this 20%+ stock move is mostly a combination of:
|
| 1) Heavy short option interest going into earnings
|
| 2) A large beat announced in after hours
|
| Major market players can take advantage of large earnings
| surprises by manipulating a stock in after hours. It is possible
| to trigger very large movements with very little volume because
| most participants don't have access to after hours trading.
|
| When the market opens the next day the "false" gains should
| typically be wiped out _unless_ the move is large enough to force
| the closing of certain positions. In this case, it looks like
| there was a clamor to escape long puts and short calls.
| dpflan wrote:
| Yes, the effect of short positions during price jumps is not
| always discussed - it's a hidden variable.
| cma wrote:
| Probably more from 50% growth guidance they gave for next
| quarter and how much that beat expectations.
| dpflan wrote:
| This post from Full Stack Deep Learning analyzing cloud GPUs
| seems pertinent to discussions here about NVIDIA, competitors,
| and determining the true value of related AI/chip stocks:
| https://fullstackdeeplearning.com/cloud-gpus/
|
| - HN post: https://news.ycombinator.com/item?id=36025099
| peepeepoopoo7 wrote:
| I don't understand. Haven't _graphics_ cards basically been
| obsolete for deep learning since the first TPUs arrived on the
| scene in ~2016? Lots of companies are offering TPU accelerators
| now, and it seems like the main thing Nvidia has going for it is
| momentum. But that doesn't explain this kind of valuation that's
| hundreds of times greater than their earnings. Personally, it
| seems a lot like Nvidia is to 2023 what Cisco was to 2000.
| 0xcde4c3db wrote:
| For at least some applications, the details of the processor
| architecture are dominated by how much high-throughput RAM you
| can throw at the problem, and GPUs are by far the cheapest and
| most accessible way of cramming a bunch of high-throughput RAM
| into a computer. While it's not exactly a mainstream solution,
| some people have built AI rigs in 2022/2023 with used Vega
| cards because they're a cheap way to get HBM.
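|
| A rough illustration (illustrative numbers only) of why memory
| throughput dominates: single-stream LLM inference has to stream
| all the weights through memory for every generated token, so
| bandwidth caps tokens per second.
|
|     params = 13e9            # e.g. a 13B-parameter model
|     bytes_per_param = 2      # fp16 weights
|     bandwidth = 2.0e12       # ~2 TB/s, HBM ballpark
|     print(bandwidth / (params * bytes_per_param))  # ~77 tok/s cap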
| jandrese wrote:
| The thing that all of these hardware companies don't understand
| is that it is the software that keeps the boys in the yard. If
| you don't have something that works as well as CUDA then it
| doesn't matter how good your hardware is. The only company that
| seems to understand this is nVidia, and they are the ones
| eating everyone's lunch. The software side is hard, it is
| expensive, it takes loads of developer hours and real life
| years to get right, and it is necessary for success.
| popinman322 wrote:
| 1xH100 is faster than 8xA100 for a work-in-progress
| architecture I'm iterating on. Meanwhile the code doesn't work
| on TPUs right now because it hangs before initialization. (This
| is with PyTorch for what it's worth) All that to say Nvidia's
| hardware work and software evangelization has really paid off--
| CUDA just works(tm) and performance continues to increase.
|
| TPUs are good hardware, but TPUs are not available outside of
| GCP. There's not as much of an incentive for other companies to
| build software around TPUs like there is with CUDA. The same is
| likely true of chips like Cerebras' wafer scale accelerators as
| well.
|
| Nvidia's won a stable lead on their competition that's probably
| not going to disappear for the next 2-5 years and could
| compound over that time.
| YetAnotherNick wrote:
| For transformers, a v4 chip has 70-100% of the compute
| capacity and 40% of the memory of an A100, for pretty much the
| same price. The only benefit is the better networking speed of
| a TPU pod compared to a GPU cluster, allowing very large
| models to scale better; on the GPU side the model needs to fit
| within a set of NVLink-connected GPUs, which is about 320
| billion parameters for 8x80 GB A100s.
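|
| (The 320-billion figure is presumably straight capacity
| arithmetic, assuming fp16 weights and ignoring activations and
| optimizer state, which cut it further:)
|
|     gpus, mem_gb = 8, 80                 # one NVLink A100 node
|     bytes_per_param = 2                  # fp16 weights only
|     print(gpus * mem_gb * 1e9 / bytes_per_param)  # 3.2e11 params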
| haldujai wrote:
| > For transformer, v4 chip has 70-100% compute capacity and
| 40% memory of A100 for pretty much the same price.
|
| Note there are added costs when using V4 nodes such as the
| VM, storage and logging which can get $$$.
|
| > where for GPU model need to fit in NVlink connected GPUs
|
| Huh, where is this coming from? You can definitely
| efficiently scale transformers across multiple servers with
| parallelism and 1T is entirely feasible if you have the $.
| Nvidia demonstrated this back in 2021.
| YetAnotherNick wrote:
| > Nvidia demonstrated this back in 2021.
|
| Because Nvidia created a supercomputer with A100s, with a
| lot of focus on networking. Cloud providers don't give
| that option.
| haldujai wrote:
| Azure and AWS have both offered high-bandwidth cluster
| options that allow scaling beyond a single server for
| several months now.
|
| Pretty sure MosaicML also does this but I haven't used
| their offering.
|
| https://www.amazon.science/blog/scaling-to-trillion-
| paramete...
| nightski wrote:
| It baffles me to this day that Google never made TPUs more
| widely available. Then again it is Google...
| ls612 wrote:
| They probably saw TPUs as their moat...
| haldujai wrote:
| This has been my experience as well with TPUs and A100s. I
| haven't used H100s yet (OOM on 1) but I believe Nvidia's
| training throughput benchmarks on transformer workloads show
| about 2.5x over A100s.
|
| The effort to make (PyTorch) code run on TPUs is not worth it
| and my lab would rather rent (discounted) Nvidia GPUs than
| use free TRC credits we have at the moment. Additionally, at
| least in Jan 2023 when I last tried this, PyTorch XLA had a
| significant reduction in throughput so to really take
| advantage you would probably need to convert to Jax/TF which
| are used internally at Google and better supported.
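|
| For context, the PyTorch-on-TPU path goes through the torch_xla
| package; even a minimal sketch (pre-2.0 style API) shows the
| extra moving parts - lazy tensors and explicit step markers -
| that vanilla CUDA PyTorch doesn't have:
|
|     import torch
|     import torch_xla.core.xla_model as xm
|
|     device = xm.xla_device()      # a TPU core, as a device
|     x = torch.randn(128, 128, device=device)
|     y = (x @ x).sum()
|     xm.mark_step()                # compile/execute traced graph
|     print(y.item())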
| pjc50 wrote:
| People underestimate how meme-y both the stock market and the
| underlying customer market is. I don't think there's anything
| like the level of TPUs shipping as there are GPUs? If people
| end up with an "AI accelerator" in their PC, it would be quite
| likely to have NVIDIA branding.
| RobotToaster wrote:
| I've been wondering the same. With crypto we saw the adoption
| of ASICs pretty quickly, you would think we would see the same
| with AI.
| dahart wrote:
| This quote from Wednesday's TinyCorp article seems apropos:
|
| "The current crop of AI chip companies failed. Many of them
| managed to tape out chips, some of those chips even worked. But
| not a single one wrote a decent framework to use those chips.
| They had similar performance/$ to NVIDIA, and way worse
| software. Of course they failed. Everyone just bought stuff
| from NVIDIA."
|
| https://geohot.github.io//blog/jekyll/update/2023/05/24/the-...
| atonse wrote:
| I am not an ML expert, but as an observer, others have said
| that what Nvidia got right from the beginning was actually the
| software support: stuff like CUDA, good drivers, and supporting
| libraries from over a decade ago. The libraries and researchers
| all just build against those, and as a result everything works
| best on Nvidia cards.
| reesul wrote:
| As someone closer to this in the industry (embedded ML.. and
| trying to compete) I agree with the sentiment. Their software
| is good, I willingly admit. Porting a model to embedded is
| hard. With NVIDIA, you basically don't have to port. This has
| paid dividends for them, pun not intended.
|
| I don't really see the Nvidia monopoly on ML training
| stopping anytime soon.
| peepeepoopoo7 wrote:
| But their valuation is based on _forward_ (future) earnings,
| using an already obsolete technology.
| tombert wrote:
| Correct me if I'm wrong, but isn't OpenAI still using a ton
| of Nvidia tech behind the scenes? In addition to GPUs
| doesn't Nvidia also have dedicated ML hardware?
| jjoonathan wrote:
| Yes, and all that CUDA software is effectively a moat. ROCm
| exists, but after getting burned badly and repeatedly by
| OpenCL, I'm disinclined to bet on it. At best, my winnings
| would be avoiding the green tax; at worst, I waste months
| like I did on OpenCL.
|
| That said, AMD used to be in a dire financial situation,
| whereas now they can afford to fix their shit and actually
| give chase. NVIDIA has turned the thumbscrews very far, and
| they can probably turn them considerably further before
| researchers jump, but far enough to justify a 150x P/E? I
| have doubts.
| tombert wrote:
| The fact that I hadn't even heard of ROCm until reading your
| post indicates they've got a long way to go to catch up. I've
| heard of OpenCL, but I don't know anyone who actually uses
| it. I think Apple has something for GPGPU on Metal with
| performance shaders or compute shaders, but I don't know of
| anyone using that for anything either, at least not in ML/AI.
|
| It's a little irritating that Nvidia has so effectively
| monopolized the GPGPU market; a part of me wonders if the
| best that AMD could do is just make a CUDA-compatibility
| layer for AMD cards.
| jjoonathan wrote:
| If you look at the ROCm API, you'll see that it's pretty much
| exactly that: a CUDA compatibility layer. But an identical
| API means little if the functionality behind it has different
| quirks and caveats, and that's harder to assess. I am rooting
| for ROCm, but I can't justify betting on it myself, and I
| suspect most of the industry is in the same boat. For now.
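|
| The compatibility framing is visible even from Python: on
| ROCm builds of PyTorch, HIP deliberately hides behind the
| CUDA namespace (a hedged sketch of real behavior; details
| vary by build):
|
|     import torch
|
|     # On a ROCm build, "cuda" transparently maps to a HIP
|     # device, so unmodified CUDA-targeting code runs on AMD.
|     if torch.cuda.is_available():      # True on ROCm too
|         print(torch.version.hip)       # version string on
|                                        # ROCm, None on CUDA
|         x = torch.randn(1024, device="cuda")
|         print(x.sum())
|
| Identical surface, as noted above; the quirks live in what
| happens underneath.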
| atonse wrote:
| My question is, is it feasible for AMD to build an
| ahead-of-time compiler that transparently translates CUDA
| calls into whatever AMD's equivalent is, so things Just
| Work(tm)? They'd be heavily incentivized to do so. Or even
| put hardware CUDA translators directly into the cards?
|
| Or am I misunderstanding CUDA? I think of it as something
| like OpenGL/DirectX.
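|
| (For reference, AMD's existing approach is source-level
| rather than binary translation: the HIPIFY tools rewrite
| CUDA source into HIP. A toy sketch of the idea, with a
| deliberately tiny rename table; the real tools do far more
| than name substitution:)
|
|     # Toy HIPIFY: translate a few CUDA runtime names to HIP.
|     RENAMES = {
|         "cuda_runtime.h": "hip/hip_runtime.h",
|         "cudaMalloc": "hipMalloc",
|         "cudaMemcpy": "hipMemcpy",
|         "cudaFree": "hipFree",
|     }
|
|     def toy_hipify(source: str) -> str:
|         for cuda_name, hip_name in RENAMES.items():
|             source = source.replace(cuda_name, hip_name)
|         return source
|
|     print(toy_hipify("#include <cuda_runtime.h>"))
|     # -> #include <hip/hip_runtime.h>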
| nightski wrote:
| In what universe is it obsolete?
| jjoonathan wrote:
| In the universe of peepeepoopoo7.
| rfoo wrote:
| It is more expensive (in engineering cost) to port all the
| world's research (and your own) to the TPU of your choice
| than to just pay NVIDIA.
| villgax wrote:
| No provider apart from GCP has TPUs, and they aren't
| available for consumers to buy and experiment with. No one
| runs multi-day experiments on the cloud without deep pockets
| or company money, especially not PhD students or hobbyists.
| peepeepoopoo7 wrote:
| AWS has their own accelerators, and they're a much better
| value than their GPU instances.
| villgax wrote:
| Good luck getting frameworks and academics to test and
| export for this architecture.
| tverbeure wrote:
| A quick Google search brings up "AWS Trainium", but it's
| telling that I had never heard of it. And like TPUs, you
| can't plug a smaller version into your desktop PC.
| impish9208 wrote:
| https://archive.is/4XeAa
| tombert wrote:
| This is the only time in my entire life that I accurately
| predicted the stock market.
|
| About two months ago, I bought three shares of Nvidia stock. I
| noticed that no one appears to be doing serious AI/ML research
| with AMD hardware, and I also noticed that Nvidia's stock hadn't
| spiked yet with the rise of ChatGPT and Stable Diffusion.
|
| For once I was actually right about something in the stock
| market...About a dozen more accurate predictions and I'll finally
| make up the money I lost from cryptocurrency.
| kgwgk wrote:
| > About two months ago [...] and I also noticed that Nvidia's
| stock hadn't spiked yet with the rise of ChatGPT and Stable
| Diffusion.
|
| Two months ago the stock was 60% up since ChatGPT was released
| and 150% up since October's low.
| dpflan wrote:
| AMD had a price jump over the same date range. Yes, NVIDIA
| is higher, but it was already so in absolute terms.
|
| - NVIDIA:
| https://www.google.com/finance/quote/NVDA:NASDAQ?sa=X&ved=2a...
|
| - AMD:
| https://www.google.com/finance/quote/AMD:NASDAQ?sa=X&ved=2ah...
| tombert wrote:
| Yeah but I didn't buy AMD stock! Clearly I should have
| though.
| dpflan wrote:
| For sure, still, congrats on some gains.
| cspada wrote:
| And NVDA added more than AMD's entire market cap on that day
| alone, including the gains AMD had.
| gumballindie wrote:
| Doesn't that mean that AMD has room to grow faster than
| Nvidia, since Nvidia is already high?
| dmead wrote:
| This is what kept me from selling my shares when they were
| down.
| itsoktocry wrote:
| > _I also noticed that Nvidia's stock hadn't spiked yet with
| the rise of ChatGPT and Stable Diffusion._
|
| The stock was (and _is_) extremely expensive. Good luck to
| all buying this thing at 30x sales. It doesn't make any
| sense.
| tombert wrote:
| I suppose I could sell now and be happy with my $350
| profit...
| shrimp_emoji wrote:
| The problem with gamba is that 99% of people quit right before
| making it big.
|
| https://youtu.be/y28Diszaoo4
| jakeinspace wrote:
| Still kicking myself for selling the ~20 shares I bought in
| high school for a mere 100% gain back when the price was $40.
| tombert wrote:
| Hey, a win is a win, and profit is profit!
| edgyquant wrote:
| I bought 100 shares of AMD at $8 and sold at $16, thinking I
| was a genius.
| neogodless wrote:
| I bought 1000 shares of AMD for $2.50 and sold them at $4
| and thought I was pretty cool.
| pc86 wrote:
| As others have intimated, you'll never go bankrupt by selling
| [for a profit] too early. I put $10k into bitcoin when it was
| $1500, and sold it when it hit $4k. Yeah, I can do the math
| at $60k and feel bad, but realistically if I had the
| mentality to hold it from $1500 to $60k I would have waited
| for it to hit $80k and I'd feel objectively worse today about
| those paper losses, albeit with a little bit more money in
| the bank.
|
| At the end of the day, doubling your money is exceedingly
| rare, especially on any single security; no sense feeling bad
| you didn't 10x it.
| mikestew wrote:
| You made a profit, no self-kicking allowed. Otherwise Old Man
| Mikestew is going to tell us stories about how he rode $STOCK
| all the way up, _didn't_ sell, and rode it all the way back
| down to where he bought it, and then watched it dip below
| that. (Please note that "stories" is plural; slow learner, he
| is.)
|
| Don't be like me: never, never, never, never feel bad about
| selling shares for a profit. Sell it, go on about your day
| (IOW, _quit looking at $STOCK_, you don't own it anymore),
| and take the spouse/SO out for a nice dinner if you made some
| serious bank.
|
| Which reminds me that now might be a good time to unload that
| NVDA I've been holding. I'm not _completely_ unteachable.
| cinntaile wrote:
| Just don't unload all of them. In case the stock goes
| bananas for whatever reason.
| mikestew wrote:
| What I'm really going to do is put a trailing stop[0] on it
| in case it _does_ continue to go bananas, and the stop can
| catch it on the way back down in case the banana scenario
| doesn't happen. :-)
|
| [0] Somewhat oversimplified for discussion purposes.
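|
| For anyone unfamiliar, the mechanics in toy form (a sketch
| only; real stop orders execute against live quotes at the
| broker, not like this):
|
|     def trailing_stop(prices, trail_pct=0.10):
|         """Fire when price falls trail_pct below the running
|         high-water mark; the stop ratchets up, never down."""
|         high_water = prices[0]
|         for i, p in enumerate(prices):
|             high_water = max(high_water, p)
|             if p <= high_water * (1 - trail_pct):
|                 return i, p      # stop triggers: sell here
|         return None              # never triggered
|
|     # Rides the run-up, sells on the way back down:
|     print(trailing_stop([100, 120, 150, 140, 133]))
|     # -> (4, 133)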
| esotericimpl wrote:
| I'm still holding ~600 shares of NVDA from a $1,200
| investment back when I thought it was amazing that this
| company made cards that made Quake 2 look incredible.
|
| A ~$300k position from $900 invested.
| tombert wrote:
| I suppose it's good you didn't invest in 3dfx then.
| binarysolo wrote:
| Just... don't. Enjoy the profits when you get 'em. :)
|
| This forum is filled with people who sold all sorts of tech
| stocks way too early (or too late), and with people who
| nerded out over things, tossed them, and then watched them
| magically gain tons of value over time. I'm thinking of all
| my super early CCGs that I tossed when cleaning house, the 20
| bitcoin I mined for fun way back in 2012 or whenever and then
| deleted from my laptop (which I then sold on eBay for $100),
| the 10k of AAPL I bought for like $5 and sold for $10, etc.
|
| Same with all the early job opps and whatnot too - but we're
| the sum of our life choices till now, and that's OK. :)
| gumballindie wrote:
| I'd also make a bet on the underdogs. For instance, AMD is
| only a decent software update away from snatching market
| share from Nvidia. I am surprised they are not hiring devs
| like crazy right now to beef up their AI GPU capability.
| rcme wrote:
| Just out of curiosity, what was your crypto hypothesis?
| beaned wrote:
| But how funny would it be if AI ends up pumping crypto,
| crypto being the money an AI can manage directly and
| instantly from your computer?
| ksec wrote:
| >and I also noticed that Nvidia's stock hadn't spiked yet with
| the rise of ChatGPT and Stable Diffusion.
|
| I think plenty have noticed, but can't get their heads
| around investing in a company at a 150x P/E.
| qeternity wrote:
| This is because you're looking at trailing P/E instead of
| forward.
|
| NVDA's forward P/E is still eye-watering, but it's much
| lower.
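|
| The arithmetic difference, with purely hypothetical numbers
| (these are NOT NVDA's actual figures):
|
|     price = 380.00          # hypothetical share price
|     eps_trailing = 2.50     # hypothetical last-12-months EPS
|     eps_forward = 7.50      # hypothetical next-12m estimate
|
|     print(price / eps_trailing)  # trailing P/E: 152x
|     print(price / eps_forward)   # forward P/E:  ~51x
|
| Same price; the forward multiple only looks cheaper because
| the expected E is so much bigger.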
| ksec wrote:
| The forward P/E isn't that much different, even if you are
| expecting 20%+ net income growth YoY. I don't see it being
| "much lower".
| danielmarkbruce wrote:
| People are expecting more than a 20% net income increase.
|
| Whenever a P/E looks expensive, it's because people are
| expecting very large increases in E. They aren't just trying
| to buy expensive stuff.
| yCombLinks wrote:
| Many, many people are not looking at PE at all. They are
| buying into momentum alone, or just buying because it's
| in the news a lot. AI may change the world, but it's
| currently a bubble figuring out how to inflate.
| danielmarkbruce wrote:
| Implied in the above is that this applies to anyone paying
| attention to E. Practically everyone understands that for
| every conceivable reason there is a buyer buying for that
| reason somewhere.
|
| You may think it's a bubble. It's not obviously a bubble to
| anyone who understands the current capabilities and how
| locked in Nvidia is with their GPUs and CUDA. It might end up
| being expensive in retrospect. It might not.
| sdfghswe wrote:
| Doesn't make any difference whatsoever.
| intelVISA wrote:
| Yep, as soon as SD dropped I went bullish on NVDA.
|
| What's funny is that we on HN know there's no magic inside
| these chips; a sufficiently smart foundry could easily
| cripple Nvidia overnight... yet where's the nearest VC fund
| for ASICs?
| kramerger wrote:
| Did you say foundry? Nations (including the US and Germany)
| have tried to outsmart TSMC, and yet here we are.
|
| If you meant outsmart Nvidia, Google's TPU is already more
| efficient, but a GPU is much more than an efficient design.
| [deleted]
| the88doctor wrote:
| I hope you sold your Nvidia and locked in those profits.
| villgax wrote:
| I mean the 5-year licenses that come bundled with the H100,
| just because you technically aren't supposed to use
| consumer-class GPUs in a data center... whoever came up with
| this is definitely following in the footsteps of Adobe's
| licensing shenanigans.
| oefrha wrote:
| It's not like consumers are getting great deals. They're
| milking the mid-range real hard; see the recent 4070 and
| 4060 releases.
| mensetmanusman wrote:
| Don't hate the neurons, hate the game.
| nixcraft wrote:
| Meanwhile, Nvidia Short Sellers Lose $2.3 Billion in One Day as
| Stock Soars
| https://www.bloomberg.com/news/articles/2023-05-25/nvidia-sh...
| dbcurtis wrote:
| Shorting into an irrational bull run is a great way to learn an
| expensive lesson on the difference in power between logic and
| emotion.
| methodical wrote:
| "The market can stay irrational longer than you can stay
| solvent." - John Maynard Keynes
|
| Particularly applicable here, couldn't resist myself.
| mbesto wrote:
| The super interesting thing about NVDA is that you can bet on:
|
| - Crypto
|
| - AI
|
| - Gaming / Entertainment
|
| - Self driving cars
|
| - VR / Metaverse (whatever that is)
|
| I'm very bullish on the company.
| tester756 wrote:
| Self driving cars?
|
| Don't they have specialized chips?
| mbesto wrote:
| https://blogs.nvidia.com/blog/category/auto/
|
| PS - you can literally go to https://www.nvidia.com/en-us/
| and read the menu to see what they do...
| rapsey wrote:
| Frankly, gaming is still their only reliable base. Their AI
| lead is going to get competed away from every angle within
| 12-24 months. The AI boom is very much like the crypto boom.
| mbesto wrote:
| The best part about this: if you have that much conviction,
| you can buy 2025 puts and make a ton of money if you're
| right. Good luck!
| claytonjy wrote:
| Which hardware/software pairing do you see dethroning NVIDIA
| that quickly?
| HDThoreaun wrote:
| Honestly, I'd watch for Intel. No chance their GPUs are as
| good within a couple of years, but they could easily be 80%
| as good for half the price or less. Intel is willing to lose
| money on this for a while to grow their GPU reputation. If
| they're able to create that product and reliably stock it,
| Nvidia will have to cut their prices.
|
| Nvidia effectively has no competition right now due to AMD's
| software issues. It's hard to see how that can continue with
| how big their market cap is. Someone will be able to create
| a competitive product.
| 01100011 wrote:
| Gaming revenue is down. Check their earnings release.
| scottiebarnes wrote:
| What signals to you that other chip makers are capable of
| competing in learning and inference computation?
|
| We know they'll be motivated, but whether they can actually
| compete is the question.
| tikkun wrote:
| The geohot post has some good explanations for why this is
| happening.
|
| https://geohot.github.io//blog/jekyll/update/2023/05/24/the-...
|
| CUDA works; ROCm doesn't work well. Very few people want to
| run Stable Diffusion inference, fine-tune LLaMA, or train a
| large foundation model on AMD cards.
|
| OpenAI has put in some work on Triton, Modular is working on
| Mojo, and tiny corp is working on their alternative.
|
| Until some of those alternatives work as well as CUDA, people
| will mostly choose to buy Nvidia cards.
|
| The monopoly is under attack from multiple angles, but they'll be
| able to print some good cash in the (potentially long) meantime.
|
| Oh, and there are still significant supply shortages at many
| cloud providers. And now Nvidia's making more moves toward
| renting GPUs directly. It'll be interesting to see how long
| it takes for their supply to meet demand.
| TheCaptain4815 wrote:
| I'm surprised I didn't see this on the HN frontpage a few
| days ago, but it's a very interesting read.
|
| Edit: Nevermind, found a huge thread from 2 days ago Lol.
| coolspot wrote:
| Just use hckrnews.com; it shows frontpage posts from
| previous days.
___________________________________________________________________
(page generated 2023-05-26 23:01 UTC)