[HN Gopher] NSF and Nvidia award Ai2 $152M to support building a...
___________________________________________________________________
NSF and Nvidia award Ai2 $152M to support building an open AI
ecosystem
Author : _delirium
Score : 148 points
Date : 2025-08-14 13:08 UTC (9 hours ago)
(HTM) web link (allenai.org)
(TXT) w3m dump (allenai.org)
| datadrivenangel wrote:
| Suggest changing the title to:
|
| NSF and NVIDIA award Ai2 $152M to support building a fully open
| AI ecosystem
|
| To better indicate that this is not related to OpenAI and that
| the group intends to release everything needed to train their
| models.
| nativeit wrote:
| Now where have I heard this before...
| hobofan wrote:
| If Nvidia were interested in "open" AI, they would spend time
| collaborating with AMD and others to build an (updated) open
| alternative to CUDA. That's probably the most closed part of the
| whole stack right now.
| bongodongobob wrote:
| That's AMD's fault, not Nvidia's.
| kookamamie wrote:
| Indeed. This is throwing pennies at virtue-signaling openness.
| sounds wrote:
| Nvidia is interested in commoditizing their complements. It's a
| business strategy to decrease the power of OpenAI (for
| instance).
|
| Nvidia dreams of a world where there are lots of "open"
| alternatives to OpenAI, like there are lots of open game
| engines and lots of open software in general. All buying closed
| Nvidia chips.
| arthurcolle wrote:
| Why is OpenAI a threat to Nvidia? They are still highly
| dependent on those GPUs
| grim_io wrote:
| Google shows that Nvidia is not necessary. How long until
| more follow?
| NitpickLawyer wrote:
| Tbf, goog started a long time ago with their TPUs. And
| they've had some bumps along the way. It's not as easy as
| one might think. There are certainly efforts to produce
| alternatives, but it's not an easy task. Even the ASIC-
| like providers such as Cerebras and Groq are having problems
| with large models. They seemed very promising with SLMs,
| but once MoEs became a thing they started to struggle.
| arthurcolle wrote:
| I agree in principle, but you can't just yolo-fab TPUs and
| leapfrog Google.
| ivape wrote:
| I don't think we can say that until we hear how Genie3
| and Veo3 were trained. My hunch is that the next-gen
| multi-modal models that combine world, video, text, and
| image modalities can only be trained on the best chips.
| tomrod wrote:
| Two concepts
|
| - Monopsony is the inverse of Monopoly -- one buyer.
| Walmart is often a monopsony for suppliers (exclusive or
| near exclusive).
|
| - Desire for vertical integration and value extraction,
| related to #1 but with some additional nuances
| next_xibalba wrote:
| Who is the one buyer in the Nvidia scenario? How would
| that benefit Nvidia?
| KaoruAoiShiho wrote:
| It would hurt Nvidia, not benefit it. That's why Nvidia
| spends a lot of effort to prevent that from happening,
| and it's not the case currently.
|
| They really need to avoid the situation in the console
| market, where having only 3 customers means almost no
| margins on console chips.
| next_xibalba wrote:
| Prior to the A.I. boom, Nvidia had a much, much more
| diverse customer base in terms of revenue mix. According
| to their 2015 annual report[1], their revenues were
| spread across the following revenue segments: gaming,
| automotive, enterprise, HPC and cloud, and PC and mobile
| OEMs. Gaming was the largest segment and contributed less
| than 50% of revenues. At this time, with a diverse
| customer base, their gross margins were 55.5%. (This is a
| fantastic gross margin in any industry outside software).
|
| In 2025 (fiscal year), Nvidia only reported two revenue
| segments: compute and networking ($116B revenue) and
| graphics ($14.3B revenue). Within the compute and
| networking segment, three customers represented 34% of
| _all_ revenue. Nvidia's gross margins for fiscal 2025
| were _75%_ [2].
|
| In other words, this hypothesis doesn't fit at all. In
| this case, having more concentration in extremely deep
| pocketed customers competing over a constrained supply of
| product has caused margins to skyrocket. Moreover, GP's
| claim of monopsony doesn't make any sense. Nvidia is not
| at any risk of having a single buyer, and with the recent
| news that sales to China will be allowed, the customer
| base is going to become more diverse, creating even more
| demand for their products.
|
| [1] https://s201.q4cdn.com/141608511/files/doc_financials
| /annual...
|
| [2] https://s201.q4cdn.com/141608511/files/doc_financials
| /2025/a...
| KaoruAoiShiho wrote:
| Nobody said this was the case...
|
| The only example I used was the console market, which
| has been ruined by this issue. Nvidia largely left that
| market because it was that bad.
| tomrod wrote:
| I'm not sure your analysis is apples to apples.
|
| Prior to the AI boom, the quality of GPUs slightly
| favored Nvidia, but AMD was a viable alternative. Also,
| there are scale differences between 2025 and before the
| AI boom -- simply put, there was more competition in the
| market for a smaller bucket, and favorable winds on
| supplier production costs. Further, Nvidia just has
| better software tooling through CUDA.
|
| Since 2022 and the rise of multi-billion-parameter
| models, Nvidia's CUDA has had a lock on the business
| side, but Nvidia faces rising costs from terrible US
| trade policy, the post-COVID rebound and geopolitical
| realignments, wage inflation, and rushed/buggy power
| supplies as the default supply option, all of which have
| made their position quite untenable -- mostly CUDA is
| their saving grace. If AMD got their druthers about them
| and focused, they'd potentially unseat Nvidia. But until
| ROCm is at least _easy_, nothing will happen there.
| patates wrote:
| Maybe if they grow too much they'll develop their own chips.
| Also, if one company wins, as in wipes out the competition,
| they'd have much less incentive to train more and more
| advanced models.
| vlovich123 wrote:
| If OpenAI becomes the only buyer, they can push around
| Nvidia and invest in alternatives to blunt their power. If
| OpenAI is one of many customers, then they're not in a strong
| bargaining position and Nvidia gets to set the terms.
| victorbjorklund wrote:
| One large customer has more bargaining power than many
| big ones. And the risk is that OpenAI would try to make
| their own chips if they capture the whole market.
| someone7x wrote:
| > commoditizing their complements
|
| Feels like a modern euphemism for "subjugate their
| neighbors".
| jvanderbot wrote:
| Business has always been a civilized version of war, and
| one which will always capture us in similar ways, so I
| _guess_ wartime analogies are appropriate?
|
| Still, it feels awfully black and white to phrase it that
| way when this is a clear net good and a better alignment
| of incentives than before.
| skybrian wrote:
| No, it's encouraging competition and cost-cutting in a part
| of the market they don't control. This can be a reason for
| companies to support open source, for example.
|
| Meanwhile, the companies running data centers will look for
| ways to support alternatives to Nvidia. That's how _they_
| keep costs down.
|
| It's a good way to play companies off each other, when it
| works.
| amelius wrote:
| But AI depends on a small number of tensor operators,
| primitives which can be relatively easily implemented by
| competitors, so compute is very close to being a commodity
| when it comes to AI.
|
| A company like Cerebras (founded in 2015) proves that this is
| true.
|
| The moat is not in computer architecture. I'd say the real
| moat is in semiconductor fabrication.
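|
| To illustrate how small that operator set is, here is a rough
| sketch of scaled dot-product attention in plain NumPy (names
| and shapes are illustrative, not tied to any particular
| framework); it reduces to matmuls, a softmax, and elementwise
| ops:
|
|     # Illustrative sketch: the handful of primitives behind
|     # transformer attention -- matmul, softmax, elementwise ops.
|     import numpy as np
|
|     def softmax(x, axis=-1):
|         x = x - x.max(axis=axis, keepdims=True)
|         e = np.exp(x)
|         return e / e.sum(axis=axis, keepdims=True)
|
|     def attention(q, k, v):
|         # q, k, v: (seq_len, d) arrays
|         scores = q @ k.T / np.sqrt(q.shape[-1])  # matmul + scale
|         return softmax(scores) @ v               # softmax + matmul
|
|     q = k = v = np.random.randn(4, 8)
|     print(attention(q, k, v).shape)  # (4, 8)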
| sounds wrote:
| Have you ever tried to run a model from huggingface on an
| AMD GPU?
|
| Semiconductor fabrication is a high risk business.
|
| Nvidia invested heavily in CUDA and out-competed AMD (and
| Intel). They are working hard to keep their edge in
| developer mindshare, while chasing hardware profits at the
| same time.
| amelius wrote:
| > Have you ever tried to run a model from huggingface on
| an AMD GPU?
|
| No, but seeing how easily they run on Apple hardware, I
| don't understand your point, to be honest.
| phkahler wrote:
| >> Have you ever tried to run a model from huggingface on
| an AMD GPU?
|
| Yes. I'd never touched any of that stuff and then one day
| decided to give it a shot. Some how-to told me how to run
| something on Linux which had a choice of a few different
| LLMs. I picked one of the small ones (under 10B) and had
| it running on my AMD APU inside of 15 minutes. The
| weights were IIRC downloaded from huggingface. The
| wrapper was not. Anyway, what's the problem?
|
| BTW that convinced me that small LLMs are basically
| worthless. IMA need to go bigger next time. BTW my "old"
| 5700G has 64GB of RAM, next build I'll go at least double
| that.
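|
| For reference, a minimal sketch of what the direct Hugging
| Face route can look like (not the exact setup described
| above; the model name is just an example, and it assumes a
| ROCm build of PyTorch, which exposes AMD GPUs through the
| usual "cuda" device string):
|
|     # Hypothetical sketch: run a small Hugging Face model on an
|     # AMD GPU via a ROCm build of PyTorch (shows up as "cuda").
|     import torch
|     from transformers import pipeline
|
|     device = "cuda" if torch.cuda.is_available() else "cpu"
|     generate = pipeline(
|         "text-generation",
|         model="Qwen/Qwen2.5-1.5B-Instruct",  # example small model
|         device=device,
|     )
|     out = generate("Explain ROCm in one sentence.",
|                    max_new_tokens=64)
|     print(out[0]["generated_text"])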
| bilbo0s wrote:
| _which can be relatively easily implemented by competitors_
|
| Oh my.
|
| Please people, _try_ to think back to your engineering
| classes. Remember the project where you worked with a group
| to design a processor? I do. Worst semester of my life.
| (Screw whoever even came up with that damn real analysis
| math class.) And here's the kicker, I know I'll be dating
| myself here, but all I had to do for my part was tape it
| out. Still sucked.
|
| Not sure I'd call the necessary processor design work here
| "relatively easy"? Even for highly experienced, extremely
| bright people, this is not "relatively easy".
|
| Far easier to make the software a commodity. Believe me.
| amelius wrote:
| To be totally honest, the only thing I can distill from
| this is that perhaps you should have picked an education
| in CS instead of EE.
|
| I mean this is like saying that a class for building
| compilers sucked. Still, companies make compilers, and
| they aren't all >$1B companies. In fact, hobbyists make
| compilers.
| bilbo0s wrote:
| I did study CS as well.
|
| That you are comparing designing and writing a compiler
| with designing and manufacturing a neural processor is
| only testimony to the futility of my attempt to impress
| on everyone the difference. So I'll take my leave.
|
| You have a good day sir or ma'am.
| amelius wrote:
| But I'm actually saying that manufacturing is the hard
| part ...
| ants_everywhere wrote:
| > The moat is not in computer architecture. I'd say the
| real moat is in semiconductor fabrication.
|
| In the longer run, anything that is very capital intensive,
| affects entire industries, and can be improved with large
| amounts of simulation will not be a moat for long. That's
| because you can increasingly use AI to explore the design
| space.
|
| Compute is not a commodity yet, but it may be in a few years.
| Semiconductor fab will take longer, but I wouldn't be
| surprised to see parts of the fabrication process
| democratized in a few years.
|
| Physical commodities like copper or oil can't be improved
| with simulation so they don't fall under this idea.
| PeterStuer wrote:
| I thought they assumed AI hardware would become commoditized
| sooner rather than later, and their play was to sell complete
| vertically integrated AI solution stacks, mainly a software
| and services play?
| sim7c00 wrote:
| You're not wrong. Opening up CUDA would be a real power move. I
| think people would mind some of their other crap practices a
| lot less.
| colechristensen wrote:
| They could also publish all of their source code, die designs,
| put their patents in the public domain and go live on a beach
| somewhere and fish for a living.
|
| CUDA is what they sell; it makes more sense for them to charge
| for hardware and give the hardware-locked software away for
| free.
| latchkey wrote:
| Came here to say this. It is the basis for my entire business.
| It isn't just CUDA though, it is the hardware layer too. That
| is why we are exclusive to AMD AI hardware.
|
| We have dozens of companies building foundational models and
| they all target a single vendor to supply all the hardware?
| Make it make sense! Yes, I know models run on AMD too, but the
| fact is, Nvidia, who's clearly doing a great job, is a literal
| monopoly. We need viable alternatives.
|
| This deal was done with Cirrascale, who are great people. It is
| important to point out that they are also one of the 13
| official AMD Cloud Partners. I'm on the list too.
| emsign wrote:
| That's because Nvidia is in the business of selling chips and
| compute time. All they care about is that as many people on
| Earth as possible become dependent on AI running on Nvidia
| hardware.
| brunohaid wrote:
| Maybe that'll help them hire someone who can at least respond to
| S2 API key requests...
|
| Being open is great, but if over the course of 6 months 3
| different entities (including 2 well-known universities) apply
| and send more than a dozen follow-ups to 3 different "Reach out
| to us!" emails with exactly 0 responses, the "open" starts
| sounding like it's coming from Facebook.
| zoobab wrote:
| "Open" like an open source FPGA implementation of their chips?
| pmdr wrote:
| So basically Nvidia handing out cash to itself. <insert Obama
| medal meme>
| lgats wrote:
| Or handing out free hardware? With the side effect of investing
| in open-source LLMs in their ecosystem.
| jeffreysmith wrote:
| Not sure what's with the HN tone on this announcement. AI2 are
| really some of the best people around for creating truly open
| artifacts for the whole ecosystem. Their work on OLMo and Molmo
| is some of the most transparent and educational material you can
| find on model building. This is just great news for everyone.
| Guthur wrote:
| Maybe because many of us are not from the US. The stated goal
| is US dominance of the AI field, and sorry if the rest of us
| don't see that as a good thing, or as particularly open.
| FirmwareBurner wrote:
| _> The stated goal is US dominance of the AI field_
|
| Any country tries to dominate any field if it can; that's
| just human nature. Why is that a bad thing?
|
| That constant competition for superiority between nations is
| how humanity has evolved from hunter gatherer to having
| tractors, microwave ovens, airplanes, internet and
| penicillin.
| byteknight wrote:
| As an American, I obviously can get behind it, but I can
| easily see how a declared goal of superiority over others
| would rub those others the wrong way (and possibly prevent
| their contribution).
|
| [Insert xkcd new standard image here]
| FirmwareBurner wrote:
| _> a declared goal of superiority over others would rub
| those others the wrong way_
|
| So what? Does that change anything about how things work
| in reality? Everyone knows it, so why pussyfoot around
| it? Why are people nowadays so sensitive about stating
| the truth of how things work? Have people been coddled so
| much that they can't handle reality? A good life lesson
| is that the world does not revolve around your feelings.
| Guthur wrote:
| It's not my feelings, mate. If you don't live outside the
| US and have not been subjected to its unipolar attitude,
| you will probably never understand, and there is
| literally nothing I can say that will convince you of the
| objective reality the rest of us face.
| FirmwareBurner wrote:
| Sorry, I wasn't talking about you specifically, but the
| general "you" as in you the reader.
| laughingcurve wrote:
| As an American researcher, I can assure you that the
| Chinese superiority and behavior in the field is
| certainly ENCOURAGING my contributions.
| Herring wrote:
| Yes, _competition_ is good. Monopoly is bad. A more
| distributed power structure is much better for overall
| progress, and even for the monopolist in the long run (Ex:
| Intel).
| FirmwareBurner wrote:
| _> Monopoly is bad. _
|
| So what do you propose? Should the US stop development
| till other countries catch up?
| Herring wrote:
| Nah, I'd say just do more anti-monopoly, anti-inequality
| work. Probably start internally; that's a massive enough
| task on its own (e.g. breaking up big tech). Assist other
| countries, e.g. with aid, if (and only if) they are doing
| the same. This is a big topic; ask your favorite frontier
| LLM about it.
| FirmwareBurner wrote:
| Unless China does the same, that's an unrealistic ask.
| That would be like doing nuclear disarmament where only
| you disarm and everyone else gets to keep their nukes.
| Herring wrote:
| Your "nukes" are leaking into the water supply.
| Inequality shows up in a million different ways that
| Americans don't fully understand yet, e.g. inflation
| (dominant companies increasing profits), teacher
| shortages (low wages), student debt (not an issue for the
| wealthy, so why fix it), housing prices (corporate
| landlords, exclusionary zoning), layoffs (despite record
| profits), etc. This situation (Trump/Musk/Bezos taking
| most of the gains) is just not long-term stable, and if
| any other country wants to do the same to themselves, let
| them. The longer it goes, the harder it will be to fix.
|
| Again, go have this discussion with the LLM you trust;
| it's much more informative.
| philipkglass wrote:
| The Allen Institute for Artificial Intelligence projects so
| far have been very open. They are open about the trained
| models, the inference code, the training data sets, and the
| training code. A research group from any country can pick up
| where AI2 left off if they want to try a different approach
| or extension. I want to live in a world where there are many
| models near the top of leader boards, from many different
| research groups and countries, and I think that AI2 helps
| enable that.
|
| The stated "US dominance" goal just pays lip service to what
| appeals to the funders, kind of like how supercomputing
| projects traditionally claim that they contribute to curing
| disease or producing clean energy. (Even if it's something
| far removed from concrete applications, like high fidelity
| numerical simulations of aqueous solutions.)
| laughingcurve wrote:
| Good luck trying to raise money from a NATIONAL science
| foundation without it being in the NATIONAL interest.
| Guthur wrote:
| Of course you can justify this, as people have, but you can't
| then blame the rest of us non-US citizens for not aligning
| with that goal. The US is only a small portion of the global
| population, and the government itself has a long history of
| stamping on the rest of us.
| insane_dreamer wrote:
| But better for the rest of the world than private US tech
| companies dominating.
| nativeit wrote:
| This just in: AI2 pivots to a for-profit model and seeks
| venture capital funding.
|
| Oops, sorry, that's next year's news. Anyway, this is all
| ringing very familiar.
| khalic wrote:
| People seem to be missing the fact that Ai2 is the Allen
| Institute for AI, a nonprofit, not a company.
| NitpickLawyer wrote:
| And they've already released open source models, with data and
| training code. They're definitely the good guys here.
| cruffle_duffle wrote:
| I can't wait until I can run this shit locally without spending
| $10,000 on clusters of GPUs. The models will be trained using
| some distributed P2P-like technology that "The Man" can't stop.
|
| Imagine running a model trained by the degenerates on 4chan.
| Models that encourage you to cheat on your homework, happily
| crank out photos of world leaders banging farm animals, and
| gleefully generate the worst gore imaginable. And best of all,
| any junior high schooler on the planet can do it.
|
| That's how you know this technology has arrived. None of this
| sanitized, lawyer approved, safety weenie certified, three letter
| agency authorized nonsense we have now. Until it's all local,
| it's just an extension of the existing regime.
| philipkglass wrote:
| _Imagine running a model trained by the degenerates on 4chan._
|
| It has been done before:
|
| https://en.wikipedia.org/wiki/GPT4-Chan
|
| https://archive.org/details/gpt4chan_model_float16
| latchkey wrote:
| Only $10k? We spend a lot more than that! =)
| yalogin wrote:
| After reading through the article I couldn't understand what an
| open AI ecosystem is. Are they talking about hardware or
| software? If it's software, we already have open-source
| models; are they going to create open-source vertical
| integrations?
| curious_cat_163 wrote:
| We don't really have very many open source models. We have
| "open weights" models. Ai2 is one of the very few labs that
| actually make their entire training/inference code AND datasets
| AND training run details public. So this investment is a
| welcome step.
|
| Congratulations to the team at Ai2!
| yalogin wrote:
| Ah thanks for the nuance!
| big_toast wrote:
| Some people will be too young to know the "commoditize your
| complements"[1] wisdom of yesteryear. It's hard for me to tell
| if consumers end up net ahead after things settle.
|
| I'm surprised we haven't heard about OpenAI pushing a Facebook-
| style Open Compute project, or an ARM-like joint venture
| (Acorn, Apple, VLSI), or similar for the stack below them.
|
| [1]: https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/
| gigatexal wrote:
| I guess that makes sense: make Nvidia hardware run stuff so well
| you buy more of it. That's all CUDA is: a way to get people to
| buy more CUDA-capable hardware, aka Nvidia cards.
| nativeit wrote:
| Are we sure this isn't Nvidia's play to make Ai2 what OpenAI is
| for Microsoft? Start with a bunch of non-profit, research-for-
| the-good-of-humanity promises before closing ranks and
| soliciting venture capital for its for-profit subsidiary?
|
| I don't want to be cynical, it's just that the world has left
| very few options otherwise.
___________________________________________________________________
(page generated 2025-08-14 23:01 UTC)