Subj : OpenAI's Sam Altman is dreaming of running 100 million GPUs in the future
To   : All
From : TechnologyDaily
Date : Sat Jul 26 2025 13:15:09

OpenAI's Sam Altman is dreaming of running 100 million GPUs in the future - 100x more than it plans to run by December 2025

Date: Sat, 26 Jul 2025 12:03:00 +0000

Description: OpenAI is aiming for over a million GPUs by year-end, with CEO Sam Altman teasing a future 100x expansion despite mounting infrastructure and financial concerns.

FULL STORY
======================================================================

- Sam Altman says OpenAI will soon pass 1 million GPUs and aim for 100 million more
- Running 100 million GPUs could cost $3 trillion and break global power infrastructure limits
- OpenAI's expansion into Oracle and TPUs shows growing impatience with current cloud limits

OpenAI says it is on track to operate over one million GPUs by the end of 2025, a figure that already places it far ahead of rivals in terms of compute resources. Yet for CEO Sam Altman, that milestone is merely a beginning. "We will cross well over 1 million GPUs brought online by the end of this year," he said. The comment, delivered with apparent levity, has nonetheless sparked serious discussion about the feasibility of deploying 100 million GPUs in the foreseeable future.

A vision far beyond current scale

To put this figure in perspective, Elon Musk's xAI runs Grok 4 on approximately 200,000 GPUs, which means OpenAI's planned million-unit scale is already five times that number. Scaling this to 100 million, however, would involve astronomical costs, estimated at around $3 trillion, and pose major challenges in manufacturing, power consumption, and physical deployment (see the rough back-of-envelope sketch at the end of this story). "Very proud of the team but now they better get to work figuring out how to 100x that lol," Altman wrote.

While Microsoft's Azure remains OpenAI's primary cloud platform, the company has also partnered with Oracle and is reportedly exploring Google's TPU accelerators. This diversification reflects an industry-wide trend, with Meta, Amazon, and Google also moving toward in-house chips and greater reliance on high-bandwidth memory (HBM).

SK Hynix is one of the companies likely to benefit from this expansion - as GPU demand rises, so does demand for HBM, a key component in AI training. According to a data center industry insider, "In some cases, the specifications of GPUs and HBMs...are determined by customers (like OpenAI)...configured according to customer requests." SK Hynix's performance has already seen strong growth, with forecasts suggesting a record-breaking operating profit in Q2 2025.

OpenAI's collaboration with SK Group appears to be deepening. Chairman Chey Tae-won and CEO Kwak No-jung met with Altman recently, reportedly to strengthen their position in the AI infrastructure supply chain. The relationship builds on past events such as SK Telecom's AI competition with ChatGPT and participation in the MIT GenAI Impact Consortium.

That said, OpenAI's rapid expansion has raised concerns about financial sustainability, with reports that SoftBank may be reconsidering its investment. If OpenAI's 100-million-GPU goal materializes, it will require not just capital but major breakthroughs in compute efficiency, manufacturing capacity, and global energy infrastructure. For now, the goal seems aspirational - an audacious signal of intent rather than a practical roadmap.
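Back-of-envelope sketch: as a rough, illustrative sanity check on the figures above (not from OpenAI or the original report), the arithmetic behind the $3 trillion and power-grid claims fits in a few lines of Python. The per-GPU cost below is simply what a ~$3 trillion total implies for 100 million units, and the ~1 kW all-in draw per deployed GPU is an assumed figure covering the accelerator plus cooling, networking, and facility overhead.

# Back-of-envelope check; the constants are assumptions, not OpenAI figures.
GPU_COUNT = 100_000_000        # Altman's aspirational 100 million GPUs
COST_PER_GPU_USD = 30_000      # implied by the ~$3 trillion estimate
POWER_PER_GPU_KW = 1.0         # assumed all-in draw per deployed GPU

total_cost_usd = GPU_COUNT * COST_PER_GPU_USD
total_power_gw = GPU_COUNT * POWER_PER_GPU_KW / 1_000_000   # kW -> GW

print(f"Estimated capital cost: ${total_cost_usd / 1e12:.1f} trillion")
print(f"Estimated continuous power draw: {total_power_gw:.0f} GW")

On those assumptions the fleet would draw on the order of 100 GW continuously, roughly comparable to the entire generating capacity of a large industrialized country, which is why the story treats global power infrastructure, and not just capital, as a binding constraint.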
Via Tom's Hardware

======================================================================
Link to news story:
https://www.techradar.com/pro/openais-sam-altman-is-dreaming-of-running-100-million-gpus-in-the-future-100x-more-than-what-it-plans-to-run-by-december-2025

--- Mystic BBS v1.12 A47 (Linux/64)
 * Origin: tqwNet Technology News (1337:1/100)