[HN Gopher] What Nvidia's Blackwell efficiency gains mean for DC...
___________________________________________________________________
What Nvidia's Blackwell efficiency gains mean for DC operators
Author : rntn
Score : 34 points
Date : 2024-03-27 15:31 UTC (7 hours ago)
(HTM) web link (www.theregister.com)
(TXT) w3m dump (www.theregister.com)
| gwbas1c wrote:
| "DC" is very confusing here. "DC" normally means "Direct
| Current," not "Data Center."
|
| I recommend updating the title. (And if anyone has connections
| back to The Register, suggest that they update the original
| article too.)
| ginko wrote:
| Could also mean operators from the District of Columbia.
| astrodust wrote:
| At what point do you use the waste heat to generate electricity?
|
 | 120 kW is not an insignificant amount of power.
| 2OEH8eoCRo0 wrote:
| I'm not sure it falls under cogeneration but I suppose it might
| if your waste heat can be captured and used to power the GPUs.
|
| https://www.energy.gov/eere/iedo/combined-heat-and-power-bas...
| aaronblohowiak wrote:
| I am not a physicist. In my (mis)understanding of things, the
| work you can make heat do comes from the difference in
| temperature, with a bigger difference allowing more work to be
| done efficiently. Unfortunately, the chips are kept pretty cool
| in order to stay in their operating range, so you end up with a
| large volume of warm but not hot stuff. This is hard to turn
| into electricity. It can be good for things where you want
| stuff to be warm (like heating homes, etc) but not great for
| electricity generation.
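The thermodynamics in the comment above can be made concrete with the Carnot limit, the theoretical maximum fraction of heat convertible to work between two temperatures. The coolant and ambient temperatures below are illustrative assumptions, not figures from the article:

```python
# Carnot limit on converting warm data-center coolant heat back to
# electricity: eta_max = 1 - T_cold / T_hot (temperatures in kelvin).
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of heat convertible to work between two reservoirs."""
    return 1.0 - t_cold_k / t_hot_k

coolant = 45 + 273.15   # ~45 C liquid-cooling return loop (assumed)
ambient = 25 + 273.15   # ~25 C heat-rejection sink (assumed)

print(f"Carnot limit: {carnot_efficiency(coolant, ambient):.1%}")
```

Even before real-world losses, a ~45 C source against a ~25 C sink caps conversion at only a few percent, which is why low-grade data-center heat is better used directly for heating than for generating electricity.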
| namibj wrote:
 | Yeah, at silicon's operating temperatures it's basically not
 | worth it to regenerate electricity. The waste heat is good for
 | heating, though.
| ryukoposting wrote:
 | Perhaps, instead of building them in deserts, we should be
 | building datacenters in cold places, where the ambient
 | temperature inside the datacenter needs to be kept warmer than
 | the outside air.
| barryrandall wrote:
| I think there's a lot of potential in combining immersion
| cooling with utility-scale heat pumps. The heat from the
| coolant isn't high enough to produce power on its own, but
| it's constant.
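The heat-pump idea above can be sketched with the ideal (Carnot) coefficient of performance for lifting low-grade coolant heat up to a useful supply temperature. The source and supply temperatures are assumptions for illustration:

```python
# Ideal (Carnot) COP of a heat pump moving data-center waste heat up to
# a district-heating supply temperature: COP = T_supply / (T_supply - T_source).
def carnot_cop_heating(t_supply_k: float, t_source_k: float) -> float:
    """Upper bound on heat delivered per unit of electrical work."""
    return t_supply_k / (t_supply_k - t_source_k)

source = 45 + 273.15   # warm coolant loop as the heat source (assumed)
supply = 75 + 273.15   # district-heating supply temperature (assumed)

print(f"Ideal heating COP: {carnot_cop_heating(supply, source):.1f}")
```

Because the temperature lift is small, the ideal COP is high (real machines reach roughly half of the Carnot figure), which is why pumping the heat somewhere useful tends to beat trying to turn it back into electricity.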
| kristianp wrote:
| > saying it is 30x faster than the Hopper generation when
| inferencing a 1.8 trillion parameter mixture-of-experts model.
|
 | Is that the size of GPT-4? The upcoming systems seem designed to
 | fit GPT-4 on one system with 4-bit quantisation, making GPT-4
 | cheaper to serve.
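The sizing intuition in the comment above is simple arithmetic: weight memory scales linearly with precision. A back-of-the-envelope sketch (ignoring KV cache, activations, and optimizer state, which add overhead on top):

```python
# Weight memory for a 1.8-trillion-parameter model at several precisions.
def weights_terabytes(num_params: float, bits_per_param: int) -> float:
    """Bytes of raw weight storage, expressed in TB (10^12 bytes)."""
    return num_params * bits_per_param / 8 / 1e12

PARAMS = 1.8e12  # the 1.8T mixture-of-experts size quoted in the article

for bits in (16, 8, 4):
    print(f"{bits:2d}-bit: {weights_terabytes(PARAMS, bits):.1f} TB")
```

At 16-bit the weights alone need 3.6 TB; 4-bit quantisation cuts that to 0.9 TB, which is what makes serving a model this size from a single rack-scale system plausible.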
___________________________________________________________________
(page generated 2024-03-27 23:02 UTC)