[HN Gopher] Superconducting Microprocessors? Turns Out They're U...
___________________________________________________________________
Superconducting Microprocessors? Turns Out They're Ultra-Efficient
Author : f00zz
Score : 212 points
Date : 2021-01-13 17:21 UTC (5 hours ago)
(HTM) web link (spectrum.ieee.org)
(TXT) w3m dump (spectrum.ieee.org)
| Symmetry wrote:
| Hmm, I wonder what the feature size is and whether they'd have a
| good story about storage at commensurate low power usage?
| px43 wrote:
| It'll be interesting to see if the cryptocurrency mining industry
| will help subsidize this work, since their primary edge is
| power/performance.
|
| During stable price periods, the power/performance of
| cryptocurrency miners runs right up to the edge of profitability,
| so someone who can come in at 20% under that would have a
| _SIGNIFICANT_ advantage.
| wmf wrote:
| _In this paper, we study the use of superconducting technology
| to build an accelerator for SHA-256 engines commonly used in
| Bitcoin mining applications. We show that merely porting
| existing CMOS-based accelerator to superconducting technology
| provides 10.6X improvement in energy efficiency._
| https://arxiv.org/abs/1902.04641
| reasonabl_human wrote:
 | Looks like they admit it's not scalable and only applies to
 | workloads that are compute-heavy, but a 46x increase over
 | CMOS when redesigning with an eye for superconducting env
 | optimizations
| Badfood wrote:
| Cost / hash. If power is free they don't care about power
| adolph wrote:
| / time. Time is immutable cost.
| agumonkey wrote:
 | If something like that happens it will have far-reaching
 | consequences IMO. I'm not pro-blockchain... but the energy cost
 | is important, and if it goes away significantly people will just
 | pile 10x harder on it.
| inglor_cz wrote:
| Nice, but requires 10 K temperature - not very practical.
|
| Once this can be done at the temperature of liquid nitrogen, that
| will be a true revolution. The difference in cost of producing
| liquid nitrogen and liquid helium is enormous.
|
| Alternatively, such servers could be theoretically stored in the
| permanently shaded craters of the lunar South Pole, but at the
| cost of massive ping.
| gnulinux wrote:
 | If the throughput is fast enough, 3+3=6 seconds of latency
 | doesn't really sound _that_ bad. There are websites with that
 | kind of lag. You can't use it to build a chat app, but you can
 | use it as a cloud for general computing.
| mindvirus wrote:
| Fun aside I learned about recently: we don't actually know if
| the speed of light is the same in all directions. So it could
| be 5+1=6 seconds or some other split.
|
| https://en.m.wikipedia.org/wiki/One-way_speed_of_light
| faeyanpiraat wrote:
 | There is a Veritasium video, really fun to watch, which
 | explains with examples why you cannot measure the one-way
 | speed of light: https://www.youtube.com/watch?v=pTn6Ewhb27k
| inglor_cz wrote:
| Yes, for general computing, that would be feasible.
| reasonabl_human wrote:
| I wouldn't want to be on call when something breaks on the
| moon....
|
| Astronaut DRIs?
| inglor_cz wrote:
| "Oh no, we bricked a lunar computer! Go grab your pressure
| suit, Mike! Back in a week, darling... Tell your mother I
| won't be attending her birthday party."
| juancampa wrote:
| > The difference in cost of producing liquid nitrogen and
| liquid helium is enormous.
|
| Quick google search yields: $3.50 for 1L of He vs $0.30 for 1L
| of H2. So roughly 10 times more expensive.
| inglor_cz wrote:
| Nitrogen is N2, though. Liquid nitrogen is cheaper than
| liquid hydrogen.
|
| I was able to find "1 gallon of liquid nitrogen costs just
| $0.5 in the storage tank". That would be about $0.12 per 1L
| of N2.
| monopoledance wrote:
| Edit: Didn't read the OP carefully... Am idiot. Anyway,
| maybe someone reads something new to them.
|
| Nitrogen won't get you below 10degK, tho. It's solid below
| 63degK (-210degC).
|
| You know things are getting expensive, when superconductors
| are rated "high temperature", when they can be cooled with
| LN2...
|
 | Helium (He) is _practically finite_, as we can't get it
 | from the atmosphere in significant amounts (I think fusion
 | reactors may be a source in the future), and it's critically
 | important for medical imaging (each MRI ~$35k/year) and
 | research. You also can't really store it long term, which
 | means there are limits to retrieval/recycling, too. I
 | sincerely hope we won't start blowing it away for porn and
 | Instagram.
| tedsanders wrote:
 | That price is more than a decade out of date. Helium has been
 | about 10x that for the past half decade. I used to pay about
| $3,000 per 100L dewar a few years ago. Sounds like that price
| was still common in 2020: https://physicstoday.scitation.org/
| do/10.1063/PT.6.2.2020060...
|
| Plus, liquid helium is produced as a byproduct of some
| natural gas extraction. If you needed volumes beyond that
| production, which seems likely if you wanted to switch the
| world's data centers to it, you'd be stuck condensing it from
| the atmosphere, which is far more expensive than collecting
| it from natural gas. I haven't done the math. I'm curious if
| someone else has.
| mdturnerphys wrote:
| That's the cost for one-time use of the helium. If you're
| running a liquefier the cost is much lower, since you're
| recycling the helium, but it still "costs" ~400W to cool 1W
| at 4K.
| faeyanpiraat wrote:
| I'm no physicist, but wouldn't you need some kind of medium to
| efficiently transfer the heat away?
|
 | On the moon you have no atmosphere to move it with fans and
 | radiators, so I guess you would have to make huge radiators
 | which simply emit the heat away as infrared radiation?
| rini17 wrote:
 | Doubt if an 80x difference would make it attractive. If it were
 | 8000x then maybe.
 |
 | And that only if you use the soil for cooling, which is a non-
 | renewable resource. If you use radiators, then you can put them
 | on a satellite instead, with much lower ping.
| zelienople wrote:
| The other strategy that is ultra-efficient is to stop using the
| net to sell hoards of useless crap that will break the day after
| the warranty expires and cannot be repaired.
|
| That would save money on the computing power as well as the
| mining, transportation of raw materials, refining, transportation
| of refined materials, manufacturing, transportation of finished
| goods and the whole retail chain.
|
 | Unless we achieve room-temperature superconducting processors,
| this will only benefit data centres, most of whose power is used
| to sell stuff. Does anyone actually think that the savings will
| be passed on to the consumer or that business won't immediately
| eat up the savings by using eighty times more processing power?
|
| Hey, now we can do eighty times more marketing for the same
| price!
| jkaptur wrote:
| Couldn't you say this about virtually any hardware improvement?
| the8472 wrote:
| > We use a logic primitive called the adiabatic quantum-flux-
| parametron (AQFP), which has a switching energy of 1.4 zJ per JJ
 | when driven by a four-phase 5-GHz sinusoidal ac-clock at 4.2 K.
|
 | The Landauer limit at 4.2 K is 4.019x10^-23 J (joules). So this is
 | only a factor of 38x away from the Landauer limit.
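 |
 | A minimal Python sketch of that arithmetic, assuming only the
 | standard value of Boltzmann's constant and the 1.4 zJ figure
 | quoted above:
 |
 |     import math
 |
 |     k_B = 1.380649e-23   # Boltzmann constant, J/K
 |     T = 4.2              # operating temperature, K
 |     E_switch = 1.4e-21   # quoted AQFP switching energy per JJ, J
 |
 |     # Landauer limit: min energy to erase one bit at temp T
 |     E_landauer = k_B * T * math.log(2)
 |
 |     print(E_landauer)              # ~4.0e-23 J
 |     print(E_switch / E_landauer)   # a factor of a few tens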
| dcposch wrote:
| > adiabatic quantum-flux-parametron
|
| https://youtube.com/watch?v=BBqIlBs51M8
| pgt wrote:
| I'm curious about how the Landauer limit relates to
| Bremermann's Limit:
| https://en.wikipedia.org/wiki/Bremermann%27s_limit
|
 | Admittedly, I haven't done much reading, but I see it is a
| linked page from Bremermann's Limit:
| https://en.wikipedia.org/wiki/Landauer%27s_principle
| freeqaz wrote:
 | Mind expanding on this a bit more? What is that limit and
 | how does it relate to the clock speed?
| the8472 wrote:
| https://en.wikipedia.org/wiki/Landauer%27s_principle
|
| Note that the gates themselves used here are reversible, so
| the limit shouldn't apply to them. But the circuits built
 | from them aren't reversible as far as I can see in the
| paper, so it would still apply to the overall computation.
| Enginerrrd wrote:
| It's not about clock speed per se. It's about the lowest
| possible energy expenditure to erase one bit of information
| (or irreversibly destroy it by performing a logical
| operation). The principle comes about from reasoning about
| entropy loss in said situations. There's a hypothesized
| fundamental connection between information and entropy
| manifest in physical law. The idea is that if you destroy one
| possible state of a system, you have reduced the entropy of
| that system, so the 2nd law of thermodynamics implies that
| you must increase the entropy of the universe somewhere else
| by at least that amount. This can be used to say how much
| energy the process must take as soon as you choose a
| particular temperature.
|
| This applies to any irreversible computation.
|
| IMO, The fact that it's only 38x the minimum is MIND BLOWING.
| jcims wrote:
| Is there an idea of entropic potential
| energy/gradient/pressure? Could you differentiate encrypted
| data from noise by testing how much energy it requires to
| flip a bit?
| kmeisthax wrote:
| Only if you were measuring a system with access to the
| key material and doing something with the plaintext, in
| which case this would be a side-channel attack (and an
| already-studied one). The whole point of encryption is
| that the data output is indistinguishable from noise
| without knowing the key.
| Raidion wrote:
| No, because the energy of the system isn't related to the
| order of the underlying data, it's related to the changes
| that happen to the underlying data. If you have 5 bits
| and flip 3, it takes the same energy regardless of if the
| 5 bits have meaning or not. This is speaking in terms of
| physics. There obviously could be some sort of practical
| side channel attack based on error checking times if this
| was an actual processor.
| ajuc wrote:
| > IMO, The fact that it's only 38x the minimum is MIND
| BLOWING.
|
| It's like if someone made a car that drives at 1/38th the
| light speed.
| yazaddaruvala wrote:
 | For anyone too lazy to math, it's a car that can go:
|
| 28.4 million km per hour (i.e. 17.6 million miles per
| hour)
|
| I wonder how much that speeding ticket would cost.
|
| Disclaimer: Assuming the one-way speed of light is 300k
| km/s
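 |
 | A quick sketch of that conversion, using the defined value of c
 | (the disclaimer's 300k km/s is close enough):
 |
 |     c_km_s = 299_792.458          # speed of light, km/s
 |     speed_km_h = (c_km_s / 38) * 3600
 |
 |     print(speed_km_h / 1e6)               # ~28.4 million km/h
 |     print(speed_km_h / 1.609344 / 1e6)    # ~17.6 million mph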
| rbanffy wrote:
| > I wonder how much that speeding ticket would cost.
|
| I once got one for going at about 3x the speed limit
| (very nice road in Brazil, broad daylight, nobody in
| sight, short run, and very unfortunate radar gun
| positioning). The policeman was impressed and joked that
| he would like to, but couldn't give me 3 speeding tickets
| instead of one.
| faeyanpiraat wrote:
| This comment took effort and adds to the discussion;
| what's with the downvotes?
| TheRealNGenius wrote:
| No need to assume when we can define ;p
| bananabreakfast wrote:
| I think they were alluding to the fact that it is
| impossible to measure the one-way speed of light, and
| even the definition is an assumption based on the two-way
| speed
| wmf wrote:
| Interesting tradeoff:
|
| _AQFP logic operates adiabatically which limits the clock rate
| to around 10 GHz in order to remain in the adiabatic regime. The
| SFQ logic families are non-adiabatic, which means they are
| capable of running at extremely fast clock rates as high as 770
| GHz at the cost of much higher switching energy._
| jcfrei wrote:
| Does superconductivity remove or reduce the limit on die size?
| undersuit wrote:
 | I'm making a naive guess here: no. Superconducting transistors
 | are probably harder to create than non-superconducting
 | transistors, so the limits on die size from defects are even
 | more pronounced, and superconducting doesn't change the speed of
 | light for electrons on the chip, so it doesn't change the timing
 | issues arising from large dies.
| akiselev wrote:
| The limits on die size for the competitive consumer chip
| market are nothing like that of the B2B market. Largest chip
| ever made was over 40,000 mm^2 [1] compared to Intel's 10900K
| at ~205 mm^2. In production mainframe chips like IBM's Z15s
| are on the order of 700mm^2. The fab process has a lot of
| levers so very low defect rates are possible but not at the
| scale of a consumer CPU.
|
| [1] https://techcrunch.com/2019/08/19/the-five-technical-
| challen...
|
 | Edit: I assume a superconducting microprocessor would use a
| strategy similar to the AI monolith in [1]. Just fuse off and
| route around errors on a contiguous wafer and distribute the
| computation to exploit the dark zones for heat dissipation.
| qayxc wrote:
| If implemented using switches based on the Josephson effect
| like here, then no.
|
| The thickness of the required insulating barrier presents a
| hard lower limit to the structure size.
|
| The actual value of that limit depends on the material used and
| the particular implementation of the Josephson junction, of
| which there seems to be quite a few.
|
| So the limit depends on how thin the barrier can be made.
| sliken wrote:
| Can't imagine why it would, but the lack of heat makes a 3D cpu
| much more feasible. So you could take 20 die, make 20 layers,
| and get radically more transistors per volume.
| fastball wrote:
| Seems like it would be much harder to keep a stacked CPU
| superconducting as heat dissipation would be more difficult.
| emayljames wrote:
 | Could a design with gaps between layers, but still
 | connected as one, not mitigate that though?
| gibbonsrcool wrote:
| Maybe I'm misunderstanding but since this 3D CPU would be
| superconducting, it would conduct electricity without
| resistance and therefore not generate any heat while in
| use.
| fastball wrote:
| The adiabatic (reversible) computations themselves would
| be zero-loss, but in order to actually read the result of
| your computation you need to waste heat.
| faeyanpiraat wrote:
| Maybe there are parts which are not superconducting? Like
| impurities in the material. So even though the generated
| heat is like 0.01% of the original, some heat is still
| generated.
| sliken wrote:
| Presumably 80x less power including cooling means more than
| 80x less power not including cooling.
|
| I'd think that should be enough to get quite a few layers,
| sure maybe some minimal space between layers for cooling,
| but radically less than the non-superconducting designs.
| laurent92 wrote:
 | This could explain the extreme push to the Cloud, for example
 | with Atlassian, which is discontinuing its Server products
 | entirely and only keeping Data Center or Cloud versions. It
 | behaves as if personal computing or server rooms in companies
 | won't be a thing in 2025.
| hikerclimb wrote:
| Hopefully this doesn't work
| nicoburns wrote:
| Huh, this seemed a bit too good to be true on first reading. But
 | given that the limits on computing power tend to be thermal, and
| that a superconducting computer presumably wouldn't produce any
| heat at all, it does kind of make sense.
| ska wrote:
| The system as a whole will produce heat, but less.
| nicoburns wrote:
| True, but usually the problem is removing the heat from the
| chip, not the total amount of heat produced. If the heat is
| mostly produced by the cooling system then that problem all
| but goes away.
| ska wrote:
| Not really at data centre scale. Heat directly in the CPU
| is a limiting factor on how fast an individual chip can go,
| and at the board level is an issue of getting heat away
| from the CPU somehow.
|
| But that heat has to go somewhere. When you have rooms full
| of them the power and cooling issues become key in a way
| that doesn't matter when it's just a PC in your room.
| mensetmanusman wrote:
| Any entropy reduction activity in one area automatically
| means a lot more heat is added somewhere else :) (2nd law)
| klysm wrote:
| Until you account for the energy required to get it that cold
| thatcherc wrote:
| Subtitle from the link:
|
| > The 2.5 GHz prototype uses 80 times less energy than its
| semiconductor counterpart, _even accounting for cooling_
|
| (emphasis mine)
| amelius wrote:
| Yeah, but only at datacenter scales.
|
| > Since the MANA microprocessor requires liquid helium-level
| temperatures, it's better suited for large-scale computing
| infrastructures like data centers and supercomputers, where
| cryogenic cooling systems could be used.
| ernst_klim wrote:
 | So great for big computing centres and such? CERN and others
 | should be happy with it.
| lallysingh wrote:
 | DC scale is great. Most stuff runs in a DC. What's wrong
 | with requiring DC scale?
| alkylketone wrote:
| I'd be curious what the energy savings are like at smaller
| scales -- 80x at data center scales, but how about for a
| smaller machine, like a PC with phase cooling?
| nullc wrote:
| I would expect the cooling to have a scaling advantage--
 | heat gain is proportional to surface area, but the amount
 | of superconducting mojo you can use is proportional to
 | volume, so it should be more energy efficient to build
| larger devices.
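 |
 | A small sketch of that scaling argument, for idealized cubes of
 | hardware (surface grows as L^2, volume as L^3, so heat leak per
 | unit of "mojo" falls as 1/L):
 |
 |     for L in [0.01, 0.1, 1.0, 10.0]:   # edge length, metres
 |         area = 6 * L**2                # heat leak ~ surface area
 |         volume = L**3                  # usable hardware ~ volume
 |         print(L, area / volume)        # 600, 60, 6, 0.6 per metre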
| thatcherc wrote:
| The authors' power comparison is outlined in Section VI
| of their paper [0] (page 11-12). You might be able to
| figure out some intermediate scalings from that!
|
| [0] - https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arn
| umber=929...
| lallysingh wrote:
| This sounds perfect for space. It's cold in space, power is at a
| premium, and it can be tough getting rid of heat.
| vincnetas wrote:
 | It's a bit cold in space. But actually it's really difficult to
 | cool things down in space, as there is nothing around you to
 | transfer heat to.
| TOMDM wrote:
| I'm ignorant of the specific physics, but what if you got it
| cold before you sent it into a vacuum, and then shielded it
| from external radiation?
|
| Surely due to the massively reduced power usage it would be
| easier to keep it cool relative to traditional compute
| fastball wrote:
| The problem is that the act of computation is generating
| heat, so you can't just insulate your superconducting CPU,
| you need a way to dissipate the heat it _must_ generate
| (this applies to all irreversible computations). This is
| difficult, because the only way to dissipate heat in space
| is radiation (with conduction usually acting as a
| middleman), which is constrained by the surface area of
| your radiator.
|
| So no, it probably wouldn't be any easier in space.
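 |
 | A rough sketch of that radiator constraint, assuming a grey-body
 | radiator in empty space and ignoring any sunlight falling on it:
 |
 |     SIGMA = 5.670374419e-8   # Stefan-Boltzmann, W/(m^2 K^4)
 |
 |     def radiator_area(power_w, temp_k, emissivity=0.9):
 |         # area needed to radiate power_w at radiator temp temp_k
 |         return power_w / (emissivity * SIGMA * temp_k**4)
 |
 |     # e.g. dumping 1 kW of waste heat from a 300 K radiator
 |     print(radiator_area(1000, 300))   # roughly 2.4 m^2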
| akiselev wrote:
| Ideally this server farm would attach to a giant rock
| somewhere in a stable orbit and drill heat exchange pipes
| deep into the rock, like how we do for geothermal energy
| here on earth but in reverse. This whole exercise would
| require a nuclear reactor to be feasible both
| economically and engineering wise.
| cbozeman wrote:
| Mass Effect taught me this.
|
| I miss old school BioWare.
| fastball wrote:
| Was this a plot point in ME? I don't remember that.
| cbozeman wrote:
| It was one of the "scan a planet" type side mission
| things.
|
| I just remember that that particular planet didn't have
| much of an atmosphere, so it wasn't good for dumping
| excess heat from ships.
| rytill wrote:
| I have heard this elsewhere, but when I looked into it, it
| seems like it's not that big of a challenge compared to the
| other difficulties of space. More of just a consideration
| that has to be part of the design. See:
| https://en.m.wikipedia.org/wiki/Spacecraft_thermal_control
|
| Can anyone give an expert opinion on the difficulty of
| cooling in space?
| benibela wrote:
| In Sundiver by David Brin they put all the heat in a laser,
| so they can beam it away from their craft
| jetrink wrote:
| Is it possible to do this without violating the second law
| of thermodynamics?
| titanomachy wrote:
| Definitely possible to put _some_ of the heat into a
 | laser. It's simple to turn a temperature gradient into
| an electrical potential [0], and if you use that
| electricity to power a laser it will convert away some of
 | the heat.
|
| [0] https://en.wikipedia.org/wiki/Thermocouple
| m4rtink wrote:
| AFAIK it is not and IIRC the author even mentioned it
| somewhere afterwards, possibly in the next book.
| reasonabl_human wrote:
| I can't think of a reason it would violate any basic
| physical laws. Use a peltier cooler in reverse as the
| transducer from heat to electricity, apply to an
| appropriately spec'd solid state laser. Surely the devil
| is in the details somewhere..
| vincnetas wrote:
 | A Peltier generator would need somewhere to transfer the
 | heat to. But there is nothing around.
| superkuh wrote:
| Sure. But how efficient are they once you include the power used
| to keep them cold enough to superconduct? I doubt that they're
| even as efficient as a normal microprocessor would be.
| SirYandi wrote:
| "But even when taking this cooling overhead into account," says
| Ayala, "The AQFP is still about 80 times more energy-efficient
| when compared to the state-of-the-art semiconductor electronic
| device, [such as] 7-nm FinFET, available today."
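 |
 | A back-of-the-envelope sketch of what that implies, combining the
 | quoted 80x wall-plug figure with the ~400 W-per-W cooling overhead
 | mentioned upthread (both numbers are rough):
 |
 |     cooling_overhead = 400   # ~400 W per 1 W removed at 4 K
 |     net_advantage = 80       # quoted gain *including* cooling
 |
 |     # If total (chip + cooling) power is 80x below CMOS, the energy
 |     # dissipated in the superconducting logic itself must be roughly
 |     # 80 * (1 + 400) times below the CMOS chip it replaces.
 |     print(net_advantage * (1 + cooling_overhead))   # ~32,000x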
| b0rsuk wrote:
 | Given the cooling requirements, I suppose it would create a
 | completely impassable rift between datacenter computing and other
 | kinds. Imagine how programming and operating systems might look
 | in a world where processing power is 80x cheaper.
 |
 | Considering that "data centers alone consume 2% of the world's
 | energy", I think it's worth it.
| hacknat wrote:
| > Imagine how programming and operating systems might look in a
| world where processing power is 80x cheaper.
|
| Just wait 10 years?
| Jweb_Guru wrote:
 | Not sure if you noticed, but Moore's Law died quite a while
 | ago now.
| mcosta wrote:
| For single thread performance.
| anfilt wrote:
 | Moore's law has nothing to do with how fast a chip is.
| It deals with how many transistors you can fit in a given
| area.
|
| This can equate to a faster chip because you now can do
| more at once. However, we hit the frequency limits a
| while ago for silicon. Particularly, parasitic
| capacitance is a huge limiting factor. A capacitor will
| start to act like a short circuit the faster your clock
| is.
|
 | Moore's law has a little more life, although the rate
 | seems to have slowed. However, at the end of the day it
 | can't go on forever; you can only make something so
 | small. One gets to a point where there are so few atoms
 | that constructing something useful becomes impossible.
 | Current transistors, for example, are finFETs because the
 | third dimension gives them more atoms to reduce leakage
 | current, compared to the relatively planar designs on
 | older process nodes. However, these finFETs still take up
 | less area on a die.
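 |
 | A tiny sketch of the parasitic-capacitance point above, assuming
 | an illustrative 1 fF of parasitic capacitance on a node:
 |
 |     import math
 |
 |     C = 1e-15                              # 1 fF, illustrative
 |     for f in [1e9, 5e9, 10e9, 100e9]:      # clock frequencies, Hz
 |         z = 1 / (2 * math.pi * f * C)      # |Z| = 1/(2*pi*f*C)
 |         print(f, z)                        # ~159k, 32k, 16k, 1.6k ohms
 |
 |     # impedance falls as frequency rises, so the parasitic capacitor
 |     # shunts more of the signal -- ever closer to a short circuit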
| xwdv wrote:
| As processing power cheapens, programmers will settle for lazy
| and inefficient code for everyday consumer applications. It
| will be easier to be a programmer, because you can get away
| with writing shitty code. So wages will fall and the prestige
| of being a software developer wanes. The jobs requiring truly
| elite skill and understanding will dwindle and face fierce
| competition for their high pay.
|
| Before this happens, I recommend having your exit strategy for
| the industry, living off whatever profits you made working as a
| developer during the early 21st century.
| gameswithgo wrote:
| did you write this in 1980?
| no_flags wrote:
| Processing power has cheapened exponentially for the last 50
| years (Moore's law). I am skeptical that a continuation of
| this trend will drive a fall in wages.
|
| In my own experience performance optimization is an important
| but infrequent part of my job. There are other skills an
| elite programmer brings to the table like the ability to
| build a mental model of a complex system and reason about it.
| If downward pressure on wages occurs I think it will be for
| another reason.
| Enginerrrd wrote:
| I think in general you are right, however, there will
| certainly be sectors where it will be a lot easier to just
| throw it at a wall of computation than pay someone to think
| about it.
|
| But architecting complex systems so that they are
| maintainable, scalable, and adaptable... there's not gonna
| be enough cheap computation to solve that problem and omit
| top talent for a long time.
| Fronzie wrote:
| Energy cost and thermal cooling place restrictions on the
 | computations. C++, with all its flaws, stays in use because it
| does give control over performance trade-offs.
| vlovich123 wrote:
| I'm less pessimistic. Even if CPUs are 10x faster than they
| are, that still opens up more opportunities than what can be
| "absorbed" by less efficient coding/masses. There will always
| be demands for eking out more of the available
| processing/compute power and doing so will always be a
| difficult task. For example, today you can edit large scale
| videos in real-time and view various movie-quality SFX
| applied real-time on a consumer desktop. More computing power
 | = more ability to do things cheaply that were infeasible or
 | impossible before. You're limited by your imagination more than
 | anything.
|
| What's truly more of a threat is AI-aided programming if that
| ever becomes a thing. Again, I'm not worried. The gap between
| telling an AI "do something that makes me $1 billion dollars"
| and "write a function that has properties x/y/s" or "do this
| large refactor for me and we'll work together on any
| ambiguous cases", is enormous. So you'll always have a job -
| you'll just be able to do things you couldn't in the past
 | (it's questionable whether an AI can be built that generates
 | programs from vague/poorly defined specs from product, or even
 | that generates those specs in the first place).
|
| As an obvious counter example to your theory, we have CPUs
| that are probably 10000x more powerful than in 1980 (actually
| more if you consider they have processing technologies that
| didn't even exist back then like GPUs and SIMD). The software
| industry is far larger and devs make more individually.
|
| Technically SIMDs and GPUs existed back then but in a much
| more immature form, being more powerful, cheaper and
| widespread today than what was available in the 80s.
| layoutIfNeeded wrote:
| >Imagine how programming and operating systems might look in a
| world where processing power is 80x cheaper.
|
| So like 2009 compared to 2021? Based on that, I'd say even more
| inefficient webshit.
| Jweb_Guru wrote:
| Processing power is not 80x cheaper now than it was in 2009
| unless you can do all your computation on a GPU.
| systemvoltage wrote:
| Javascript emulator in Javascript!
| lkbm wrote:
| Gary Bernhardt's presentation on the "history" of
| Javascript from 1995 to 2035 is hilarious and seems like
| something you'd enjoy:
| https://www.destroyallsoftware.com/talks/the-birth-and-
| death...
|
| It takes things way beyond simply "emulating Javascript in
| Javascript", yet is presented so well that you barely
| notice the transition from current (2014) reality to a
| comically absurd future.
| dmingod666 wrote:
| Do you mean the "eval()" function?
| jnsie wrote:
| Anyone remember Java 3D? 'cause I'm imagining Java 3D!
| vidanay wrote:
| Java 11D(tm) It goes to eleven!
| sago wrote:
 | I don't understand your reference. It seems negative, but
 | it's hard imho to look down on the success of Minecraft. Or
 | am I misunderstanding you?
| zinekeller wrote:
| Minecraft (obviously the Java edition) actually uses some
| native libraries (LWJGL) so I don't know if Minecraft is
| a good comparison.
| AnIdiotOnTheNet wrote:
| Considering that many modern interfaces are somehow less
| responsive than ones written over 20 years ago _even when
 | running those programs on period hardware_, I feel certain
| that you are right.
| xxpor wrote:
| But it probably took 80x less time to develop said
| software.
| layoutIfNeeded wrote:
| I'd doubt that.
| api wrote:
| I don't see an impassible rift. Probably at first, but
| supercooling something very small is something that could
| certainly be productized if there is demand for it.
|
| I can see demand in areas like graphics. Imagine real-time
| raytracing at 8K at 100+ FPS with <10ms latency.
| adamredwoods wrote:
 | Cryptocurrency demands.
| twobitshifter wrote:
| Jevons Paradox would predict that we'll end up using even more
| energy on computing.
| ffhhj wrote:
| > Imagine how programming and operating systems might look in a
| world where processing power is 80x cheaper.
|
| UI's will have physically based rendering and interaction.
| mailslot wrote:
| It'll all be wasted. When gasoline prices plummet, everyone
| buys 8mpg SUVs. If power & performance gets cheaper, it'll be
| wasted. Blockchain in your refrigerator.
| whatshisface wrote:
| Solid state physics begets both cryogenic technology and
| cryocooling technology. I wouldn't write off the possibility of
| making an extremely small cryocooler quite yet. Maybe a pile of
| solid state heat pumps could do it.
| jessriedel wrote:
| This is true, but the fact that heat absorption scales with
| the surface area is pretty brutal for tiny cooled objects.
| Calloutman wrote:
 | Not really. You just have the whole package inside a
 | vacuum casing.
| jessriedel wrote:
| Really. Vacuum casing is not even close to sufficient to
| set heat absorption to zero because of thermal radiation.
|
| And you can't just make the walls reflective once the
| cold object gets smaller than the wavelength of the
| radiation. The colder the object, the longer that
| wavelength.
| Calloutman wrote:
| The way it works is that the entire assembly is in a
| vacuum. It kinda has to be as any gas which touches it
| will instantly condense to it or freeze to it. You then
| have a dual cryostat of liquid helium and liquid nitrogen
| cooling down the assembly (within the vacuum). The helium
| and nitrogen cryostat also have a vacuum shield. The
 | nitrogen (liquid at 77K) is a sacrificial coolant which
| is far cheaper than liquid helium (liquid at 4K) that you
 | need to get to these temperatures. You're right that
| thermal radiation is an issue so you have to be careful
| with the placement of any windows or mirrors around the
| device.
|
 | Source: I have a PhD in physics where I used equipment
| cooled to 4K.
| jessriedel wrote:
 | Great, then we both have physics PhDs, and you'll know
 | that none of that equipment has been, or easily could be,
 | sufficiently miniaturized, which is the topic of
 | discussion ("extremely small cryocooler"). You can't put
 | nested closed dewars of liquid nitrogen and helium on an
 | O(1 mm^2) microchip, and the reason is exactly what I
 | said: it will warm up too fast.
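 |
 | A rough sketch of why the area scaling is so punishing at chip
 | scale, assuming black-body radiation from 300 K surroundings onto
 | a 4 K surface and the ~400 W-per-W cooling cost mentioned upthread:
 |
 |     SIGMA = 5.670374419e-8    # Stefan-Boltzmann, W/(m^2 K^4)
 |
 |     area = 1e-6               # ~1 mm^2 exposed cold surface, m^2
 |     t_hot, t_cold = 300.0, 4.0
 |
 |     heat_leak = SIGMA * area * (t_hot**4 - t_cold**4)   # watts
 |     print(heat_leak)          # ~4.6e-4 W, about 460 uW per mm^2
 |     print(heat_leak * 400)    # ~0.18 W of wall power to hold 4 K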
| extropy wrote:
| What's wrong with attaching said microchip to a piece of
| copper for increased size? Genuinely curious.
|
| To be useful in a data center you could cool a slab of
| copper the size of a fridge and surface mount thousands
| of chips on it.
| jessriedel wrote:
| The topic is cooling small objects so that personal
| electronics (e.g., your phone) can compete with
| datacenters. Cold at scale (i.e., in datacenters) is
| comparatively easy.
| Calloutman wrote:
| Ah, you're totally right. I misread the OP. Sorry.
| jessriedel wrote:
| No problem :)
| whatshisface wrote:
| Don't forget about thermal radiation.
| whatshisface wrote:
| Heat conduction also scales with thermal conductivity,
| which is another thing that advances in solid state can
| bring us.
| jessriedel wrote:
| This doesn't change the fact that, for any degree of heat
 | conductivity achieved, smaller packages will be harder to
 | keep cold than large ones.
| whatshisface wrote:
| It also doesn't change the fact that smaller devices are
| harder to put wires on - but they're both polynomial
| scaling factors that other polynomial scaling paradigms
| could cancel out.
| jessriedel wrote:
| The topic of discussion is datacenter vs. an extremely
| small cryocooler. What is the other polynomial scaling
| paradigm that would cancel out the datacenter's
| advantage?
| xvedejas wrote:
| It seems likely that the more efficient our processors become,
| the larger share of the world's energy we'll devote to them
| [0]. Not that that's necessarily a bad thing, if we're getting
| more than proportionally more utility out of the processors,
| but I worry about that too [1].
|
| [0] https://en.wikipedia.org/wiki/Jevons_paradox
|
| [1] https://en.wikipedia.org/wiki/Wirth%27s_law
| carlmr wrote:
| >Not that that's necessarily a bad thing, if we're getting
| more than proportionally more utility out of the processors
|
| The trend seems to be that we get only a little bit of extra
| utility out of a lot of extra hardware performance.
|
| When the developer upgrades their PC it's easier for them to
| not notice performance issues. This creates the situation
| where every few years you need to buy a new PC to do the
| things you always did.
| philsnow wrote:
| > The trend seems to be that we get only a little bit of
| extra utility out of a lot of extra hardware performance.
|
| "hardware giveth, and software taketh away"
| _0ffh wrote:
| Nice! Another (slightly more pessimistic) one is
| "Software gets slower faster than hardware gets faster".
| ganzuul wrote:
| So by dropping support for old CPUs, the Linux kernel burns
| a bridge. That conversation makes more sense now.
| thebean11 wrote:
| If the developer doesn't notice the performance issues,
| maybe they move on to the next thing more quickly and get
| more done overall?
|
| I'm not sure if that's the case, but it may be we aren't
| looking for utility in the right places.
| avmich wrote:
 | I'm sure a lot of developers upgrade their PCs (they were
 | called workstations at one time) because of material problems
 | - keyboards getting mechanically worse (and laptops can't
 | easily get a keyboard fixed), screens getting dead pixels,
 | sockets getting loose, hard-to-find batteries holding less
 | charge, and maybe some electronics degradation.
|
 | Another reason is upgrades to software, which maintain
 | general bloat and which are hard to control; new hardware
 | is easier. That, however, is very noticeable.
|
| On top of that, just "better" hardware - say, in a decade
| one can have significantly better screen, more cores and
| memory, faster storage; makes easier for large software
| tasks (video transcoding, big rebuilds of whole toolchains
| and apps, compute-hungry apps like ML...)
| ryukafalz wrote:
| >laptops can't easily get keyboard fixed
|
| This is a frustrating part of recent laptops, but it
| doesn't have to be this way - my X230's keyboard is
| removable with a few screws.
| akiselev wrote:
| Is that the case for our highly centralized clouds? No
| one's putting a liquid nitrogen cooled desktop in their
| office so this type of hardware would be owned by companies
| who are financially incentivized to drive down the overhead
| costs of commoditized functionality like networking, data
| replication and storage, etc. leaving just inefficient
| developer logic which I assume is high value enough to
| justify it.
| root-z wrote:
| It's quite common in the cloud industry to trade hardware
| for shorter development cycle these days too. I think
| that's because there is still very high growth in the
| sector and companies all want to be the first offering
| feature X. As cloud services become more mature I expect
| people will be more cost sensitive. Though when that will
| happen I cannot say.
| winter_blue wrote:
| > Not that that's necessarily a bad thing, if we're getting
| more than proportionally more utility out of the processors,
| but I worry about that too
|
| I have two points to comment on this matter.
|
| Point 1: The only reason I would worry or be concerned about
| it is if we are using terribly-inefficient programming
| languages. There are languages (that need not be named) which
| are either 3x, 4x, 5x, 10x, or even 40x more inefficient than
| a language that has a performant JIT, or that targets native
 | code. (Even JIT languages like JavaScript are still a lot less
 | efficient because of dynamic typing. Also, in some popular
 | compiled-to-native languages, programmers tend to use less
 | efficient data structures, which results in lower performance
 | as well.)
|
| Point 2: If the inefficiency arises out of _more actual
 | computation_ being done, that's a different story, and I AM
 | TOTALLY A-OK with it. For instance, if Adobe Creative Suite
 | uses a lot more CPU (and GPU) _in general_ even though it's
| written in C++, that is likely because it's providing more
| functionality. I think even a 10% improvement in overall user
| experience and general functionality is worth increased
| computation. (For example, using ML to augment everything is
| wonderful, and we should be happy to expend more processing
| power for it.)
| moosebear847 wrote:
| In the future, I don't see why there's anything holding us
| back from splitting a bunch of atoms and having tons of cheap
| energy.
| VanillaCafe wrote:
| If it ever gets to home computing, it will get to data center
| computing far sooner. What does a world look like where data
| center computing is roughly 100x cheaper than home computing?
| valine wrote:
 | Not much would change, I imagine. For most tasks consumers care
 | about, low latency trumps raw compute power.
| cbozeman wrote:
| Dumb terminals everywhere. A huge upgrade of high-speed
| infrastructure across the US since everyone will need high
| throughput and low latency. Subscriptions will arise first, as
| people fucking love predictable monthly revenue - and by people
| I mean vulture capitalists, and to a lesser degree, risk-averse
| entrepreneurs (which is almost an oxymoron...), both of whom
| you can see I hold in low regard. Get ready for a "$39.99 mo.
| Office Productivity / Streaming / Web browsing" package", a
| "$59.99 PrO gAmEr package", and God knows what other kinds of
| disgusting segmentation.
|
| Someone, somewhere, will adopt a Ting-type model where you pay
| for your compute per cycle, or per trillion cycles or whatever,
| with a small connection fee per month. It'll be broken down
| into some kind of easy-to-understand gibberish bullshit for the
| normies.
|
| In short, it'll create another circle of Hell for everyone - at
| least initially.
| f1refly wrote:
| I really appreciate your pessimistic worldview, keep it up!
| cbozeman wrote:
| I basically just base my worldview on the fact that
| everyone is ultimately self-serving and selfish. Hasn't
| failed me yet. :)
| gpm wrote:
| Flexible dumb terminals everywhere. But we already have this
| with things like google stadia. Fast internet becomes more
| important. Tricks like vs code remote extensions to do realtime
 | rendering locally but bulk compute (compiling in this case) on
| the server become more common. I don't think any of this
| results in radical changes from current technology.
| tiborsaas wrote:
| You could play video games on server farms and stream the
| output to your TV. You just need a $15 controller instead of a
| $1500 gaming PC.
|
| :)
| whatshisface wrote:
| It will look like the 1970s.
| andrelaszlo wrote:
| Not a physicist so I'm probably getting different concepts mixed
| up, but maybe someone could explain:
|
| > in principle, energy is not gained or lost from the system
| during the computing process
|
| Landauer's principle (from Wikipedia):
|
| > any logically irreversible manipulation of information, such as
| the erasure of a bit or the merging of two computation paths,
| must be accompanied by a corresponding entropy increase in non-
| information-bearing degrees of freedom of the information-
| processing apparatus or its environment
|
| Where is this information going, inside of the processor, if it's
| not turned into heat?
| aqme28 wrote:
| I was curious about this too. This chip is using adiabatic
| computing, which means your computations are reversible and
| therefore don't necessarily generate heat.
|
| I'm having trouble interpreting what exactly that means though.
| patagurbon wrote:
| But you have to save lots of undesirable info in order to
| maintain the reversibility right? Once you delete that don't
| you lose the efficiency gains?
| ladberg wrote:
| It's still getting turned into heat, just much less of it. The
| theoretical entropy increase required to run a computer is WAY
| less than current computers (and probably even the one in the
| article) generate so there is a lot of room to improve.
| abdullahkhalids wrote:
| If your computational gates are reversible [1], then in
| principle, energy is not converted to heat during the
| computational process, only interconverted between other forms.
| So, in principle, when you reverse the computation, you recover
| the entire energy you input into the system.
|
| However, in order to read out the output of computation, or to
| clear your register to prepare for new computation, you do
| generate heat energy and that is Landauer's principle.
|
| In other words, you can run a reversible computer back and
| forth and do as many computations as you want (imagine a
| perfect ball bouncing in a frictionless environment), as long
| as you don't read out the results of your computation.
|
| [1] NOT gate is reversible, and you can create reversible
| versions of AND and OR by adding some wires to store the input.
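 |
 | A small sketch of the footnote's point, using the Toffoli
 | (controlled-controlled-NOT) gate as a reversible AND: the extra
 | wires keep the inputs around, and running the gate twice undoes it.
 |
 |     def toffoli(a, b, c):
 |         # flips c iff a and b are both 1; a and b pass through
 |         return a, b, c ^ (a & b)
 |
 |     # reversible AND: start the target wire at 0, so c = a AND b
 |     a1, b1, c1 = toffoli(1, 1, 0)
 |     print(a1, b1, c1)            # 1 1 1 -> AND result, inputs kept
 |
 |     # applying the same gate again restores the original state:
 |     # no bit was erased, so no Landauer cost was paid
 |     print(toffoli(a1, b1, c1))   # (1, 1, 0)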
| tromp wrote:
| This microprocessor composed of some 20k Josephson junctions
| appears to be pure computational logic.
|
| In practice it will need to interface to external memory in order
| to perform (more) useful work.
|
| Would there be any problems fashioning memory cells out of
| Josephson junctions, so that the power savings can carry over to
| the system as a whole?
| faeyanpiraat wrote:
 | If you compare CPU power usage with RAM power usage, you'll see
 | RAM is already quite efficient, so even if traditional RAM
 | connected to said microprocessor cannot be brought under
 | the magic of this method, it might work.
|
| (Haven't read the article, or have any expertise in this field,
| so I might be wrong)
___________________________________________________________________
(page generated 2021-01-13 23:00 UTC)