https://spectrum.ieee.org/naukas-troubled-flight-t

Nauka's Troubled Flight--Before It Tumbled the ISS
Russian space station module revelations--and a movie--raise questions

James Oberg | 7 min read

Forward view of Nauka from the International Space Station's Cupola, shortly after Nauka's docking on 29 July 2021. Shane Kimbrough/NASA
This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Any hopes that the space agencies in Houston and Moscow had for tamping down public concerns over the International Space Station's recent tumble in orbit were lost last week, as new revelations from Moscow confirmed worst-case rumors.

The ISS's tumble was caused by the inadvertent firing of maneuvering thrusters on the newly arrived Nauka Russian research module (referred to as the MLM module by NASA). But it has become clear that the module lurched from crisis to crisis during its weeks-long flight before it rendezvoused and docked at the space station. That raises concerns about exactly how much NASA knew, and when, given the stringent safety requirements that any visiting vehicle must normally meet before being allowed to approach the station.

These and other questions have been raised as the last two weeks have seen a remarkable and surprising degree of Russian openness, especially as compared to NASA's. Some of that transparency has also surfaced an interesting coincidence (at minimum) involving a spaceflight-themed movie potentially being filmed aboard Nauka, one that complicates but also perhaps begins to explain some of the curious components of this near-disaster's chronology.

Here's the outline: First, Alexander Khokhlov, a Russian space expert and member of the private Russian Federation of Cosmonautics, told the RIA Novosti news agency that several emergency situations had occurred on Nauka during the flight to the ISS, but that Russian specialists managed to cope with "most" of them. According to him, the systems with significant problems included the infrared sensors that determine the local horizon, the radar antenna that feeds into the automated Kurs rendezvous system, and the Kurs system itself. He also described a "severe emergency" with the propulsion system. A number of these failures were subsequently confirmed by the European Space Agency while NASA remained silent.

Then on August 7, Dmitry Rogozin, General Director of Roscosmos, spoke with RIA Novosti about the problems of building the Nauka module. And on the YouTube channel "Soloviev LIVE" (typically noted for its hosts hewing to the official government line), Rogozin singled out the shutdown of a Ukrainian aerospace factory as creating "predictable difficulties in the flight" of Nauka. In Soviet days, this factory made an accordion-like bellows used in the propellant tanks to separate the pressurizing gas from the liquid fuel as it was pushed into the engines. In Rogozin's words, "We understood that we would have to spend, in fact, all eight days in manual control of both the flight of this module and the docking. And indeed we had problems there."

While the exact details are unclear, it appears the Russians were worried that accidental leaks across the propellant/pressurant barrier would frustrate automatic real-time management of propellant flow into the module's rocket engines, requiring direct valve commanding from ground stations instead.

When asked about such reports last week, a NASA spokesman in Houston said only that "Roscosmos regularly updated NASA and the rest of the international partners on MLM's progress during the approach to station," gave no details, and referred all inquiries about Russian hardware issues to Moscow:
"We would point you to Roscosmos for any specifics on MLM systems/performance/procedures." Large screens show a blue and green world map and close ups of space vehicles in front of rows of people in front of computer screens. Moscow Mission Control CenterRoscosmos On August 13, RIA Novosti reported that 61-year-old Deputy General Designer of the Energia Rocket and Space Corporation Alexander Kuznetsov, the senior Russian space official in direct charge of the Nauka module, had been hospitalized with a stroke immediately after the docking. He was, however, soon released--although a few days later was hospitalized again. The agency attributed the stroke to "the colossal tension" and that "Kuznetsov, along with other specialists and members of the state commission, spent all eight days of the module's flight at the Mission Control Center, practically without leaving the premises." On August 14, RIA Novosti confirmed that "mass failures of the systems of the Nauka module... arose after it was put into low-earth orbit and threatened a serious emergency." But the story was upbeat: According to a "source in the rocket and space industry," these problems "were eliminated thanks to the continuous work of ground specialists for eight days, the revision of the module's flight task and the creation of an emergency working group of the best experts in the industry." The story's chronology of challenges was daunting: "The main problems of the first two days of the flight of the Nauka module were: the failure of the flight program and the operation of one of the fuel valves, the problem of transmitting the command package on board from the ground measuring complexes, the absence of a signal from two sensors of the infrared vertical [sensor] and from one of the two star sensors." The story described how Mission Control Center director Vladimir Solovyov immediately reported on the critical situation to the general director of the Roskosmos state corporation, Dmitry Rogozin, who took direct control of the module's flight. Communication between the Moscow Mission Control Center--"TsUP" in Russian--was too uncertain, so "engineers of the Russian Space Systems holding were promptly dispatched to all ground measuring points, who coped with the task of stable transmission of commands to the module and receiving telemetric information from it." On July 23, a working group was created by Rogozin to save the troubled module. The group was headed by Sergey Kuznetsov, General Designer of the Salyut Design Bureau and included representatives of the Keldysh Center, the developers of Nauka. Starting from July 25, the main and backup sets of the Kurs rendezvous and docking system were successfully tested, the fuel reserves required for the rendezvous were recalculated, a new docking scheme was calculated taking into account the strength of the station and the module (the maximum docking speed was limited to 8 centimeters per second), and the stable operation of both star sensors, responsible for the exact orientation of the Nauka, was restored. These ad-hoc fixes raise the issue of how much did those rushed and admittedly often poorly coordinated ground station commanding and flight software reprogramming initiatives themselves contribute to the potential for onboard "software glitches" such as the still-undefined one now blamed for the renegade thruster firing that tumbled the station? 
And what is the actual current status and residual propellant level of the tanks aboard Nauka, given the official descriptions of major monitoring-function losses during the rendezvous maneuvers?

In any case, the parade of details about the problems overcome during the pre-docking phase of the mission stands in stark contrast to the Russian press treatment of the post-docking thruster-firing incident. On August 4 there had been one interview with former cosmonaut Sergei Krikalev, executive director for manned programs of Roscosmos, on the Russia 24 TV channel: "The module, apparently, itself could not believe that it had already docked, so when the control system of the module was [reinitialized], the control system decided that it was still in free flight--and, not understanding what was happening, for safety, an algorithm was triggered, turning on the motors ... This, of course, should not have happened. The commission is now examining the reasons for this.... The station is a rather delicate device ... Everything was done as lightly as possible. And the additional load causes a load on the [motor] drives of solar batteries, on the [frames] on which everything is installed.... This is an emergency situation that will need to be analyzed in detail... There are probably no damages ... Nothing broke off from the station, I can reassure you, but the extent to which we have loaded the station, what are the consequences, it will now be assessed by experts."

But, aside from these candid comments from Krikalev, the thruster firing became a non-event, except in brief press references to a short interlude in which the "station temporarily lost its orientation." That wording, more suggestive of an addled old man who felt dizzy than of an enormous structure doing a full tumble and a half with counter-thrusting rocket engines shoving at it in totally unexpected directions, recalled the laconic NASA press release after the near-catastrophic Mir fire in 1997: "Small Fire Put Out on Mir."

NASA's narrative-control lid in 1997 was so tight that Jerry Linenger--who was aboard Mir at the time and considered the incident a very narrow brush with death--later recalled how he was forced to send accurate accounts of that emergency to his wife via a data stick carried by a returning German visiting cosmonaut, since he knew all official messages (including family emails) were being monitored. Perhaps an echo of that NASA policy is detectable today: Since the Nauka docking, nobody on the US side--three US crew members, a French astronaut, and the Japanese station commander--has been seen to tweet any mention of the dramatic tumble and recovery on docking day. Their public message traffic looks as if the incident never happened.

As the month of August passes, parallel review boards in the United States and Russia are at work behind closed doors. On August 9, NASA ISS program manager Joel Montalbano told journalist Jeff Foust on a Facebook discussion thread that it's a "little too early" to set a timeline for the investigation. NASA is in "regular communications" with Russian colleagues on this, he said. Montalbano also told Marcia Smith, a US specialist on the Russian space program, that they "may have more to say in 2-3 weeks."
If the dramatic launch and trouble-plagued rendezvous of Nauka look slapdash and premature--a bizarre notion for a feat originally planned fifteen years ago--there is one intriguingly suggestive schedule driver only weeks in the future.

A routine launch of the next long-term Russian crew had long been slated for early October. Called "Soyuz MS-19," it was to carry three professional Russian cosmonauts who had been training for at least a year. But several months ago the mission and the crew were redirected. Two of the three cosmonauts were bumped from the mission and replaced by a movie actor and a director/cameraman, as part of a commercial project to make a spaceflight-themed movie in space. The project reportedly has high-level backing from powerful figures in Moscow, including in the Kremlin, as well as from overseas investors.

Even more significant than political favoritism, however, is the simple question of cash. Since the mid-1990s, the influx of foreign funding for the Russian space industry has been a cash cow for space program officials and their political protectors. Aside from rented official approvals, this first-of-its-kind movie project has been developed, and its scene lists tailored, specifically around the Nauka module. Nauka contains the living quarters for the extra visitors, the laboratory unit to simulate an in-space operating room (the movie's main theme), and high-quality viewports for spectacular imagery of Earth below.

The potential relevance to any putative urgency to launch Nauka, ready or not, is that the launch had to occur at least several weeks before this MS-19 mission--otherwise the wrong people would have been aboard the Soyuz, and long-term crew activity planning was not subject to revision. The choice might have been to go now, or wait another year for the cash commissions the movie project would have generated. Or the timing could just be a coincidence, one more unanswered question in an orbital drama of mystery and misdirection.

James Oberg is a retired "rocket scientist" in Texas after a 20+ year career in NASA Mission Control, and was subsequently an on-air space consultant for ABC News and then NBC News. The author of a dozen books and hundreds of magazine articles on the past, present, and potential future of space exploration, he has reported from space launch and operations centers across the United States, Russia, and North Korea.
Next-Gen Chips Will Be Powered From Below
Buried interconnects will help save Moore's Law

Brian Cline, Divya Prasad, Eric Beyne, and Odysseas Zografos | 9 min read

For a time, each new processor churned out more waste heat than the last. Had these chips kept on the trajectory they were following in the early 2000s, they would soon have packed about 6,400 watts onto each square centimeter--the power flux on the surface of the sun. Things never got that bad, because engineers worked to hold down chip power consumption. Data-center system-on-chip (SoC) designs are consistently second only to supercomputer processors in terms of performance, yet they typically consume only about 200 to 400 watts per square centimeter. The chip encased inside the smartphone in your pocket typically draws around 5 W.

Nevertheless, while computer chips won't burn a literal hole in your pocket (though they do get hot enough to fry an egg), they still require a lot of current to run the applications we use every day. Consider the data-center SoC: On average, it consumes 200 W delivered to its transistors at about 1 to 2 volts, which means the chip is drawing 100 to 200 amperes of current from the voltage regulators that supply it. Your typical refrigerator draws only 6 A. High-end mobile phones can draw a tenth as much power as data-center SoCs, but even so that's still about 10-20 A of current. That's up to three refrigerators, in your pocket!
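As a quick sanity check, those current figures follow directly from I = P / V. Below is a minimal sketch of the arithmetic in Python; the wattages and rail voltages are the ones quoted above, while the helper name and structure are purely illustrative.

    # Back-of-the-envelope check of the current figures quoted above: I = P / V.
    def supply_current(power_w: float, rail_v: float) -> float:
        """Current (amperes) drawn from the regulators at a given power and rail voltage."""
        return power_w / rail_v

    # Data-center SoC: roughly 200 W delivered at a 1 V to 2 V core rail.
    print(supply_current(200, 2.0), "A at 2 V")   # 100 A
    print(supply_current(200, 1.0), "A at 1 V")   # 200 A

    # High-end phone SoC: about a tenth of that power at similar rail voltages.
    print(supply_current(20, 2.0), "A to", supply_current(20, 1.0), "A")  # ~10 to 20 A
    # ...versus the roughly 6 A a typical refrigerator draws.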
Delivering that current to billions of transistors is quickly becoming one of the major bottlenecks in high-performance SoC design. As transistors continue to be made tinier, the interconnects that supply them with current must be packed ever closer and be made ever finer, which increases resistance and saps power. This can't go on: Without a big change in the way electrons get to and from devices on a chip, it won't matter how much smaller we can make transistors.

In today's processors both signals and power reach the silicon from above. New technology would separate those functions, saving power and making more room for signal routes. Chris Philpot

Fortunately, we have a promising solution: We can use a side of the silicon that's long been ignored.

Electrons have to travel a long way to get from the source that generates them to the transistors that compute with them. In most electronics they travel along the copper traces of a printed circuit board into a package that holds the SoC, through the solder balls that connect the chip to the package, and then via on-chip interconnects to the transistors themselves. It's this last stage that really matters. To see why, it helps to understand how chips are made.

An SoC starts as a bare piece of high-quality, crystalline silicon. We first make a layer of transistors at the very top of that silicon. Next we link them together with metal interconnects to form circuits with useful computing functions. These interconnects are formed in layers called a stack, and it can take a 10-to-20-layer stack to deliver power and data to the billions of transistors on today's chips. The layers closest to the silicon transistors are thin and small in order to connect to the tiny transistors, but they grow in size at higher levels of the stack. It's these levels with broader interconnects that are better at delivering power, because they have less resistance.

Today, both power and signals reach transistors from a network of interconnects above the silicon (the "front side"). But increasing resistance as these interconnects are scaled down to ever-finer dimensions is making that scheme untenable. Chris Philpot

You can see, then, that the metal that powers circuits--the power delivery network, or PDN--sits on top of the transistors. We refer to this as front-side power delivery. You can also see that the power network unavoidably competes for space with the network of wires that delivers signals, because they share the same set of copper resources.

In order to get power and signals off the SoC, we typically connect the uppermost layer of metal--the farthest from the transistors--to solder balls (also called bumps) in the chip package. So for electrons to reach any transistor and do useful work, they have to traverse 10 to 20 layers of increasingly narrow and tortuous metal until they can finally squeeze through to the very last layer of local wires.

This way of distributing power is fundamentally lossy. At every stage along the path, some power is lost, and some must be used to control the delivery itself. In today's SoCs, designers typically budget for losses that reduce the voltage by no more than 10 percent between the package and the transistors. Thus, if we hit a total efficiency of 90 percent or greater in a power-delivery network, our designs are on the right track. Historically, such efficiencies have been achievable with good engineering--some might even say it was easy compared to the challenges we face today. In today's electronics, SoC designers not only have to manage increasing power densities but must do so with interconnects that lose power at a sharply accelerating rate with each new generation.
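To make that 10 percent budget concrete, here is a minimal lumped-resistance sketch: the entire front-side network is collapsed into a single effective series resistance between the package bumps and the transistors. The specific resistance and current values are assumptions chosen for illustration, not measurements from the authors' designs.

    # Lumped-element sketch of the IR-drop budget described above. The whole
    # power-delivery network (PDN) is modeled as one effective series resistance;
    # the ohm and ampere values below are illustrative assumptions only.
    def pdn_delivery(v_supply_v: float, load_a: float, r_pdn_ohm: float):
        """Return (voltage reaching the transistors, delivery efficiency)."""
        v_drop = load_a * r_pdn_ohm            # Ohm's law: IR drop across the network
        v_delivered = v_supply_v - v_drop
        return v_delivered, v_delivered / v_supply_v

    # Example: 1.0 V at the package, 150 A load, 0.5-milliohm effective PDN resistance.
    v_tx, eff = pdn_delivery(1.0, 150.0, 0.0005)
    print(f"delivered {v_tx:.3f} V, efficiency {eff:.1%}")       # 0.925 V, 92.5%
    print("on track" if eff >= 0.90 else "over the loss budget") # 90% rule of thumb

As interconnect resistance climbs with each node, the same load current produces a larger drop in this simple model, which is why the 90 percent target keeps getting harder to hit with a purely front-side network.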
The increasing lossiness has to do with how we make nanoscale wires. That process and its accompanying materials trace back to about 1997, when IBM began to make interconnects out of copper instead of aluminum, and the industry shifted along with it. Up until then aluminum wires had been fine conductors, but within a few more steps along the Moore's Law curve their resistance would have grown too high and they would have become unreliable. Copper is more conductive at modern IC scales. But even copper's resistance began to be problematic once interconnect widths shrank below 100 nanometers. Today, the smallest manufactured interconnects are about 20 nm, so resistance is now an urgent issue.

It helps to picture the electrons in an interconnect as a full set of balls on a billiards table. Now imagine shoving them all from one end of the table toward the other. A few would collide and bounce against each other on the way, but most would make the journey in a straight-ish line. Now consider shrinking the table by half--you'd get a lot more collisions and the balls would move more slowly. Next, shrink it again and increase the number of billiard balls tenfold, and you're in something like the situation chipmakers face now. Real electrons don't collide, exactly, but they get close enough to one another to impose a scattering force that disrupts the flow through the wire. At nanoscale dimensions, this leads to vastly higher resistance in the wires, which causes significant power-delivery loss.

Increasing electrical resistance is not a new challenge, but the magnitude of the increase we are seeing now with each subsequent process node is unprecedented. Furthermore, traditional ways of managing this increase are no longer an option, because the manufacturing rules at the nanoscale impose so many constraints. Gone are the days when we could arbitrarily increase the widths of certain wires in order to combat increasing resistance. Now designers have to stick to certain specified wire widths or else the chip may not be manufacturable. So the industry is faced with the twin problems of higher resistance in interconnects and less room for them on the chip.

There is another way: We can exploit the "empty" silicon that lies below the transistors. At Imec, where authors Beyne and Zografos work, we have pioneered a manufacturing concept called "buried power rails," or BPR. The technique builds power connections below the transistors instead of above them, with the aim of creating fatter, less resistive rails and freeing space for signal-carrying interconnects above the transistor layer.

To reduce the resistance in power delivery, transistors will tap power rails buried within the silicon. These are relatively large, low-resistance conductors that multiple logic cells can connect to. Chris Philpot

To build BPRs, you first have to dig out deep trenches below the transistors and then fill them with metal. You have to do this before you make the transistors themselves, so the metal choice is important: It must withstand the processing steps used to make high-quality transistors, which can reach about 1,000 °C. At that temperature copper melts, and molten copper could contaminate the whole chip. We've therefore experimented with ruthenium and tungsten, which have higher melting points.

Since there is so much unused space below the transistors, you can make the BPR trenches wide and deep, which is perfect for delivering power. Compared to the thin metal layers directly on top of the transistors, BPRs can have 1/20 to 1/30 the resistance. That means BPRs effectively allow you to deliver more power to the transistors. Furthermore, by moving the power rails off the top side of the transistors, you free up room for the signal-carrying interconnects. These interconnects form fundamental circuit "cells"--the smallest circuit units, such as SRAM memory bit cells or the simple logic gates we use to compose more complex circuits. By using the space we've freed up, we could shrink those cells by 16 percent or more, and that could ultimately translate to more transistors per chip. Even if feature size stayed the same, we'd still push Moore's Law one step further.
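The resistance advantage of a buried rail follows from the familiar R = ρL/(w·h): a wider, deeper trench simply offers far more conducting cross-section. The sketch below uses illustrative dimensions and a single effective resistivity for both conductors (the real rails would be ruthenium or tungsten rather than copper); the numbers are chosen only to land in the 20-to-30-fold range cited above and are not Imec process figures.

    # Why a buried power rail (BPR) beats a thin front-side wire: R = rho * L / (w * h).
    # All dimensions and the effective resistivity are illustrative assumptions.
    def wire_resistance(rho_ohm_m: float, length_m: float, width_m: float, height_m: float) -> float:
        return rho_ohm_m * length_m / (width_m * height_m)

    NM, UM = 1e-9, 1e-6
    RHO = 1.0e-7        # effective resistivity (ohm-meters) at nanoscale dimensions, assumed
    LENGTH = 10 * UM    # compare the same 10-micrometer run of each conductor

    r_local = wire_resistance(RHO, LENGTH, 20 * NM, 40 * NM)   # thin front-side local wire
    r_bpr   = wire_resistance(RHO, LENGTH, 50 * NM, 400 * NM)  # wide, deep buried rail

    print(f"local wire ~{r_local:.0f} ohm, buried rail ~{r_bpr:.0f} ohm, "
          f"ratio ~{r_local / r_bpr:.0f}x")                    # roughly 25x lower resistance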
Unfortunately, it looks like burying local power rails alone won't be enough. You still have to convey power to those rails down from the top side of the chip, and that will cost efficiency and some voltage. Researchers at Arm, including authors Cline and Prasad, ran a simulation on one of their CPUs and found that, by themselves, BPRs could allow you to build a power network 40 percent more efficient than an ordinary front-side power-delivery network. But they also found that even if you used BPRs with front-side power delivery, the overall voltage delivered to the transistors was not high enough to sustain high-performance operation of the CPU.

Luckily, Imec was simultaneously developing a complementary solution to further improve power delivery: Move the entire power-delivery network from the front side of the chip to the back side. This solution is called "back-side power delivery," or more generally "back-side metallization." It involves thinning down the silicon that is underneath the transistors to 500 nm or less, at which point you can create nanometer-scale "through-silicon vias," or nano-TSVs. These are vertical interconnects that connect up through the back side of the silicon to the bottom of the buried rails, like hundreds of tiny mineshafts.

Once the nano-TSVs have been created below the transistors and BPRs, you can deposit additional layers of metal on the back side of the chip to form a complete power-delivery network. Expanding on our earlier simulations, we at Arm found that just two layers of thick back-side metal were enough to do the job. As long as the nano-TSVs are spaced closer than 2 micrometers from each other, you can design a back-side PDN that is four times as efficient as the front-side PDN with buried power rails and seven times as efficient as the traditional front-side PDN.

The back-side PDN has the additional advantage of being physically separated from the signal network, so the two networks no longer compete for the same metal-layer resources. There's more room for each. It also means that the metal layers no longer need to be a compromise between what power routes prefer (thick and wide, for low resistance) and what signal routes prefer (thin and narrow, so they can connect densely packed transistors). You can simultaneously tune the back-side metal layers for power routing and the front-side metal layers for signal routing and get the best of both worlds.

Moving the power-delivery network to the other side of the silicon--the "back side"--reduces voltage loss even more, because all the interconnects in the network can be made thicker to lower resistance. What's more, removing the power-delivery network from above the silicon leaves more room for signal routes, leading to even smaller logic circuits and letting chipmakers squeeze more transistors into the same area of silicon. Chris Philpot/Imec

In our designs at Arm, we found that for both the traditional front-side PDN and the front-side PDN with buried power rails, we had to sacrifice design performance. But with a back-side PDN the CPU was able to achieve high frequencies while keeping power delivery electrically efficient.

You might, of course, be wondering how you get signals and power from the package to the chip in such a scheme. The nano-TSVs are the key here, too. They can be used to transfer all input and output signals from the front side to the back side of the chip. That way, both the power and the I/O signals can be attached to solder balls placed on the back side.
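One way to see why the 2-micrometer spacing matters is a toy lumped model of a single supply path: the worst-served cell sits about half a pitch away from its nearest nano-TSV tap along the buried rail, so its IR drop grows with the pitch. The resistances and current below are assumptions for illustration only; the four- and seven-fold efficiency comparisons quoted above come from the full Arm and Imec design simulations, not from a cartoon like this.

    # Toy model: worst-case IR drop for a cell midway between two nano-TSV taps.
    # All resistance and current values are illustrative assumptions.
    R_RAIL_PER_UM = 2.0     # ohms per micrometer of buried rail (assumed)
    R_TSV = 1.0             # ohms per nano-TSV (assumed)
    CELL_CURRENT_A = 0.001  # current drawn by the worst-case cell (assumed)

    def worst_case_drop_mv(pitch_um: float) -> float:
        """Worst-case IR drop (millivolts) for a cell half a TSV pitch from its tap."""
        r_path = R_TSV + R_RAIL_PER_UM * (pitch_um / 2.0)
        return CELL_CURRENT_A * r_path * 1000.0

    for pitch in (1.0, 2.0, 5.0, 10.0):
        print(f"TSV pitch {pitch:4.1f} um -> worst-case drop {worst_case_drop_mv(pitch):.1f} mV")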
Simulation studies are a great start, and they show the CPU-design-level potential of back-side PDNs with BPRs. But there is a long road ahead to bring these technologies to high-volume manufacturing, and significant materials and manufacturing challenges still need to be solved. The choice of metals for the BPRs and nano-TSVs is critical to manufacturability and electrical efficiency. The high-aspect-ratio (deep but skinny) trenches needed for both BPRs and nano-TSVs are also very difficult to make: Reliably etching tightly spaced, deep-but-narrow features in the silicon substrate and filling them with metal is relatively new to chip manufacturing, and the industry is still getting to grips with it. Developing manufacturing tools and methods that are reliable and repeatable will be essential to unlocking widespread adoption of nano-TSVs.

Furthermore, battery-powered SoCs, like those in your phone and in other power-constrained designs, already have much more sophisticated power-delivery networks than those we've discussed so far. Modern-day power delivery separates chips into multiple power domains that can operate at different voltages or even be turned off altogether to conserve power. (See "A Circuit to Boost Battery Life," IEEE Spectrum, August 2021.)

In tests of multiple designs using three varieties of power delivery, only back-side power with buried power rails provides enough voltage without compromising performance. Chris Philpot

Thus, back-side PDNs and BPRs are eventually going to have to do much more than just efficiently deliver electrons. They're going to have to precisely control where electrons go and how many of them get there. Chip designers will not want to take multiple steps backward when it comes to chip-level power design. So we will have to simultaneously optimize design and manufacturing to make sure that BPRs and back-side PDNs are better than--or at least compatible with--the power-saving IC techniques we use today.

The future of computing depends on these new manufacturing techniques. Power consumption is crucial whether you're worrying about the cooling bill for a data center or the number of times you have to charge your smartphone each day. And as we continue to shrink transistors and ICs, delivering power becomes a significant on-chip challenge. BPR and back-side PDNs may well answer that challenge, if engineers can overcome the complexities that come with them.

This article appears in the September 2021 print issue as "Power From Below."