https://spectrum.ieee.org/space-station-accident-needs-independant-investigation

Space Station Incident Demands Independent Investigation

A space expert warns NASA's safety culture may be eroding again

By James Oberg | 06 Aug 2021 | 7 min read

[Photo: Russia's "Nauka" Multipurpose Laboratory Module is pictured shortly after docking to the Zvezda service module's Earth-facing port on the International Space Station, with the Brazilian coast 263 miles below.
In the foreground is the Soyuz MS-18 crew ship docked to the Rassvet module on 29 July 2021. Credit: NASA]

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

In a major International Space Station milestone more than fifteen years in the making, a long-delayed Russian science laboratory named Nauka automatically docked to the station on 29 July, prompting sighs of relief in the Mission Control Centers in Houston and Moscow. But within a few hours, it became shockingly obvious that the celebrations were premature, and that the ISS had come closer to disaster than at any time in its nearly 25 years in orbit.

While the proximate cause of the incident is still being unraveled, there are worrisome signs that NASA may be repeating some of the lapses that led to the loss of the Challenger and Columbia space shuttles and their crews. And because political pressures seem to be driving much of the problem, only an independent investigation with serious political heft can reverse any erosion in safety culture.

Let's step back and look at what we know happened: In a cyber-logical process still not entirely clear, while passing northwest to southeast over Indonesia, the Nauka module's autopilot apparently decided it was supposed to fly away from the station. Although actually attached, and with the latches on the station side closed, the module began trying to line itself up in preparation to fire its main engines, using an attitude-adjustment thruster. As the thruster fired, the entire station was slowly dragged askew as well.

Since the ISS was well beyond the coverage of Russian ground stations, and since the worldwide Soviet-era fleet of tracking ships and world-circling network of "Luch" relay comsats had long since been scrapped, with replacements slow in coming, nobody even knew Nauka was firing its thruster until a slight but growing shift in the ISS's orientation was finally detected by NASA.

[Photo: Nauka approaches the space station, preparing to dock on 29 July 2021. Credit: NASA]

Within minutes, the Flight Director in Houston declared a "spacecraft emergency"--the first in the station's lifetime--and his team tried to figure out what could be done to avoid the ISS spinning up so fast that structural damage could result. The football-field-sized array of pressurized modules, support girders, solar arrays, radiator panels, robotic arms, and other mechanisms was designed to operate in a weightless environment. But it was also built to handle stresses both from directional thrusting (used to boost the altitude periodically) and rotational torques (usually to maintain a horizon-level orientation, or to turn to a specific different orientation to facilitate the arrival or departure of visiting vehicles). The juncture latches that held the ISS's modules together had been sized to accommodate these forces with a comfortable safety margin, but a maneuver of this scale had never been expected.

Meanwhile, the station's automated attitude control system had also noted the deviation and began firing other thrusters to counteract it. These too were on the Russian half of the station. The only US orientation-control system is a set of spinning flywheels that gently turn the structure without the need for thruster propellant, but which would have been unable to cope with the unrelenting push of Nauka's thruster.
Later mass-media scenarios depicted teams of specialists manually directing on-board systems into action, but the exact actions taken in response still remain unclear--and probably were mostly if not entirely automatic. The drama continued as the station crossed the Pacific, then South America and the mid-Atlantic, finally entering Russian radio contact over central Europe an hour after the crisis had begun. By then the thrusting had stopped, probably because the guilty thruster had exhausted its fuel supply. The sane half of the Russian segment then restored the desired station orientation.

Initial private attempts to visually represent the station's tumble from telemetry data, posted online, looked bizarre, with enormous rapid gyrations in different directions. Mercifully, the truth of the situation is that the ISS went through a simple long-axis spin of one and a half full turns, and then a half turn back to the starting alignment. The jumps and zig-zags were computational artifacts of the representational schemes used by NASA, which relate to the concept of "gimbal lock" in gyroscopes.

How close the station had come to disaster is an open question, and the flight director alluded to it humorously in a later tweet, saying he'd never been so happy as when he saw on external TV cameras that the solar arrays and radiators were still standing straight in place. And any excessive bending stress along docking interfaces between the Russian and American segments would have demanded quick leak checks.

But even if the rotation was "simple," the undeniably dramatic event has both short-term and long-term significance for the future of the space station. And it has antecedents dating back to the very birth of the ISS in 1998.

This, unfortunately, is when the human misjudgments began to surface. To calm things down, official NASA spokesmen provided very preliminary underestimates of how big and how fast the station's spin had been. These were presented without any caveat that the numbers were unverified--and the real figures turned out to be much worse. The Russian side, for its part, dismissed the attitude deviation as a routine bump in a normal process of automatic docking and proclaimed there would be no formal incident investigation, especially not one that would involve their American partners. Indeed, both sides seemed to agree that the sooner the incident was forgotten, the better.

As of now, the US side is deep into analysis of induced stresses on critical ISS structures, starting with the most important ones, such as the solar arrays. Another standard procedure after this kind of event is to assess potential indicators of stress-induced damage, especially in terms of air leaks, and where best to monitor cabin pressure and other parameters to detect any such leaks.

The bureaucratic instinct to minimize the described severity of the event needs cold-blooded assessment. Sadly, past experience shows that this mindset of complacency and hoping for the best is the result of the natural human mental drift that comes with long periods of apparent normalcy. Even if there is a slowly emerging problem, as long as everything looks okay day to day, the tendency is to ignore warning signals as minor perturbations. The safety of the system is assumed rather than verified--and consequently managers are led into missing clues, or making careless choices, that lead to disaster.
So these recent indications of this mental attitude about the station's attitude are worrisome. The NASA team has experienced that same slow cultural rot of assuming safety several times over the past decades, with hideous consequences. Team members in the year leading up to the 1986 Challenger disaster (and I was deep within the Mission Control operations then) had noticed and begun voicing concerns over growing carelessness and even humorous reactions to occasional "stupid mistakes," without effect. Then, after imprudent management decisions, seven people died.

The same drift was noticed in the late 1990s, especially in the joint US/Russian operations on Mir and on early ISS flights. It led to the forced departure of a number of top NASA officials who had objected to the trend being imposed by the White House's post-Cold War diplomatic goals, implemented by NASA Administrator Dan Goldin. Safety took a decidedly secondary priority to international diplomatic value.

Legendary Mission Control leader Gene Kranz described the decisions that were made in the mid-1990s over his own objections, objections that led to his sudden departure from NASA. "Russia was subsequently assigned partnership responsibilities for critical in-line tasks with minimal concern for the political and technical difficulties as well as the cost and schedule risks," he wrote in 1999. "This was the first time in the history of US manned space flight that NASA assigned critical path, in-line tasks with little or no backup."

By 2001-'02, the results were as Kranz and his colleagues had warned. "Today's problems with the space station are the product of a program driven by an overriding political objective and developed by an ad hoc committee, which bypassed NASA's proven management and engineering teams," he concluded. By then the warped NASA management culture that soon enabled the Columbia disaster in 2003 was fully in place.

Some of the wording in current management proclamations regarding the Nauka docking has an eerie ring of familiarity. "Space cooperation continues to be a hallmark of U.S.-Russian relations and I have no doubt that our joint work reinforces the ties that have bound our collaborative efforts over the many years," wrote NASA Administrator Bill Nelson to Dmitry Rogozin, head of the Russian space agency, on July 31. There was no mention of the ISS's first declared spacecraft emergency, nor any dissatisfaction with the Russian contribution to it.

To reverse the apparent new cultural drift, and thus potentially forestall the same kind of dismal results as before, NASA headquarters or some even higher office is going to have to intervene. The causes of the Nauka-induced "space sumo match" of massive cross-pushing bodies need to be determined and verified. And somebody needs to expose the decision process that allowed NASA to approve the ISS docking of a powerful thruster-equipped module without the on-site, real-time capability to quickly disarm that system in an emergency.

Because the apparent sloppiness of NASA's safety oversight of visiting vehicles looks to be directly associated with maintaining good relations with Moscow, the driving factor seems to be White House diplomatic goals--and that's the level where a corrective impetus must originate. With a long-time U.S.
Senate colleague, Nelson, recently named head of NASA, President Biden is well positioned to issue such guidance for a thorough investigation by an independent commission, followed by implementation of needed reforms. The buck stops with him.

As for Nauka's role in this process of safety-culture repair, it turns out that, by bizarre coincidence, a similar pattern played out on the very first Russian launch that inaugurated the ISS program, that of the "Zarya" module [called the "FGB"] in late 1998. Nauka is the repeatedly rebuilt and upgraded backup for that very module, and the parallels are remarkable.

The day the FGB was launched, on 20 November 1998, the mission faced disaster when the spacecraft refused to accept ground commands to raise its original atmosphere-skimming parking orbit. As it crossed over Russian ground sites, controllers in Moscow sent commands, and the spacecraft didn't answer. Meanwhile, NASA guests at a nearby facility were celebrating with their Russian colleagues; nobody told them of the crisis. Finally, on the last available in-range pass, controllers tried a new command format that the onboard computer did recognize and acknowledge. The mission--and the entire ISS project--was saved, and the American side never knew. Only years later did the story appear in Russian newspapers.

Still, for all its messy difficulties and frustrating disappointments, the U.S./Russian partnership turned out to be a remarkably robust "mutual co-dependence" arrangement when managed with "tough love." Neither side really had practical alternatives if it wanted a permanent human presence in space, and they still don't--so both teams were devoted to making it work. And it could still work--if NASA keeps faith with its traditional safety culture and with the lives of those astronauts who died in the past because NASA had failed them.

Postscript: As this story was going to press, a NASA spokesperson responded to queries about the incident, saying: "As shared by NASA's Kathy Lueders and Joel Montalbano in the media telecon following the event, Roscosmos regularly updated NASA and the rest of the international partners on MLM's progress during the approach to station. We continue to have confidence in our partnership with Roscosmos to operate the International Space Station. When the unexpected thruster firings occurred, flight control teams were able to enact contingency procedures and return the station to normal operations within an hour. We would point you to Roscosmos for any specifics on Russian systems/performance/procedures."

James Oberg is a retired "rocket scientist" based in Texas, following a 20-plus-year career in NASA Mission Control and subsequent work as an on-air space consultant for ABC News and then NBC News. The author of a dozen books and hundreds of magazine articles on the past, present, and potential future of space exploration, he has reported from space launch and operations centers across the United States, Russia, and North Korea.
Fast, Efficient Neural Networks Copy Dragonfly Brains

An insect-inspired AI could make missile-defense systems more nimble

By Frances Chance | 30 Jul 2021 | 12 min read

[Lead image: Getty Images/Richard Penska/500px]

In each of our brains, 86 billion neurons work in parallel, processing inputs from senses and memories to produce the many feats of human cognition. The brains of other creatures are less broadly capable, but those animals often exhibit innate aptitudes for particular tasks, abilities honed by millions of years of evolution. Most of us have seen animals doing clever things. Perhaps your house pet is an escape artist. Maybe you live near the migration path of birds or butterflies and celebrate their annual return. Or perhaps you have marveled at the seeming single-mindedness with which ants invade your pantry.

Looking to such specialized nervous systems as a model for artificial intelligence may prove just as valuable as studying the human brain, if not more so. Consider the brains of those ants in your pantry. Each has some 250,000 neurons. Larger insects have closer to 1 million. In my research at Sandia National Laboratories in Albuquerque, I study the brains of one of these larger insects, the dragonfly. I and my colleagues at Sandia, a national-security laboratory, hope to take advantage of these insects' specializations to design computing systems optimized for tasks like intercepting an incoming missile or following an odor plume. By harnessing the speed, simplicity, and efficiency of the dragonfly nervous system, we aim to design computers that perform these functions faster and at a fraction of the power that conventional systems consume.

Looking to a dragonfly as a harbinger of future computer systems may seem counterintuitive. The developments in artificial intelligence and machine learning that make news are typically algorithms that mimic human intelligence or even surpass people's abilities. Neural networks can already perform as well as--if not better than--people at some specific tasks, such as detecting cancer in medical scans. And the potential of these neural networks stretches far beyond visual processing. The computer program AlphaZero, trained by self-play, is the best Go player in the world. Its sibling AI, AlphaStar, ranks among the best StarCraft II players.

Such feats, however, come at a cost. Developing these sophisticated systems requires massive amounts of processing power, generally available only to select institutions with the fastest supercomputers and the resources to support them. And the energy cost is off-putting. Recent estimates suggest that the carbon emissions resulting from developing and training a natural-language-processing algorithm are greater than those produced by four cars over their lifetimes.

It takes the dragonfly only about 50 milliseconds to begin to respond to a prey's maneuver.
If we assume 10 ms for cells in the eye to detect and transmit information about the prey, and another 5 ms for muscles to start producing force, this leaves only 35 ms for the neural circuitry to make its calculations. Given that it typically takes a single neuron at least 10 ms to integrate inputs, the underlying neural network can be at most three layers deep.

But does an artificial neural network really need to be large and complex to be useful? I believe it doesn't. To reap the benefits of neural-inspired computers in the near term, we must strike a balance between simplicity and sophistication. Which brings me back to the dragonfly, an animal with a brain that may provide precisely the right balance for certain applications.

If you have ever encountered a dragonfly, you already know how fast these beautiful creatures can zoom, and you've seen their incredible agility in the air. Maybe less obvious from casual observation is their excellent hunting ability: Dragonflies successfully capture up to 95 percent of the prey they pursue, eating hundreds of mosquitoes in a day.

The physical prowess of the dragonfly has certainly not gone unnoticed. For decades, U.S. agencies have experimented with using dragonfly-inspired designs for surveillance drones. Now it is time to turn our attention to the brain that controls this tiny hunting machine.

While dragonflies may not be able to play strategic games like Go, a dragonfly does demonstrate a form of strategy in the way it aims ahead of its prey's current location to intercept its dinner. This requires calculations performed extremely fast--it typically takes a dragonfly just 50 milliseconds to start turning in response to a prey's maneuver. It does this while tracking the angle between its head and its body, so that it knows which wings to flap faster to turn ahead of the prey. And it also tracks its own movements, because as the dragonfly turns, the prey will also appear to move.

[Figure: The model dragonfly reorients in response to the prey's turning. The smaller black circle is the dragonfly's head, held at its initial position. The solid black line indicates the direction of the dragonfly's flight; the dotted blue lines are the plane of the model dragonfly's eye. The red star is the prey's position relative to the dragonfly, with the dotted red line indicating the dragonfly's line of sight.]

So the dragonfly's brain is performing a remarkable feat, given that the time needed for a single neuron to add up all its inputs--called its membrane time constant--exceeds 10 milliseconds. If you factor in time for the eye to process visual information and for the muscles to produce the force needed to move, there's really only time for three, maybe four, layers of neurons, in sequence, to add up their inputs and pass on information.

Could I build a neural network that works like the dragonfly interception system? I also wondered about uses for such a neural-inspired interception system. Being at Sandia, I immediately considered defense applications, such as missile defense, imagining missiles of the future with onboard systems designed to rapidly calculate interception trajectories without affecting a missile's weight or power consumption. But there are civilian applications as well. For example, the algorithms that control self-driving cars might be made more efficient, no longer requiring a trunkful of computing equipment.
If a dragonfly-inspired system can perform the calculations to plot an interception trajectory, perhaps autonomous drones could use it to avoid collisions. And if a computer could be made the same size as a dragonfly brain (about 6 cubic millimeters), perhaps insect repellent and mosquito netting will one day become a thing of the past, replaced by tiny insect-zapping drones!

To begin to answer these questions, I created a simple neural network to stand in for the dragonfly's nervous system and used it to calculate the turns that a dragonfly makes to capture prey. My three-layer neural network exists as a software simulation. Initially, I worked in Matlab simply because that was the coding environment I was already using. I have since ported the model to Python.

Because a dragonfly has to see its prey to capture it, I started by simulating a simplified version of the dragonfly's eyes, capturing the minimum detail required for tracking prey. Although dragonflies have two eyes, it's generally accepted that they do not use stereoscopic depth perception to estimate distance to their prey. I therefore did not model both eyes. Nor did I try to match the resolution of a dragonfly eye. Instead, the first layer of the neural network includes 441 neurons that represent input from the eyes, each describing a specific region of the visual field--these regions are tiled to form a 21-by-21-neuron array that covers the dragonfly's field of view. As the dragonfly turns, the location of the prey's image in the dragonfly's field of view changes. The dragonfly calculates the turns required to align the prey's image with one (or a few, if the prey is large enough) of these "eye" neurons. A second set of 441 neurons, also in the first layer of the network, tells the dragonfly which eye neurons should be aligned with the prey's image, that is, where the prey should be within its field of view.

[Figure: The model dragonfly engages its prey.]

Processing--the calculations that take input describing the movement of an object across the field of vision and turn it into instructions about which direction the dragonfly needs to turn--happens between the first and third layers of my artificial neural network. In this second layer, I used an array of 194,481 (21^4) neurons, likely much larger than the number of neurons used by a dragonfly for this task. I precalculated the weights of the connections between all the neurons in the network. While these weights could be learned with enough time, there is an advantage to "learning" through evolution and preprogrammed neural-network architectures. Once it comes out of its nymph stage as a winged adult (technically referred to as a teneral), the dragonfly does not have a parent to feed it or show it how to hunt. The dragonfly is in a vulnerable state and getting used to a new body--it would be disadvantageous to have to figure out a hunting strategy at the same time. I set the weights of the network to allow the model dragonfly to calculate the correct turns to intercept its prey from incoming visual information.
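To make the first layer's bookkeeping concrete, here is a minimal sketch, in Python, of how a 21-by-21 tiling of the visual field can map a prey direction onto a single input neuron. The field-of-view width, the angles, and the function name are my own illustrative assumptions, not the actual parameters of the model described above.

import numpy as np

# Illustrative sketch (not the model's actual code): map a prey direction
# within an assumed field of view onto a 21-by-21 tiling of "eye" neurons.
GRID = 21                     # 21 x 21 = 441 eye neurons in the first layer
FIELD_OF_VIEW_DEG = 120.0     # assumed width of the modeled visual field

def eye_activation(azimuth_deg, elevation_deg):
    """Return a 441-element one-hot vector marking the eye neuron whose
    patch of the visual field currently contains the prey's image."""
    half = FIELD_OF_VIEW_DEG / 2.0
    # Clip to the field of view, then scale each angle to a grid index 0..20.
    col = int(np.clip((azimuth_deg + half) / FIELD_OF_VIEW_DEG * GRID, 0, GRID - 1))
    row = int(np.clip((elevation_deg + half) / FIELD_OF_VIEW_DEG * GRID, 0, GRID - 1))
    activation = np.zeros(GRID * GRID)
    activation[row * GRID + col] = 1.0
    return activation

# The full first layer pairs this 441-element vector with a second 441-element
# vector encoding where the prey's image *should* sit; the middle layer in the
# article has 21**4 = 194,481 neurons with precalculated (not learned) weights.
prey_now = eye_activation(10.0, -5.0)
print(int(prey_now.argmax()))   # index of the active eye neuron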
What turns are those? Well, if a dragonfly wants to catch a mosquito that's crossing its path, it can't just aim at the mosquito. To borrow from what hockey player Wayne Gretzky once said about pucks, the dragonfly has to aim for where the mosquito is going to be.

You might think that following Gretzky's advice would require a complex algorithm, but in fact the strategy is quite simple: All the dragonfly needs to do is maintain a constant angle between the line of sight to its lunch and a fixed reference direction. Readers who have any experience piloting boats will understand why that is. They know to get worried when the angle between the line of sight to another boat and a reference direction (for example, due north) remains constant, because they are on a collision course. Mariners have long steered away from such a course, known as parallel navigation, precisely to avoid collisions.

[Figure: These three heat maps show the activity patterns of neurons at the same moment; the first set represents the eye, the second represents those neurons that specify which eye neurons to align with the prey's image, and the third represents those that output motor commands.]

Translated to dragonflies, which want to collide with their prey, the prescription is simple: Keep the line of sight to your prey constant relative to some external reference. However, this task is not necessarily trivial for a dragonfly as it swoops and turns, collecting its meals. The dragonfly does not have an internal gyroscope (that we know of) that will maintain a constant orientation and provide a reference regardless of how the dragonfly turns. Nor does it have a magnetic compass that will always point north. In my simplified simulation of dragonfly hunting, the dragonfly turns to align the prey's image with a specific location on its eye, but it needs to calculate what that location should be.

The third and final layer of my simulated neural network is the motor-command layer. The outputs of the neurons in this layer are high-level instructions for the dragonfly's muscles, telling the dragonfly in which direction to turn. The dragonfly also uses the output of this layer to predict the effect of its own maneuvers on the location of the prey's image in its field of view and updates that projected location accordingly. This updating allows the dragonfly to hold the line of sight to its prey steady, relative to the external world, as it approaches.

It is possible that biological dragonflies have evolved additional tools to help with the calculations needed for this prediction. For example, dragonflies have specialized sensors that measure body rotations during flight as well as head rotations relative to the body--if these sensors are fast enough, the dragonfly could calculate the effect of its movements on the prey's image directly from the sensor outputs or use one method to cross-check the other. I did not consider this possibility in my simulation.

To test this three-layer neural network, I simulated a dragonfly and its prey moving at the same speed through three-dimensional space. As they do so, my modeled neural-network brain "sees" the prey, calculates where to point to keep the image of the prey at a constant angle, and sends the appropriate instructions to the muscles. I was able to show that this simple model of a dragonfly's brain can indeed successfully intercept other bugs, even prey traveling along curved or semi-random trajectories. The simulated dragonfly does not quite achieve the success rate of the biological dragonfly, but it also does not have all the advantages (for example, impressive flying speed) for which dragonflies are known. More work is needed to determine whether this neural network is really incorporating all the secrets of the dragonfly's brain.
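For the curious, the parallel-navigation rule itself fits in a few lines of Python. The sketch below is my own illustration of the geometry, reduced to two dimensions and using an arbitrary gain; it is not the model's actual motor-command layer. It measures how the bearing to the prey is drifting in a fixed world frame and turns so as to cancel that drift.

import numpy as np

# 2-D sketch of the parallel-navigation idea: hold the line of sight to the
# prey at a constant angle in an external reference frame. Gain and time step
# are assumptions chosen purely for illustration.

def line_of_sight_angle(pursuer_xy, prey_xy):
    """Bearing of the prey from the pursuer, measured in a fixed world frame."""
    dx, dy = prey_xy - pursuer_xy
    return np.arctan2(dy, dx)

def turn_command(prev_bearing, new_bearing, gain=2.0):
    """A nonzero bearing rate means the pursuer is drifting off the collision
    course, so it turns in the same rotational direction to cancel the drift."""
    bearing_rate = np.arctan2(np.sin(new_bearing - prev_bearing),
                              np.cos(new_bearing - prev_bearing))  # wrap to [-pi, pi]
    return gain * bearing_rate   # commanded turn for this time step

# Example: the prey's image drifts slightly across the visual field between
# two time steps, so a corrective turn is commanded to restore constant bearing.
b0 = line_of_sight_angle(np.array([0.0, 0.0]), np.array([5.0, 2.0]))
b1 = line_of_sight_angle(np.array([0.1, 0.0]), np.array([5.0, 2.2]))
print(turn_command(b0, b1))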
Researchers at the Howard Hughes Medical Institute's Janelia Research Campus, in Virginia, have developed tiny backpacks for dragonflies that can measure electrical signals from a dragonfly's nervous system while it is in flight and transmit these data for analysis. The backpacks are small enough not to distract the dragonfly from the hunt. Similarly, neuroscientists can record signals from individual neurons in the dragonfly's brain while the insect is held motionless but made to think it's moving by presenting it with the appropriate visual cues, creating a dragonfly-scale virtual reality.

Data from these systems allows neuroscientists to validate dragonfly-brain models by comparing their activity with the activity patterns of biological neurons in an active dragonfly. While we cannot yet directly measure individual connections between neurons in the dragonfly brain, my collaborators and I will be able to infer whether the dragonfly's nervous system is making calculations similar to those predicted by my artificial neural network. That will help determine whether connections in the dragonfly brain resemble my precalculated weights in the neural network. We will inevitably find ways in which our model differs from the actual dragonfly brain. Perhaps these differences will provide clues to the shortcuts that the dragonfly brain takes to speed up its calculations.

[Photo: This backpack, which captures signals from electrodes inserted in a dragonfly's brain, was created by Anthony Leonardo, a group leader at Janelia Research Campus. Credit: Anthony Leonardo/Janelia Research Campus/HHMI]

Dragonflies could also teach us how to implement "attention" on a computer. You likely know what it feels like when your brain is at full attention, completely in the zone, focused on one task to the point that other distractions seem to fade away. A dragonfly can likewise focus its attention. Its nervous system turns up the volume on responses to particular, presumably selected, targets, even when other potential prey are visible in the same field of view. It makes sense that once a dragonfly has decided to pursue a particular prey, it should change targets only if it has failed to capture its first choice. (In other words, using parallel navigation to catch a meal is not useful if you are easily distracted.) Even if we end up discovering that the dragonfly mechanisms for directing attention are less sophisticated than those people use to focus in the middle of a crowded coffee shop, it's possible that a simpler but lower-power mechanism will prove advantageous for next-generation algorithms and computer systems by offering efficient ways to discard irrelevant inputs.

The advantages of studying the dragonfly brain do not end with new algorithms; they also can affect systems design. Dragonfly eyes are fast, operating at the equivalent of 200 frames per second: That's several times the speed of human vision. But their spatial resolution is relatively poor, perhaps just a hundredth of that of the human eye. Understanding how the dragonfly hunts so effectively, despite its limited sensing abilities, can suggest ways of designing more efficient systems. Returning to the missile-defense problem, the dragonfly example suggests that antimissile systems with fast optical sensing might require less spatial resolution to hit a target.
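As a thought experiment, the target-locking flavor of attention described above can be captured in a few lines. The salience scores and the lock-until-lost rule here are my own simplifications for illustration, not a description of the dragonfly's actual circuitry.

import numpy as np

# Toy sketch of "attention" as target locking: pick the most salient target,
# then keep responding to it even when stronger distractions appear, switching
# only if the locked target disappears. Purely illustrative assumptions.

class TargetSelector:
    def __init__(self):
        self.locked = None                       # index of the currently pursued target

    def select(self, salience):
        """salience[i] > 0 means target i is visible; larger is more salient."""
        if self.locked is not None and salience[self.locked] > 0:
            return self.locked                   # stay locked on: ignore distractions
        self.locked = int(np.argmax(salience))   # otherwise pick the strongest response
        return self.locked

selector = TargetSelector()
print(selector.select(np.array([0.2, 0.9, 0.1])))  # locks onto target 1
print(selector.select(np.array([0.8, 0.3, 0.1])))  # still target 1, despite target 0
print(selector.select(np.array([0.8, 0.0, 0.1])))  # target 1 lost, so switch to 0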
The dragonfly isn't the only insect that could inform neural-inspired computer design today. Monarch butterflies migrate incredibly long distances, using some innate instinct to begin their journeys at the appropriate time of year and to head in the right direction. We know that monarchs rely on the position of the sun, but navigating by the sun requires keeping track of the time of day. If you are a butterfly heading south, you would want the sun on your left in the morning but on your right in the afternoon. So, to set its course, the butterfly brain must read its own circadian rhythm and combine that information with what it is observing.

Other insects, like the Sahara desert ant, must forage over relatively long distances. Once a source of sustenance is found, this ant does not simply retrace its steps back to the nest along its likely circuitous outbound path. Instead, it calculates a direct route back. Because the location of an ant's food source changes from day to day, it must be able to remember the path it took on its foraging journey, combining visual information with some internal measure of distance traveled, and then calculate its return route from those memories. While nobody knows what neural circuits in the desert ant perform this task, researchers at the Janelia Research Campus have identified neural circuits that allow the fruit fly to self-orient using visual landmarks. The desert ant and monarch butterfly likely use similar mechanisms. Such neural circuits might one day prove useful in, say, low-power drones.
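The ant's trick, often called path integration, is easy to sketch in code. The random walk, step sizes, and function below are invented purely for illustration; real ants are believed to fuse visual cues with an internal odometer, which this toy version leaves out.

import numpy as np

# Minimal path-integration sketch: accumulate each step of the outbound
# journey as a vector, then head straight home along the negated sum.

def home_vector(headings_rad, step_lengths):
    """Sum the outbound displacements and return the direct vector back to the nest."""
    steps = np.stack([np.cos(headings_rad), np.sin(headings_rad)], axis=1)
    outbound = (steps * np.asarray(step_lengths)[:, None]).sum(axis=0)
    return -outbound                            # points back toward the nest

# A meandering foraging path: many short steps in gradually wandering directions.
rng = np.random.default_rng(0)
headings = np.cumsum(rng.normal(0.0, 0.4, size=200))   # wandering heading, radians
lengths = np.full(200, 0.05)                            # assumed 5 cm per step

home = home_vector(headings, lengths)
print("distance to nest:", np.linalg.norm(home))
print("compass heading home (rad):", np.arctan2(home[1], home[0]))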
And what if the efficiency of insect-inspired computation is such that millions of instances of these specialized components can be run in parallel to support more powerful data processing or machine learning? Could the next AlphaZero incorporate millions of antlike foraging architectures to refine its game playing? Perhaps insects will inspire a new generation of computers that look very different from what we have today. A small army of dragonfly-interception-like algorithms could be used to control the moving pieces of an amusement park ride, ensuring that individual cars do not collide (much like pilots steering their boats) even in the midst of a complicated but thrilling dance.

No one knows what the next generation of computers will look like, whether they will be part-cyborg companions or centralized resources much like Isaac Asimov's Multivac. Likewise, no one can tell what the best path to developing these platforms will entail. While researchers developed early neural networks drawing inspiration from the human brain, today's artificial neural networks often rely on decidedly unbrainlike calculations. Studying the calculations of individual neurons in biological neural circuits--currently only directly possible in nonhuman systems--may have more to teach us.

Insects, apparently simple but often astonishing in what they can do, have much to contribute to the development of next-generation computers, especially as neuroscience research continues to drive toward a deeper understanding of how biological neural circuits work. So next time you see an insect doing something clever, imagine the impact on your everyday life if you could have the brilliant efficiency of a small army of tiny dragonfly, butterfly, or ant brains at your disposal. Maybe computers of the future will give new meaning to the term "hive mind," with swarms of highly specialized but extremely efficient minuscule processors, able to be reconfigured and deployed depending on the task at hand. With the advances being made in neuroscience today, this seeming fantasy may be closer to reality than you think.

This article appears in the August 2021 print issue as "Lessons From a Dragonfly's Brain."