https://spectrum.ieee.org/ai-guided-robots-are-ready-to-sort-your-recyclables
AI-Guided Robots Are Ready to Sort Your Recyclables
Computer-vision systems use shapes, colors, and even labels to identify materials at superhuman speeds
Jason Calaiaro | 11 min read

[Image: The AMP Cortex, a high-speed robotic sorting system guided by artificial intelligence, identifies materials by category on a conveyor belt. To date, systems in operation have recognized more than 50 billion objects in various permutations. AMP Robotics]

It's Tuesday night.
In front of your house sits a large blue bin, full of newspaper, cardboard, bottles, cans, foil take-out trays, and empty yogurt containers. You may feel virtuous, thinking you're doing your part to reduce waste. But after you rinse out that yogurt container and toss it into the bin, you probably don't think much about it ever again. The truth about recycling in many parts of the United States and much of Europe is sobering. Tomorrow morning, the contents of the recycling bin will be dumped into a truck and taken to the recycling facility to be sorted. Most of the material will head off for processing and eventual use in new products. But a lot of it will end up in a landfill.

So how much of the material that goes into the typical bin avoids a trip to landfill? For countries that do curbside recycling, the number--called the recovery rate--appears to average around 70 to 90 percent, though widespread data isn't available. That doesn't seem bad. But in some municipalities, it can go as low as 40 percent. What's worse, only a small quantity of all recyclables makes it into the bins--just 32 percent in the United States and 10 to 15 percent globally. That's a lot of material made from finite resources that needlessly goes to waste. We have to do better than that. Right now, the recycling industry is facing a financial crisis, thanks to falling prices for sorted recyclables as well as a policy, enacted by China in 2018, that restricts the import of many materials destined for recycling and shuts out most recyclables originating in the United States. There is a way to do better. Using computer vision, machine learning, and robots to identify and sort recycled material, we can improve the accuracy of automatic sorting machines, reduce the need for human intervention, and boost overall recovery rates.
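Those two numbers compound: the share of recyclable material that actually gets recycled is roughly the capture rate (what makes it into the bin) times the facility's recovery rate. A quick back-of-the-envelope sketch using the figures above (the multiplication is illustrative, not a formal industry metric):

```python
def recycled_fraction(capture_rate: float, recovery_rate: float) -> float:
    """Rough end-to-end estimate: material captured at the curb times
    material the sorting facility successfully recovers."""
    return capture_rate * recovery_rate

# U.S. figures from the text: ~32 percent capture; recovery averages 70-90 percent.
low = recycled_fraction(0.32, 0.70)
high = recycled_fraction(0.32, 0.90)
print(f"{low:.0%} to {high:.0%}")  # roughly 22% to 29% of recyclables actually recycled
```

Even with a good facility, most of the loss happens before the truck ever arrives.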
My company, AMP Robotics, based in Louisville, Colo., is developing hardware and software that relies on image analysis to sort recyclables with far higher accuracy and recovery rates than are typical for conventional systems. Other companies are similarly working to apply AI and robotics to recycling, including Bulk Handling Systems, Machinex, and Tomra. To date, the technology has been installed in hundreds of sorting facilities around the world. Expanding its use will prevent waste and help the environment by keeping recyclables out of landfills and making them easier to reprocess and reuse. Before I explain how AI will improve recycling, let's look at how recycled materials were sorted in the past and how they're being sorted in most parts of the world today. When recycling began in the 1960s, the task of sorting fell to the consumer--newspapers in one bundle, cardboard in another, and glass and cans in their own separate bins. That turned out to be too much of a hassle for many people and limited the amount of recyclable materials gathered. In the 1970s, many cities took away the multiple bins and replaced them with a single container, with sorting happening downstream. This "single stream" recycling boosted participation, and it is now the dominant form of recycling in developed countries. Moving the task of sorting further downstream led to the building of sorting facilities. To do the actual sorting, recycling entrepreneurs adapted equipment from the mining and agriculture industries, filling in with human labor as necessary. These sorting systems had no computer intelligence, relying instead on the physical properties of materials to separate them. Glass, for example, can be broken into tiny pieces and then sifted and collected.
Cardboard is rigid and light--it can glide over a series of mechanical camlike disks, while other, denser materials fall in between the disks. Ferrous metals can be magnetically separated from other materials; nonferrous items, like aluminum, can also be separated magnetically, by inducing eddy currents in them. By the 1990s, hyperspectral imaging, developed by NASA and first launched in a satellite in 1972, was becoming commercially viable and began to show up in the recycling world. Unlike human eyes, which mostly see in combinations of red, green, and blue, hyperspectral sensors divide images into many more spectral bands. The technology's ability to distinguish between different types of plastics changed the game for recyclers, bringing not only optical sensing but computer intelligence into the process. Programmable optical sorters were also developed to separate paper products, distinguishing, say, newspaper from junk mail. So today, much of the sorting is automated. These systems generally sort to 80 to 95 percent purity--that is, 5 to 20 percent of the output shouldn't be there. For the output to be profitable, however, the purity must be higher than 95 percent; below this threshold, the value drops, and often it's worth nothing. So humans manually clean up each of the streams, picking out stray objects before the material is compressed and baled for shipping. Despite all the automated and manual sorting, about 10 to 30 percent of the material that enters the facility ultimately ends up in a landfill. In most cases, more than half of that material is recyclable and worth money but was simply missed. We've pushed the current systems as far as they can go. Only AI can do better. Getting AI into the recycling business means combining pick-and-place robots with accurate real-time object detection.
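The purity economics above lend themselves to a simple check. A minimal sketch (the thresholds mirror the figures in the text; the object counts are hypothetical):

```python
def purity(target_count: int, total_count: int) -> float:
    """Fraction of a sorted output stream that is actually the target material."""
    return target_count / total_count

def bale_is_sellable(p: float, threshold: float = 0.95) -> bool:
    """Below roughly 95 percent purity, a bale's value drops sharply."""
    return p >= threshold

# Hypothetical output: 900 of 1,000 objects in the stream are the target material.
p = purity(900, 1000)           # 0.90 -- within the 80-95 percent typical range
print(bale_is_sellable(p))      # False: manual cleanup is still needed
```

A stream that looks respectable at 90 percent purity is still, economically, an unfinished product.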
Pick-and-place robots combined with computer vision systems are used in manufacturing to grab particular objects, but they generally are just looking repeatedly for a single item, or for a few items of known shapes and under controlled lighting conditions. Recycling, though, involves infinite variability in the kinds, shapes, and orientations of the objects traveling down the conveyor belt, requiring nearly instantaneous identification along with the quick dispatch of a new trajectory to the robot arm.

[Images: AI-based systems guide robotic arms to grab materials from a stream of mixed recyclables and place them in the correct bins. Here, a tandem robot system operates at a Waste Connections recycling facility (top), and a single robot arm (bottom) recovers a piece of corrugated cardboard. The United States does a pretty good job when it comes to cardboard: In 2021, 91.4 percent of discarded cardboard was recycled, according to the American Forest and Paper Association. AMP Robotics]

My company first began using AI in 2016 to extract empty cartons from other recyclables at a facility in Colorado; today, we have systems installed in more than 25 U.S. states and six countries. We weren't the first company to try AI sorting, but it hadn't previously been used commercially. And we have steadily expanded the types of recyclables our systems can recognize and sort. AI makes it theoretically possible to recover all of the recyclables from a mixed-material stream at accuracy approaching 100 percent, entirely based on image analysis. If an AI-based sorting system can see an object, it can accurately sort it. Consider a particularly challenging material for today's recycling sorters: high-density polyethylene (HDPE), a plastic commonly used for detergent bottles and milk jugs.
(In the United States, Europe, and China, HDPE products are labeled as No. 2 recyclables.) In a system that relies on hyperspectral imaging, batches of HDPE tend to be mixed with other plastics and may have paper or plastic labels, making it difficult for the hyperspectral imagers to detect the underlying object's chemical composition. An AI-driven computer-vision system, by contrast, can determine that a bottle is HDPE and not something else by recognizing its packaging. Such a system can also use attributes like color, opacity, and form factor to increase detection accuracy, and even sort by color or specific product, reducing the amount of reprocessing needed. Though the system doesn't attempt to understand the meaning of words on labels, the words are part of an item's visual attributes. We at AMP Robotics have built systems that can do this kind of sorting. In the future, AI systems could also sort by combinations of material and by original use, enabling food-grade materials to be separated from containers that held household cleaners, and paper contaminated with food waste to be separated from clean paper. Training a neural network to detect objects in the recycling stream is not easy. It is at least several orders of magnitude more challenging than recognizing faces in a photograph, because there can be a nearly infinite variety of ways that recyclable materials can be deformed, and the system has to recognize the permutations.

Inside the Sorting Center [Illustrations: Chris Philpot]

Today's recycling facilities use mechanical sorting, optical hyperspectral sorting, and human workers. Here's what typically happens after the recycling truck leaves your house with the contents of your blue bin. Trucks unload on a concrete pad, called the tip floor. A front-end loader scoops up material in bulk and dumps it onto a conveyor belt, typically at a rate of 30 to 60 tonnes per hour. The first stage is the presort.
Human workers remove large or problematic items that shouldn't have made it onto collection trucks in the first place--bicycles, big pieces of plastic film, propane canisters, car transmissions. Sorting machines that rely on optical hyperspectral imaging or human workers separate fiber (office paper, cardboard, magazines--referred to as 2D products, as they are mostly flat) from the remaining plastics and metals. In the case of the optical sorters, cameras stare down at the material rolling down the conveyor belt, detect an object made of the target substance, and then send a message to activate a bank of electronically controllable solenoids to divert the object into a collection bin. The nonfiber materials pass through a mechanical system with densely packed camlike wheels. Large items glide past while small items, like that recyclable fork you thoughtfully deposited in your blue bin, slip through, headed straight for landfill--they are just too small to be sorted. Machines also smash glass, which falls to the bottom and is screened out. The rest of the stream then passes under overhead magnets, which collect items made of ferrous metals, and an eddy-current-inducing machine, which jolts nonferrous metals to another collection area. At this point, mostly plastics remain. More hyperspectral sorters, in series, can pull off plastics one type at a time--like the HDPE of detergent bottles and the PET of water bottles. Finally, whatever is left--between 10 and 30 percent of what came in on the trucks--goes to landfill. In the future, AI-driven robotic sorting systems and AI inspection systems could replace human workers at most points in this process.
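The optical-sorter step described above is, at heart, a timing problem: the system must fire the right solenoids at the moment the detected object reaches the ejector bank. A simplified sketch of that calculation (the belt speed, distance, and latency are made-up illustrative values, not the specs of any real sorter):

```python
def eject_delay_s(belt_speed_m_s: float,
                  camera_to_ejector_m: float,
                  detection_latency_s: float) -> float:
    """Delay between detecting an object and firing the air-jet solenoid.

    The object travels the camera-to-ejector distance at belt speed;
    the detection latency eats into that window.
    """
    travel_time = camera_to_ejector_m / belt_speed_m_s
    delay = travel_time - detection_latency_s
    if delay < 0:
        raise ValueError("belt too fast: object passes the ejector before detection completes")
    return delay

# Illustrative numbers: 3 m/s belt, ejector bank 1.5 m downstream, 50 ms latency.
print(round(eject_delay_s(3.0, 1.5, 0.05), 3))  # 0.45 seconds
```

The same constraint is what makes AI-guided sorting demanding: the faster the belt, the less time the vision system has to identify an object and dispatch a decision.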
In the diagram, red icons indicate where AI-driven robotic systems could replace human workers and a blue icon indicates where an AI auditing system could make a final check on the success of the sorting effort. It's hard enough to train a neural network to identify all the different types of bottles of laundry detergent on the market today, but it's an entirely different challenge when you consider the physical deformations that these objects can undergo by the time they reach a recycling facility. They can be folded, torn, or smashed. Mixed into a stream of other objects, a bottle might have only a corner visible. Fluids or food waste might obscure the material. We train our systems by giving them images of materials belonging to each category, sourced from recycling facilities around the world. My company now has the world's largest data set of recyclable material images for use in machine learning. Using this data, our models learn to identify recyclables in the same way their human counterparts do, by spotting patterns and features that distinguish different materials. We continuously collect random samples from all the facilities that use our systems, and then annotate them, add them to our database, and retrain our neural networks. We also test our networks to find models that perform best on target material and do targeted additional training on materials that our systems have trouble identifying correctly. In general, neural networks are susceptible to learning the wrong thing. Pictures of cows are associated with milk packaging, which is commonly produced as a fiber carton or HDPE container. But milk products can also be packaged in other plastics; for example, single-serving milk bottles may look like the HDPE of gallon jugs but are usually made from an opaque form of the PET (polyethylene terephthalate) used for water bottles. Cows don't always mean fiber or HDPE, in other words. 
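The retraining loop described above, continuous random sampling plus targeted work on troublesome materials, can be sketched as a sampling policy. Everything here (the function name, the notion of a per-class error rate, the split between random and "hard" frames) is illustrative, not AMP's actual pipeline:

```python
import random

def pick_frames_for_annotation(frames, per_class_error, n=100,
                               hard_fraction=0.5, n_worst=3, seed=0):
    """Choose frames to annotate: a random slice of everything, plus extra
    frames predicted as the classes the model currently gets wrong most often.

    frames: list of (frame_id, predicted_class) pairs.
    per_class_error: mapping of class name -> recent error rate.
    """
    rng = random.Random(seed)
    # Classes the model struggles with most get oversampled.
    worst = sorted(per_class_error, key=per_class_error.get, reverse=True)[:n_worst]
    hard = [f for f in frames if f[1] in worst]
    easy = [f for f in frames if f[1] not in worst]
    n_hard = min(int(n * hard_fraction), len(hard))
    picked = rng.sample(hard, n_hard)
    picked += rng.sample(easy, min(n - n_hard, len(easy)))
    return picked
```

Oversampling the failure modes is what turns a passive data stream into targeted additional training.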
There is also the challenge of staying up to date with the continual changes in consumer packaging. Any mechanism that relies on visual observation to learn associations between packaging and material types will need to consume a steady stream of data to ensure that objects are classified accurately. But we can get these systems to work. Right now, our systems do really well on certain categories--more than 98 percent accuracy on aluminum cans--and are getting better at distinguishing nuances like color, opacity, and initial use (spotting those food-grade plastics). Now that AI-based systems are ready to take on your recyclables, how might things change? Certainly, they will boost the use of robotics, which is only minimally used in the recycling industry today. Given the perpetual worker shortage in this dull and dirty business, automation is a path worth taking. AI can also help us understand how well today's sorting processes are doing and how we can improve them. Today, we have a very crude understanding of the operational efficiency of sorting facilities--we weigh trucks on the way in and weigh the output on the way out. No facility can tell you the purity of the products with any certainty; they only audit quality periodically by breaking open random bales. But if you placed an AI-powered vision system over the inputs and outputs of relevant parts of the sorting process, you'd gain a holistic view of what material is flowing where. This level of scrutiny is just beginning in hundreds of facilities around the world, and it should lead to greater efficiency in recycling operations. Being able to digitize the real-time flow of recyclables with precision and consistency also provides opportunities to better understand which recyclable materials are and are not currently being recycled, and to identify gaps that will allow facilities to improve their recycling systems overall.
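An AI audit of the kind described above boils down to counting objects at the infeed and at each output and comparing the flows. A toy version of the bookkeeping (the per-shift counts are hypothetical):

```python
def audit(infeed_counts: dict, output_counts: dict) -> dict:
    """Per-material recovery rate: objects seen in an output stream divided
    by objects of that material seen entering the facility."""
    return {m: output_counts.get(m, 0) / n for m, n in infeed_counts.items()}

# Hypothetical per-shift counts from cameras over the infeed and output belts.
infeed = {"PET": 5000, "HDPE": 2000, "aluminum": 1000}
outputs = {"PET": 4200, "HDPE": 1500, "aluminum": 980}
rates = audit(infeed, outputs)
print(rates["aluminum"])  # 0.98 -- in line with the high accuracy on cans noted above
```

Weighing trucks gives one number per day; object-level counts give a recovery rate per material, per belt, in real time.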
[Video: Sorting robot picking mixed plastics. AMP Robotics]

But to really unleash the power of AI on recycling, we need to rethink the entire sorting process. Today, recycling operations typically whittle down the mixed stream of materials to the target material by removing nontarget material--they do a "negative sort," in other words. With AI vision systems and robotic pickers, we can instead perform a "positive sort": rather than removing nontarget material, we identify each object in the stream and select the target material. To be sure, our recovery rate and purity are only as good as our algorithms. Those numbers continue to improve as our systems gain more experience in the world and our training data set continues to grow. We expect to eventually hit purity and recovery rates of 100 percent. The implications of moving from more mechanical systems to AI are profound. Rather than coarsely sorting to 80 percent purity and then manually cleaning up the stream to 95 percent purity, a facility can reach the target purity on the first pass. And instead of having a unique sorting mechanism handling each type of material, a sorting machine can change targets simply by switching algorithms. The use of AI also means that we can recover materials long ignored for economic reasons. Until now, it was only economically viable for facilities to pursue the most abundant, high-value items in the waste stream. But with machine-learning systems that do positive sorting on a wider variety of materials, we can start to capture a greater diversity of material at little or no overhead to the business. That's good for the planet. We are beginning to see a few AI-based secondary recycling facilities go into operation, with AMP's technology first coming online in Denver in late 2020.
These systems are currently used where material has already passed through a traditional sort, seeking high-value materials that were missed or low-value materials that can be sorted in novel ways and therefore find new markets. Thanks to AI, the industry is beginning to chip away at the mountain of recyclables that ends up in landfills each year--a mountain containing billions of tons of recyclables, representing billions of dollars lost and nonrenewable resources wasted.

This article appears in the July 2022 print issue as "AI Takes a Dumpster Dive."

Jason Calaiaro is director of hardware for AMP Robotics, in Louisville, Colo. Before joining AMP, he founded Marble, now CAT Robotics, where he pioneered robots for last-mile delivery. He also developed aerial transportation drones at Matternet, the first FAA drone airline, and served as chief information officer and director of propulsion at Astrobotic Technology, which is slated to be the first private company to land on the moon in late 2022.
He has a degree in Martian geology and is excellent at playing bagpipes. 24 Jun 2022 3 min read Two small black quadrupedal robots help each other to practice cooking a hamburger in a fake kitchen in a laboratory video fridayrobotics Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. ERF 2022: 28-30 June 2022, ROTTERDAM, NETHERLANDS RoboCup 2022: 11-17 July 2022, BANGKOK IEEE CASE 2022: 20-24 August 2022, MEXICO CITY CLAWAR 2022: 12-14 September 2022, AZORES, PORTUGAL ANA Avatar XPRIZE Finals: 4-5 November 2022, LOS ANGELES CoRL 2022: 14-18 December 2022, AUCKLAND, NEW ZEALAND Enjoy today's videos! The Real Robotics Lab at University of Leeds presents two Chef quadruped robots remotely controlled by a single operator to make a tasty burger as a team. The operator uses a gamepad to control their walking and a wearable motion capture system for manipulation control of the robotic arms mounted on the legged robots. We're told that these particular quadrupeds are vegans, and that the vegan burgers they make are "very delicious." [ Real Robotics ] Thanks Chengxu! Elasto-plastic materials like Play-Doh can be difficult for robots to manipulate. RoboCraft is a system that enables a robot to learn how to shape these materials in just ten minutes. [ MIT ] Thanks, Rachel! State-of-the-art frame interpolation methods generate intermediate frames by inferring object motions in the image from consecutive key-frames. In the absence of additional information, first-order approximations, i.e. optical flow, must be used, but this choice restricts the types of motions that can be modeled, leading to errors in highly dynamic scenarios. Event cameras are novel sensors that address this limitation by providing auxiliary visual information in the blind-time between frames. 
[ ETH Zurich ] Loopy is a robotic swarm of one Degree-of-Freedom (DOF) agents (i.e., a closed-loop made of 36 Dynamixel servos). Each agent (servo) makes its own local decisions based on interactions with its two neighbors. In this video, Loopy is trying to go from an arbitrary initial shape to a goal shape (Flying WV). [ WVU ] A collaboration between Georgia Tech Robotic Musicianship Group and Avshalom Pollak Dance Theatre. The robotic arms respond to the dancers' movement and to the music. Our goal is for both humans and robots to be surprised and inspired by each other. If successful, both humans and robots will be dancing differently than they did before they met. [ Georgia Tech ] Thanks, Gil! Lingkang Zhang wrote in to share a bipedal robot he's working on. It's 70 centimeters tall, runs ROS, can balance and walk, and costs just US $200! [ YouTube ] Thanks, Lingkang! The private-public partnership with NASA and Redwire will demonstrate the ability of a small spacecraft--OSAM-2 (On-Orbit Servicing, Manufacturing and Assembly)--to manufacture and assemble spacecraft components in low-Earth orbit. [ NASA ] Inspired by fireflies, researchers create insect-scale robots that can emit light when they fly, which enables motion tracking and communication. The ability to emit light also brings these microscale robots, which weigh barely more than a paper clip, one step closer to flying on their own outside the lab. These robots are so lightweight that they can't carry sensors, so researchers must track them using bulky infrared cameras that don't work well outdoors. Now, they've shown that they can track the robots precisely using the light they emit and just three smartphone cameras. [ MIT ] Unboxing and getting started with a TurtleBot 4 robotics learning platform with Maddy Thomson, Robotics Demo Designer from Clearpath Robotics. 
[ Clearpath ] We present a new gripper and exploration approach that uses a finger with very low reflected inertia for probing and then grasping objects. The finger employs a transparent transmission, resulting in a light touch when contact occurs. Experiments show that the finger can safely move faster into contacts than industrial parallel jaw grippers or even most force-controlled grippers with backdrivable transmissions. This property allows rapid proprioceptive probing of objects. [ Stanford BDML ] This is very, very water resistant. I'm impressed. [ Unitree ] I have no idea why Pepper is necessary here, but I do love that this ice cream shop is named Quokka. [ Quokka ] Researchers at ETH Zurich have developed a wearable textile exomuscle that serves as an extra layer of muscles. They aim to use it to increase the upper body strength and endurance of people with restricted mobility. [ ETH Zurich ] VISTA is a data-driven, photorealistic simulator for autonomous driving. It can simulate not just live video but LiDAR data and event cameras, and also incorporate other simulated vehicles to model complex driving situations. [ MIT CSAIL ] In the second phase of the ANT project, the hexapod CREX and the quadruped Aliengo are traversing rough terrain to show their terrain adaption capabilities. [ DFKI ] Here are some satisfying food-service robot videos from FOOMA, a trade show in Japan. robitsuto CUTR retasunoXin Ba ki #FOOMAJAPAN2022www.youtube.com densouebuBu Ding Xing &Rou Ruan Wu Qu riXi i #FOOMAJAPAN2022www.youtube.com arutei Fondly Bian Dang Sheng Fu #FOOMAJAPAN2022www.youtube.com [ Kazumichi Moriyama ] Keep Reading |Show less BiomedicalTopicNewsTypeHistory of Technology The Fatal Flaw of the Pulse Oximeter Racial bias led to faulty product design that led to its inability to work properly with melanin-rich skin Rebecca Sohn Rebecca Sohn is a freelance science journalist. Her work has appeared in Live Science, Slate, and Popular Science, among others. 
She has been an intern at STAT and at CalMatters, as well as a science fellow at Mashable. 24 Jun 2022 4 min read A Black hospital patient with a pulse oximeter on their finger iStockphoto biasracial diversitypulse oximetryproduct designmedical technology If someone is seeking medical care, the color of their skin shouldn't matter. But, according to new research, pulse oximeters' performance and accuracy apparently hinges on it. Inaccurate blood-oxygen measurements, in other words, made by pulse oximeters have had clear consequences for people of color during the COVID-19 pandemic. "That device ended up being essentially a gatekeeper for how we treat a lot of these patients," said Dr. Tianshi David Wu, an assistant professor of medicine at Baylor College of Medicine, in Houston, and one of the authors of the study. For decades, scientists have found that pulse oximeters, devices that estimate blood-oxygen saturation, can be affected by a person's skin color. In 2021, the FDA issued a warning about this limitation of pulse oximeters. The agency says it plans to hold a meeting on pulse oximeters later this year. Because low oxygen saturation, called hypoxemia, is a common symptom of COVID-19, low blood-oxygen levels qualify patients to receive certain medications. In the first study to examine this issue among COVID-19 patients, published in JAMA Internal Medicinein May, researchers found that the inaccurate measurements resulted in a "systemic failure," delaying care for many Black and Hispanic patients, and in some cases, preventing them from receiving proper medications. The study adds a growing sense of urgency to an issue raised decades ago. "We found that in Black and Hispanic patients, there was a significant delay in identifying severe COVID compared to white patients." --Dr. Ashraf Fawzy, Johns Hopkins University Pulse oximeters work by passing light through part of the body, usually a finger. 
These devices infer a patient's blood-oxygen saturation (that is, the percentage of hemoglobin carrying oxygen) from the absorption of light by hemoglobin, the pigment in blood that carries oxygen. In theory, pulse oximeters shouldn't be affected by anything other than the levels of oxygen in the blood. But research has shown otherwise. "If you have melanin, which is the pigment that's responsible for skin color...that could potentially affect the transmittance of the light going through the skin," said Govind Rao, a professor of engineering and director of the Center for Advanced Sensor Technology at the University of Maryland, Baltimore County, who was not involved in the study. To examine how patients with COVID-19 were affected by this flaw in pulse oximeters, researchers used data from over 7,000 COVID-19 patients in the Johns Hopkins hospital system, which includes five hospitals, between March 2020 and November 2021. In the first part of the study, researchers compared blood-oxygen saturation for the 1,216 patients who had measurements taken using both a pulse oximeter and arterial blood-gas analysis, which determines the same measure using a direct analysis of blood. The researchers found that the pulse oximeter overestimated blood-oxygen saturation by an average of 1.7 percent for Asian patients, 1.2 percent for Black patients, and 1.1 percent for Hispanic patients. Then, the researchers used these results to create a statistical model to estimate what the arterial blood-gas measurements would be for patients with only pulse-oximeter measurements. Because arterial blood gas requires a needle to be inserted into an artery to collect the blood, most patients only have a pulse-oximeter measurement. To qualify for COVID-19 treatment with remdesivir, an antiviral drug, and dexamethasone, a steroid, patients had to have a blood-oxygen saturation of 94 percent or less. 
Based on the researchers' model, nearly 30 percent of the 6,673 patients about whom they had enough information to predict arterial blood-gas measurements met this cutoff. Many of these patients, most of whom were Black or Hispanic, had their treatment delayed by between 5 and 7 hours, with Black patients delayed on average 1 hour more than white patients. "We found that in Black and Hispanic patients, there was a significant delay in identifying severe COVID compared to white patients," said Dr. Ashraf Fawzy, assistant professor of medicine at Johns Hopkins University and an author of the study. There were 451 patients who never qualified for treatments but who the researchers predicted likely should have; 55 percent were Black, while 27 percent were Hispanic. The study "shows how urgent it is to move away from pulse [oximeters]," said Rao, and to find alternative ways of measuring blood-oxygen saturation. Studies finding that skin color can affect pulse oximeters go back as far as the 1980s. Despite knowledge of the issue, there are few ways of addressing it. Wu says increasing awareness helps, and that it also may be helpful to do more arterial blood-gas analyses. A long-term solution will require changing the technology, either by using a different method entirely or by building devices that can better adjust results to account for differences in skin color. One technological alternative, which Rao's lab is working on, is a device that measures oxygen diffusing across the skin, called transdermal measurement. The researchers said one limitation of their study was that patients' race was self-identified, meaning a wide range of skin pigmentation could be represented in each of the sample groups, depending on how each patient self-identified.
The researchers also did not measure how delaying or denying treatment affected the patients clinically: for instance, how likely they were to die, how sick they were, or how long they were sick. The researchers are currently working on a study examining these additional questions. Although the problem of racial bias in pulse oximeters has no immediate solution, the researchers said, they are confident the primary hurdle is not technological. "We do believe that technology exists to fix this problem, and that would ultimately be the most equitable solution for everybody," said Wu. AI Tool for COVID Monitoring Offers Solution for Urban Congestion Researchers at NYU have developed an AI solution that can leverage public video feeds to better inform decision makers Dexter Johnson is a contributing editor at IEEE Spectrum, with a focus on nanotechnology. 09 Jun 2022 7 min read This is a sponsored article brought to you by NYU's Tandon School of Engineering. In the midst of the COVID-19 pandemic, in 2020, many research groups sought an effective method to determine mobility patterns and crowd densities on the streets of major cities like New York City, to give insight into the effectiveness of stay-at-home and social-distancing strategies. But sending teams of researchers out into the streets to observe and tabulate these numbers would have involved putting those researchers at risk of exposure to the very infection the strategies were meant to curb.
Researchers at New York University's (NYU) Connected Cities for Smart Mobility towards Accessible and Resilient Transportation (C2SMART) Center, a Tier 1 USDOT-funded University Transportation Center, developed a solution that eliminated the risk of infection to researchers, could easily be plugged into existing public traffic-camera infrastructure, and provided the most comprehensive data on crowd and traffic densities ever compiled, data that cannot be easily captured by conventional traffic sensors. To accomplish this, C2SMART researchers leveraged publicly available New York City Department of Transportation (DOT) video feeds, which cover over 700 locations throughout New York City, and applied a deep-learning, camera-based object-detection method that enabled researchers to calculate pedestrian and traffic densities without ever needing to go out onto the streets. "Our idea was to take advantage of these DOT camera feeds and record them so we could better understand social distancing behavior of pedestrians," said Kaan Ozbay, director of C2SMART and professor at NYU. To do this, Ozbay and his team wrote a "crawler," essentially a tool to index the video content automatically, to capture the low-quality images from the video feeds available on the internet. They then used an off-the-shelf deep-learning image-processing algorithm to process each frame of the video and determine what each frame contains: a bus, a car, a pedestrian, a bicycle, and so on. The system also blurs out any identifying images, such as faces, without impacting the effectiveness of the algorithm. The system developed by the NYU team can help inform decision-makers on a wide range of questions, from crisis-management responses such as social-distancing behaviors to traffic congestion. "This allows us to identify what is in the frame to determine the relationship between the objects in that frame," said Ozbay.
"Then, based on a new method we devised that obviates the need for actual in-situ referencing, we're able to accurately measure the distance between people in the frame to see if they are too close to each other, or it's just too crowded." The easy thing would have been to simply count how many people were within each frame. However, as Jingqin Gao, senior research associate at NYU, explained, the team pursued an object-detection method rather than mere enumeration because the public feed is not continuous, with gaps lasting several seconds throughout the feed. "Instead of trying to very accurately count pedestrians crossing a line, we are trying to understand pedestrian density in urban environments, especially for those places that are typically crowded, like bus stops and crosswalks," said Gao. "We wanted to know whether they were changing their behavior amid the pandemic." Gao explained that the aim was to determine pedestrian density and pedestrian social-distancing patterns at scale and see how those patterns have changed since pre-COVID conditions, instead of tracking individual pedestrians. "For instance, we wanted to know if there was a change from pre-COVID, when people were going out in the early morning for commuting purposes, versus during the lockdown, when they might be going out later in the afternoon," she added. "By exploring these different trends, we were trying to better understand if there are new patterns during and after the lockdown." Camera data acquisition and pedestrian detection framework. C2SMART Center/New York University In general, these kinds of short-count studies in traffic engineering cover only a few hours over several days, according to Ozbay. In those studies, people go out and collect data and then process it manually, sometimes even having to count cars by hand.
But this method would be impossible at the scale of C2SMART's work, Ozbay explained; to cover hundreds of locations with 24-hour coverage over many months, the job has to be performed by an artificial intelligence (AI) algorithm instead of human or conventional traffic counters. There are complications the AI has to overcome in each video feed: the locations are different, the camera angles and heights are different, and the feeds are subject to different lighting and positional factors. "It's not like the AI can learn just one intersection and automatically apply it to another one. It needs to learn each intersection individually," added Ozbay. To enable this AI solution, the C2SMART researchers started with an object-detection model, namely You Only Look Once (YOLO), which is pre-trained on Microsoft's COCO data set. Gao explained that they also retrained and localized this object-detection model with additional images and various customized post-processing filters to compensate for the low-resolution images produced by New York City DOT video feeds. Screenshot of the COVID-19 Data Dashboard created as part of the project. C2SMART Center/New York University While the off-the-shelf object-detection model could work in this instance with some customization, when it came to measuring the distances between the objects, the NYU researchers had to develop a novel algorithm, which they refer to as a reference-free distance approximation algorithm. "If you're measuring something from an image, you may need some reference point," said Gao. "Historically, researchers might need to actually go to the site and measure the distance. But with our methodology, we can use the pixel size on the image of the person and the real height of that person to determine distance."
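Gao's description suggests the flavor of a reference-free distance approximation: a detected person's approximate real-world height and their height in pixels yield a meters-per-pixel scale, which converts pixel separations into physical distances. The sketch below is a minimal illustration of that idea, not the team's published algorithm; the assumed 1.7-meter height and the averaging of the two per-person scales are illustrative choices.

```python
import math

# Minimal sketch of a reference-free distance approximation, in the
# spirit of the approach Gao describes. Assumptions (not from the
# published algorithm): a fixed average adult height, and averaging
# the two per-person pixel scales to handle perspective crudely.

ASSUMED_HEIGHT_M = 1.7  # assumed average adult height, in meters

def meters_per_pixel(bbox_height_px):
    """Scale implied by one person: real height divided by pixel height."""
    return ASSUMED_HEIGHT_M / bbox_height_px

def distance_m(centroid_a, centroid_b, height_a_px, height_b_px):
    """Approximate real-world distance (meters) between two detected people."""
    px = math.dist(centroid_a, centroid_b)  # pixel distance between centroids
    scale = (meters_per_pixel(height_a_px) + meters_per_pixel(height_b_px)) / 2
    return px * scale

# Two people 150 px apart; one appears 170 px tall, the other 100 px:
d = distance_m((100, 200), (250, 200), 170, 100)
print(round(d, 3))  # 2.025
```

Averaging the two per-person scales is a crude way to account for perspective (people farther from the camera appear smaller); a production system would need a more careful treatment of camera geometry, which is presumably what the novel algorithm provides.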
While this project was inspired by the COVID-19 pandemic, the fast-moving nature of the disease precluded these findings from significantly impacting New York City's COVID policies. However, the project has produced a COVID-19 Data Dashboard, and a video of how it was developed and operates is provided below. Ozbay explained that the project demonstrated to several city agencies that they were sitting on very valuable, actionable data that could be used for many different purposes. "City agencies have approached us on several projects that are related to this one, but in a different context," said Ozbay. "Now we are working with New York's Department of Design and Construction (DDC) and the DOT to use the same kind of approach to analyze traffic around work zones and other key facilities, such as intersections and on-street parking, without them needing to actually go out to those locations." Ozbay notes that this initial COVID-19 project has opened up possibilities for this kind of AI-driven analysis of video-feed data to be applied to a wide range of projects, providing critical understanding in a more efficient way. Potential applications of AI-powered object detection using city cameras. C2SMART Center/New York University Ozbay believes that much of the process NYU has developed can be handled internally by city agencies' own IT experts; for example, they should be able to handle the acquisition and storage of the data. But on the AI side, Ozbay believes, they will likely need to lean on experts in academia or industry, since AI remains in a near-constant state of development. "This solution will never become like Microsoft Word," said Ozbay. "It will always require some improvements and some changes and tweaking for the foreseeable future."
Gao, who worked for the DOT before taking on her current role, added that there's always a steady stream of commercial entities offering the DOT their product suites. "These commercial solutions frequently recommend buying and installing new cameras," she said. "What we have demonstrated here is that we can provide a solution based on current infrastructure." Based on his experience working with other cities and states throughout his career, Ozbay noted that most cities in the United States employ traffic-camera systems similar to those used in New York City. "This method allows cities throughout the country to get dual or triple usage out of their existing infrastructure," said Ozbay. "There are a lot of opportunities to do this at a large scale for extended periods with little to no infrastructure cost." Example of other smart-city use cases using this research framework: (a) detecting parking occupancy; (b) monitoring bus-lane usage; (c) identifying illegal parking/double parking; (d) tracking and counting vehicles; and (e) using pedestrian-density information at bus stops to assess transit demand. C2SMART Center/New York University Ozbay hopes the success of the technology will lead other DOTs across the country to learn of the technology and take an interest in adopting it themselves. "If you can make it in New York, you can make it anywhere," he quipped. "We'll be happy to share with them our code and anything that may be of value to them from our experience." While the final product of this research may change the way traffic information is collected and used, it has also served as an important training tool for NYU students--not just postdoctoral researchers, but two undergraduate students at NYU's Tandon School of Engineering as well.
"Our aim as an engineering school is not just to write papers, but to develop products that can be commercialized, and also to train the next generation of engineers on real projects where they can see how engineering contributes to and can help improve society," said Ozbay. Gao and Ozbay added that the two undergraduate students who worked on this project for two years are going on to graduate school to pursue research along the lines of this project. "These students come to us without much knowledge, they become exposed to different research, and we let them pick what they are interested in. We train them very slowly," said Ozbay. "If they remain interested, they eventually become part of our research team." In future research, Ozbay envisions their work moving from object recognition alone to building trajectories from these video feeds. If they are successful in this goal, Ozbay believes it has huge implications for applications like real-time traffic safety, an emerging area of research in which C2SMART is a major player. He added: "With trajectory building we can see the movement of vehicles in relation to each other as well as to pedestrians. This will not only help us identify risks in real time but also establish and implement measures to mitigate those risks using advanced versions of methods we have already developed in the past." This research utilizes real-time public traffic-camera information, which is streamed by the New York City Department of Transportation (DOT) through a publicly accessible website at https://nyctmc.org/. Additional offline video data was provided by New York City DOT and the New York City Department of Design and Construction (DDC) under the Memorandum of Understanding ("MOU") between the City of New York, acting by and through DDC and DOT, and C2SMART, a center within New York University.