https://www.popsci.com/technology/ai-warning-critics/

Big Tech's latest AI doomsday warning might be more of the same hype

On Tuesday, a group including AI's leading minds proclaimed that we are facing an 'extinction crisis.'

By Andrew Paul | Published May 31, 2023 10:00 AM EDT

[Image: Critics say current harms of AI include amplifying algorithmic harm, profiting from exploited labor and stolen data, and fueling climate collapse with resource consumption. Photo by Jaap Arriens/NurPhoto via Getty Images]

Over 350 AI researchers, ethicists, engineers, and company executives co-signed a 22-word, single-sentence statement about artificial intelligence's potential existential risks to humanity. The statement, compiled by the nonprofit Center for AI Safety, has signatories including the "Godfather of AI" Geoffrey Hinton, OpenAI CEO Sam Altman, and Microsoft Chief Technology Officer Kevin Scott, who agree that, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The 22-word missive and its endorsements echo a similar, slightly lengthier joint letter released earlier this year calling for a six-month "moratorium" on developing AI more powerful than OpenAI's GPT-4. That moratorium has yet to materialize.

[Related: There's a glaring issue with the AI moratorium letter.]

Speaking with The New York Times on Tuesday, Center for AI Safety executive director Dan Hendrycks described the open letter as a "coming out" for some industry leaders. "There's a very common misconception, even in the AI community, that there only are a handful of doomers.
But, in fact, many people privately would express concerns about these things," Hendrycks added.

Critics, however, remain wary of both the motivations behind such public statements and their feasibility. "Don't be fooled: it's self-serving hype disguised as raising the alarm," says Dylan Baker, a research engineer at the Distributed AI Research Institute (DAIR), an organization promoting ethical AI development. Speaking with PopSci, Baker went on to argue that the current discussions of hypothetical existential risks distract the public and regulators from "the concrete harms of AI today." Such harms include "amplifying algorithmic harm, profiting from exploited labor and stolen data, and fueling climate collapse with resource consumption."

In a separate response first published by DAIR following March's open letter and re-upped on Tuesday, the group argues, "The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices."

Hendrycks, however, believes that "just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well." He likened the moment to atomic scientists warning the world about the technologies they had created, then quoted J. Robert Oppenheimer: "We knew the world would not be the same."

[Related: OpenAI's newest ChatGPT update can still spread conspiracy theories.]

"They are essentially saying 'hold me back!'" media and tech theorist Douglas Rushkoff wrote in an essay published on Tuesday. He added that a combination of "hype, ill-will, marketing, and paranoia" is fueling AI coverage, hiding the technology's very real, demonstrable issues while companies attempt to consolidate their holds on the industry. "It's just a form of bluffing," he wrote. "Sorry, but I'm just not buying it."

In a separate email to PopSci, Rushkoff summarized his thoughts: "If I had to make a quote proportionately short to their proclamation, I'd just say: They mean well. Most of them."

Andrew Paul is Popular Science's staff writer covering tech news. Previously, he was a regular contributor to The A.V. Club and Input, and has had recent work featured by Rolling Stone, Fangoria, GQ, Slate, NBC, and McSweeney's Internet Tendency. He lives outside Indianapolis.