OpenAI Quietly Deletes Ban on Using ChatGPT for "Military and Warfare"

The Pentagon has its eye on the leading AI company, which this week softened its ban on military use.

Sam Biddle
January 12, 2024, 2:07 p.m.

OpenAI logo displayed on a mobile phone screen in front of a computer screen on Sept. 5, 2023, in Ankara, Turkey. Photo: Didem Mente/Anadolu Agency via Getty Images

OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used.

Up until January 10, OpenAI's "usage policies" page included a ban on "activity that has high risk of physical harm, including," specifically, "weapons development" and "military and warfare." That plainly worded prohibition against military applications would seemingly rule out any official, and extremely lucrative, use by the Department of Defense or any other state military. The new policy retains an injunction not to "use our service to harm yourself or others" and gives "develop or use weapons" as an example, but the blanket ban on "military and warfare" use has vanished.

The unannounced redaction is part of a major rewrite of the policy page, which the company said was intended to make the document "clearer" and "more readable," and which includes many other substantial language and formatting changes.

"We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs," OpenAI spokesperson Niko Felix said in an email to The Intercept. "A principle like 'Don't harm others' is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples."
Felix declined to say whether the vaguer "harm" ban encompassed all military use, writing, "Any use of our technology, including by the military, to '[develop] or [use] weapons, [injure] others or [destroy] property, or [engage] in unauthorized activities that violate the security of any service or system,' is disallowed."

"OpenAI is well aware of the risk and harms that may arise due to the use of their technology and services in military applications," said Heidy Khlaaf, engineering director at the cybersecurity firm Trail of Bits and an expert on machine learning and autonomous systems safety, citing a 2022 paper she co-authored with OpenAI researchers that specifically flagged the risk of military use. Khlaaf added that the new policy seems to emphasize legality over safety. "There is a distinct difference between the two policies, as the former clearly outlines that weapons development, and military and warfare is disallowed, while the latter emphasizes flexibility and compliance with the law," she said. "Developing weapons, and carrying out activities related to military and warfare is lawful to various extents. The potential implications for AI safety are significant. Given the well-known instances of bias and hallucination present within Large Language Models (LLMs), and their overall lack of accuracy, their use within military warfare can only lead to imprecise and biased operations that are likely to exacerbate harm and civilian casualties."

The real-world consequences of the policy are unclear. Last year, The Intercept reported that OpenAI was unwilling to say whether it would enforce its own clear "military and warfare" ban in the face of increasing interest from the Pentagon and U.S. intelligence community.

"Given the use of AI systems in the targeting of civilians in Gaza, it's a notable moment to make the decision to remove the words 'military and warfare' from OpenAI's permissible use policy," said Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission. "The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement."

While nothing OpenAI offers today could plausibly be used to directly kill someone, militarily or otherwise -- ChatGPT can't maneuver a drone or fire a missile -- any military is in the business of killing, or at least maintaining the capacity to kill. There are any number of killing-adjacent tasks that an LLM like ChatGPT could augment, like writing code or processing procurement orders. A review of custom ChatGPT-powered bots offered by OpenAI suggests U.S. military personnel are already using the technology to expedite paperwork. The National Geospatial-Intelligence Agency, which directly aids U.S. combat efforts, has openly speculated about using ChatGPT to aid its human analysts. Even if OpenAI tools were deployed by portions of a military force for purposes that aren't directly violent, they would still be aiding an institution whose main purpose is lethality. Experts who reviewed the policy changes at The Intercept's request said OpenAI appears to be silently weakening its stance against doing business with militaries.
"I could imagine that the shift away from 'military and warfare' to 'weapons' leaves open a space for OpenAI to support operational infrastructures as long as the application doesn't directly involve weapons development narrowly defined," said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. "Of course, I think the idea that you can contribute to warfighting platforms while claiming not to be involved in the development or use of weapons would be disingenuous, removing the weapon from the sociotechnical system - including command and control infrastructures - of which it's part." Suchman, a scholar of artificial intelligence since the 1970s and member of the International Committee for Robot Arms Control, added, "It seems plausible that the new policy document evades the question of military contracting and warfighting operations by focusing specifically on weapons." Suchman and Myers West both pointed to OpenAI's close partnership with Microsoft, a major defense contractor, which has invested $13 billion in the LLM maker to date and resells the company's software tools. [GettyImages-1240599370-AI-gamechanger-defense-department-budg] Related Pentagon's Budget Is So Bloated That It Needs an AI Program to Navigate It The changes come as militaries around the world are eager to incorporate machine learning techniques to gain an advantage; the Pentagon is still tentatively exploring how it might use ChatGPT or other large-language models, a type of software tool that can rapidly and dextrously generate sophisticated text outputs. LLMs are trained on giant volumes of books, articles, and other web data in order to approximate human responses to user prompts. Though the outputs of an LLM like ChatGPT are often extremely convincing, they are optimized for coherence rather than a firm grasp on reality and often suffer from so-called hallucinations that make accuracy and factuality a problem. Still, the ability of LLMs to quickly ingest text and rapidly output analysis -- or at least the simulacrum of analysis -- makes them a natural fit for the data-laden Defense Department. While some within U.S. military leadership have expressed concern about the tendency of LLMs to insert glaring factual errors or other distortions, as well as security risks that might come with using ChatGPT to analyze classified or otherwise sensitive data, the Pentagon remains generally eager to adopt artificial intelligence tools. In a November address, Deputy Secretary of Defense Kathleen Hicks stated that AI is "a key part of the comprehensive, warfighter-centric approach to innovation that Secretary [Lloyd] Austin and I have been driving from Day 1," though she cautioned that most current offerings "aren't yet technically mature enough to comply with our ethical AI principles." Last year, Kimberly Sablon, the Pentagon's principal director for trusted AI and autonomy, told a conference in Hawaii that "[t]here's a lot of good there in terms of how we can utilize large-language models like [ChatGPT] to disrupt critical functions across the department." Contact the author: Head shot of Sam Biddle Sam Biddle sam.biddle@​theintercept.com +1 978 261 7389 on Signal @sambiddle.bsky.social on Bluesky @samfbiddle on X Related US Secretary of Defense Lloyd Austin is seen on a monitor while testifying during a House Appropriations Subcommittee on Defense hearing regarding the 2023 budget request for the Department of Defense, on Capitol Hill in Washington, DC, on May 11, 2022. 