https://www.nature.com/articles/d41586-023-00056-7

NEWS · 12 January 2023

Abstracts written by ChatGPT fool scientists

Researchers cannot always differentiate between AI-generated and original abstracts.

Holly Else

[Image: the ChatGPT webpage, seen on OpenAI's website on a computer monitor.] Scientists and publishing specialists are concerned that the increasing sophistication of chatbots could undermine research integrity and accuracy. Credit: Ted Hsu/Alamy

An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December^1. Researchers are divided over the implications for science. "I am very worried," says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. "If we're now in a situation where the experts are not able to determine what's true or not, we lose the middleman that we desperately need to guide us through complicated topics," she adds. The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts.
It is a 'large language model', a system based on neural networks that learn to perform a task by digesting huge amounts of existing human-generated text. Software company OpenAI, based in San Francisco, California, released the tool on 30 November, and it is free to use. Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text.

Scientists have published a preprint^2 and an editorial^3 written by ChatGPT. Now, a group led by Catherine Gao at Northwestern University in Chicago, Illinois, has used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them.

The researchers asked the chatbot to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and they asked a group of medical researchers to spot the fabricated abstracts.

Under the radar

The ChatGPT-generated abstracts sailed through the plagiarism checker: the median originality score was 100%, which indicates that no plagiarism was detected. The AI-output detector spotted 66% of the generated abstracts. But the human reviewers didn't do much better: they correctly identified only 68% of the generated abstracts and 86% of the genuine abstracts. They incorrectly identified 32% of the generated abstracts as being real and 14% of the genuine abstracts as being generated.

"ChatGPT writes believable scientific abstracts," say Gao and colleagues in the preprint. "The boundaries of ethical and acceptable use of large language models to help scientific writing remain to be determined." Wachter says that, if scientists can't determine whether research is true, there could be "dire consequences".
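To get a sense of what those percentages mean in absolute terms, the reported rates can be converted into counts. A minimal sketch, assuming (purely for illustration; the article does not give the reviewers' exact sample split) a balanced set of 50 generated and 50 genuine abstracts:

```python
# Human-reviewer detection rates reported in the story.
sensitivity = 0.68  # generated abstracts correctly flagged as AI-written
specificity = 0.86  # genuine abstracts correctly identified as real

# Illustrative assumption (not from the article): an even 50/50 split.
n_generated, n_genuine = 50, 50

true_positives = round(n_generated * sensitivity)   # generated abstracts caught
false_negatives = n_generated - true_positives      # fakes that slip through
true_negatives = round(n_genuine * specificity)     # genuine abstracts passed
false_positives = n_genuine - true_negatives        # real work wrongly flagged

accuracy = (true_positives + true_negatives) / (n_generated + n_genuine)
print(f"missed fakes: {false_negatives}, wrongly flagged: {false_positives}, "
      f"overall accuracy: {accuracy:.0%}")
```

Under these assumed numbers, 16 of 50 fabricated abstracts would pass as real while 7 genuine ones would be wrongly flagged, which is the trade-off that worries the editors quoted below.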
Fabricated research is problematic for researchers, who could be drawn down flawed routes of investigation, and there are "implications for society at large because scientific research plays such a huge role in our society". For example, it could mean that research-informed policy decisions are incorrect, she adds.

But Arvind Narayanan, a computer scientist at Princeton University in New Jersey, says: "It is unlikely that any serious scientist will use ChatGPT to generate abstracts." He adds that whether generated abstracts can be detected is "irrelevant". "The question is whether the tool can generate an abstract that is accurate and compelling. It can't, and so the upside of using ChatGPT is minuscule, and the downside is significant," he says.

Irene Solaiman, who researches the social impact of AI at Hugging Face, an AI company with headquarters in New York and Paris, has fears about any reliance on large language models for scientific thinking. "These models are trained on past information, and social and scientific progress can often come from thinking, or being open to thinking, differently from the past," she adds.

The authors suggest that those evaluating scientific communications, such as research papers and conference proceedings, should put policies in place to stamp out the use of AI-generated texts. If institutions choose to allow use of the technology in certain cases, they should establish clear rules around disclosure. Earlier this month, the Fortieth International Conference on Machine Learning, a large AI conference that will be held in Honolulu, Hawaii, in July, announced that it has banned papers written by ChatGPT and other AI language tools. Solaiman adds that in fields where fake information can endanger people's safety, such as medicine, journals may have to take a more rigorous approach to verifying information as accurate.
Narayanan says that the solutions to these issues should not focus on the chatbot itself, "but rather the perverse incentives that lead to this behaviour, such as universities conducting hiring and promotion reviews by counting papers with no regard to their quality or impact".

doi: https://doi.org/10.1038/d41586-023-00056-7

References

1. Gao, C. A. et al. Preprint at bioRxiv https://doi.org/10.1101/2022.12.23.521610 (2022).
2. Blanco-Gonzalez, A. et al. Preprint at arXiv https://doi.org/10.48550/arXiv.2212.08104 (2022).
3. O'Connor, S. & ChatGPT. Nurse Educ. Pract. 66, 103537 (2023).