A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse

A new wave of "reasoning" systems from companies like OpenAI is producing incorrect information more often. Even the companies don't know why.

[Image] Credit: Erik Carter

By Cade Metz and Karen Weise

Cade Metz reported from San Francisco, and Karen Weise from Seattle.

May 5, 2025

Last month, an A.I. bot that handles tech support for Cursor, an up-and-coming tool for computer programmers, alerted several customers about a change in company policy. It said they were no longer allowed to use Cursor on more than one computer.

In angry posts to internet message boards, the customers complained. Some canceled their Cursor accounts. And some got even angrier when they realized what had happened: The A.I. bot had announced a policy change that did not exist.

"We have no such policy. You're of course free to use Cursor on multiple machines," the company's chief executive and co-founder, Michael Truell, wrote in a Reddit post. "Unfortunately, this is an incorrect response from a front-line A.I. support bot."

More than two years after the arrival of ChatGPT, tech companies, office workers and everyday consumers are using A.I. bots for an increasingly wide array of tasks. But there is still no way of ensuring that these systems produce accurate information.

The newest and most powerful technologies -- so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek -- are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.

Today's A.I. bots are based on complex mathematical systems that learn their skills by analyzing enormous amounts of digital data. They do not -- and cannot -- decide what is true and what is false. Sometimes, they just make stuff up, a phenomenon some A.I. researchers call hallucinations. On one test, the hallucination rates of newer A.I. systems were as high as 79 percent.