Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown [1]

Felix M. Simon (Oxford Internet Institute, University of Oxford), Sacha Altay (Department of Political Science, University of Zurich), Hugo Mercier (Institut Jean Nicod, Département d'études cognitives, ENS)

Date: October 18, 2023

Many observers of the current explosion of generative AI worry about its impact on our information environment, with concerns being raised about the increased quantity, quality, and personalization of misinformation. We assess these arguments with evidence from communication studies, cognitive science, and political science. We argue that current concerns about the effects of generative AI on the misinformation landscape are overblown.

Introduction

Recent progress in generative AI has led to concerns that it will "trigger the next misinformation nightmare" (Gold & Fischer, 2023), that people "will not be able to know what is true anymore" (Metz, 2023), and that we are facing a "tech-enabled Armageddon" (Scott, 2023). Generative AI systems are capable of generating new forms of data by applying machine learning to large quantities of training data. This new data can include text (such as Google's Bard, Meta's LLaMa, or OpenAI's ChatGPT), visuals (such as Stable Diffusion or OpenAI's DALL-E), or audio (such as Microsoft's VALL-E). The output that these systems can produce at great speed and with great ease for most users is, depending on the instructions, sufficiently sophisticated that humans can perceive it as indistinguishable from human-generated content (Groh et al., 2022).

According to various voices, including some leading AI researchers, generative AI will make it easier to create realistic but false or misleading content at scale, with potentially catastrophic outcomes for people's beliefs and behaviors, the public arena of information, and democracy. These concerns can be divided into four categories (Table 1).

1. Increased quantity of misinformation
Claim: Due to the ease of access and use, generative AIs can be used to create mis-/disinformation at scale at little to no cost to individuals and organized actors.
Presumed effect: Increased quantity of misinformation allows ill-intentioned actors to "flood the zone" with incorrect or misleading information, thus drowning out factual content and/or sowing confusion.
Sources: Bell (2023), Fried (2023), Hsu & Thompson (2023), Marcus (2023), Pasternack (2023), Ordonez et al. (2023), Tucker (2023), Zagni & Canetta (2023)

2. Increased quality of misinformation
Claim: Due to their technical capabilities and ease of use, generative AIs can be used to create higher-quality misinformation.
Presumed effect: Increased quality of misinformation leads to increased persuasive potential, as it creates content that is more plausible and harder to debunk or verify. This would either allow for the spread of false information or contribute (together with the increased quantity of misinformation) to an epistemic crisis, a general loss of trust in all types of news.
Sources: Epstein & Hertzmann (2023), Fried (2023), Goldstein et al. (2023), Hsu & Thompson (2023), Pasternack (2023), Tiku (2022), Tucker (2023)

3. Increased personalization of misinformation
Claim: Due to their technical capabilities and ease of use, generative AIs can be used to create high-quality misinformation personalized to a user's tastes and preferences.
Presumed effect: Increased persuasion of consumers of misinformation, with the same outcomes as above.
Sources: Benson (2023), Fried (2023), Hsu & Thompson (2023), Pasternack (2023)

4. Involuntary generation of plausible but false information
Claim: Generative AIs can generate useful content (e.g., chatbots generating code). However, they can also generate plausible-looking information that is entirely inaccurate. Without intending to, users could thus generate misinformation, which could potentially spread.
Presumed effect: Misinforming users of generative AI and potentially those with whom they share the information.
Sources: Fried (2023), Gold & Fischer (2023), Ordonez et al. (2023), Pasternack (2023), Shah & Bender (2023), Zagni & Canetta (2023)

Table 1. Four arguments for why we should worry about the impact of generative AI on misinformation, from recent scientific papers, news articles, and social media.

We review the first three arguments—quantity, quality, and personalization—in turn, arguing that they are at the moment speculative, that existing research suggests at best modest effects of generative AI on the misinformation landscape, and that current concerns about those effects are therefore overblown. We do not address the fourth argument in detail here, as it lies beyond the scope of this commentary and is too dependent on the constantly evolving versions of the generative AI tools under discussion and on the context of use (e.g., GPT-4 avoids many of the mistakes made by GPT-2 or -3). People with low online literacy will likely be misled by some of these tools, but we do not see a clear route for the mistakes that generative AI makes, in the hands of well-intentioned users, to spread and create a significant risk for society. Similarly, the risk of factually inaccurate information accidentally appearing in news content as news media increasingly make use of generative AI (Hanley & Durumeric, 2023) will plausibly be curtailed by publishers' efforts to control the use of the technology in news production and distribution (Arguedas & Simon, 2023; Becker et al., 2023), although failure on the part of publishers to implement such measures, or a fundamental lack of awareness of the issue, remains a concern.

Increased quantity of misinformation

Generative AI makes it easier to create misinformation, which could increase the supply of misinformation. However, a larger supply of misinformation does not mean that people will necessarily consume more of it. Instead, we argue here that the consumption of misinformation is mostly limited by demand and not by supply. Increases in the supply of misinformation should only increase the diffusion of misinformation if there is currently an unmet demand and/or a limited supply of misinformation. Neither possibility is supported by evidence. Regarding limited supply, the already low costs of misinformation production and access, and the large number of misinformation posts that currently exist but go unnoticed, mean that generative AI has very little room to operate.
Regarding unmet demand, given the creativity humans have showcased throughout history to make up (false) stories, and the freedom that humans already have to create and spread misinformation across the world, it is unlikely that a large part of the population is looking for misinformation they cannot find online or offline. Moreover, as we argue below, demand for misinformation is relatively easy to meet because the particular content of misinformation is less important than the broad narrative it supports.

In absolute terms, misinformation already abounds online and, unlike high-quality news or scientific articles, it is rarely behind paywalls. Yet, despite the quantity and accessibility of misinformation, the average internet user consumes very little of it (e.g., Allen et al., 2020; for a review, see Acerbi et al., 2022). Instead, misinformation consumption is heavily concentrated in a small portion of very active and vocal users (Grinberg et al., 2019). What makes misinformation consumers special is not that they have privileged access to misinformation but traits that make them more likely to seek out misinformation (Broniatowski et al., 2023; Motta et al., 2023), such as having low trust in institutions or being strong partisans (Osmundsen et al., 2021). Experts on misinformation view partisanship and identity as key determinants of misinformation belief and sharing, while they believe lack of access to reliable information plays only a negligible role (Altay et al., 2023). The problem is not that people do not have access to high-quality information but that they reject high-quality information and favor misinformation. Similarly, conspiracy theories exist everywhere and are easily accessible online across the globe. Yet, despite similarities in supply, demand for conspiracy theories varies across countries, such that in more corrupt countries, conspiracy theories are more popular (Alper, 2023; Cordonier & Cafiero, 2023).

Finally, (mis)information, on its own, has no causal effect on the world. (Mis)information only gains causal powers when humans see it. Yet, the number of things that go viral on the internet and get seen is finite because our attention is finite (Jungherr & Schroeder, 2021a; Taylor, 2014). And since generative AI is unlikely to increase demand for misinformation and will not increase the number of things humans can pay attention to, the increase in misinformation supply will likely have limited influence on the diffusion of misinformation.

Increased quality of misinformation

Another argument suggesting that generative AI poses a significant threat to the public arena is that it can help create misinformation that is more persuasive than that created by current means. For instance, just as generative AI can produce text in the style of a limerick or of a particular author, it could create content that looks more reliable, professional, scientific, and accurate (by using sophisticated words, the appropriate tone, scientific-looking references, etc.). Experimental studies have shown that the credibility of online sources can be affected by such features (e.g., Metzger, 2007), lending support to the argument. However, there are at least four reasons why this may not be a significant cause for concern.

First, it seems that it would already be relatively easy for producers of misinformation to increase the perceived reliability of their content.
In the realm of visuals, Photoshop has long afforded bad actors the ability to make an artificially created image look real (Kapoor & Narayanan, 2023). If they opt not to do so, it might be because making a text or image look more reliable or real can conflict with other goals, such as making that content more accessible, appealing, or seemingly authentic. It's not clear that generative AIs could increase content quality along multiple dimensions at once, as there might be some unavoidable tradeoffs.

Second, even if generative AI managed to increase the overall appeal of misinformation, most people are simply not exposed, or only very marginally exposed, to misinformation (see above and Acerbi et al., 2022). Instead, most people overwhelmingly consume content from mainstream sources, typically the same handful of popular media outlets (Altay et al., 2022; Guess et al., 2021). As a result, any increase in the quality of misleading content would be largely invisible to most of the public.

Third, generative AI could also help increase the quality of reliable news sources—for instance, by facilitating the work of journalists in some areas. Given the rarity of misleading content compared to reliable content, the increase in the appeal of misinformation would have to be 20 to 100 times larger than the increase in the appeal of reliable content for the effects of generative AI to tip the scales in favor of misinformation (Acerbi et al., 2022). We are not aware of an argument that would suggest such a massive imbalance in the effects of generative AI on misinformation and on reliable information.
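To make the arithmetic behind this threshold concrete, here is a minimal back-of-envelope sketch; the exposure shares used below are illustrative assumptions broadly consistent with the exposure estimates cited above, not figures taken from Acerbi et al. (2022). Writing p_m and p_r for the shares of misinformation and reliable content in the average news diet, and Δa_m and Δa_r for the respective increases in appeal attributable to generative AI, the balance of total persuasive impact tips toward misinformation only if

\[
p_m \, \Delta a_m > p_r \, \Delta a_r
\quad\Longleftrightarrow\quad
\frac{\Delta a_m}{\Delta a_r} > \frac{p_r}{p_m} \approx \frac{0.95\text{--}0.99}{0.01\text{--}0.05} \approx 20\text{--}100.
\]

In words: if misinformation makes up only about 1% to 5% of what people see, its per-item gain in appeal must be roughly 20 to 100 times larger than the gain for reliable content before the overall effect favors misinformation.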
Fourth, it has been argued that generative AI provides actors with a new argument for plausible deniability in what has been called "the liar's dividend"—where the availability of a technology creating high-quality content can be used to dismiss incriminating evidence as fake (Christopher, 2023). However, while there is limited evidence that such a strategy can have some effect, for example for politicians (Schiff et al., 2023), this possibility does not hinge on the technology itself. As mentioned above, technology that enables creating plausible fake content has been available for (at least) decades, and it has already been used in attempts to discredit evidence. Arguably, the major factor determining the effectiveness of such attempts is not the plausibility of a given technology being able to generate some content but other factors, such as partisanship or people's preexisting trust in the individual attempting to discredit the evidence (Ecker et al., 2022).

Increased personalization of misinformation

The final argument is that generative AI will make it easier to create personalized misinformation that plays to users' beliefs and preferences, and to micro-target users with it, thus making it easier to persuade or mislead them. Looking at the abilities of generative AI to mimic a variety of styles and personalities, this certainly seems plausible. However, there are also problems with this argument.

First, the technological infrastructures that enable micro-targeting of users with content are not directly impacted by improvements in generative AI (although they might improve through advances in AI more broadly). As a result, generative AI should not affect the efficiency of the infrastructure by which content reaches individuals. The cost of reaching people with misinformation, rather than the cost of creating it, remains a bottleneck (see also Kapoor & Narayanan, 2023). In addition, the evidence suggests that micro-targeting by, for example, political actors has mostly limited persuasive effects on the majority of recipients (Jungherr et al., 2020; Simon, 2019), not least because many people do not pay attention to these messages in the first place (Kahloon & Ramani, 2023).

Still, generative AI might be able to improve on the content of already targeted misinformation, making it more suited to its target. However, the evidence on the effectiveness of political advertising personalized to, for instance, recipients with different personalities is mixed, with at best small and context-dependent effects (Zarouali et al., 2022). In addition, the assumption that generative AI will be able to create more personalized and thus more persuasive content is so far unproven. Current generative AIs are trained at intervals on a large general corpus of data, aided by approaches such as reinforcement learning from human feedback (RLHF) or retrieval-augmented generation (RAG), where an information retrieval system provides more up-to-date data when an LLM produces output. However, LLMs are currently unable to represent the full range of users' preferences and values (Kirk et al., 2023), and do not hold direct information about users themselves, which severely limits their ability to create truly personalized content that would match an individual's preferences (Newport, 2023). Even with advances on these fronts, current evidence suggests that the persuasive effects of microtargeting—including with LLMs—are often limited and highly context-dependent (Hackenburg & Margetts, 2023; Tappin et al., 2023). In general, the effects of political advertising are small and will likely remain so, regardless of whether they are (micro)targeted or not, because persuasion is difficult (Coppock, 2023; Mercier, 2020).

Conclusion

We have argued that concerns over the effects of generative AI on the information landscape—and, in particular, the spread of misinformation—are overblown. These concerns are part of an old and broad family of moral panics surrounding new technologies (Jungherr & Schroeder, 2021b; Orben, 2020; Simon & Camargo, 2021). When it comes to new information technologies, such panics might be based on the mistaken assumption that people are gullible, driven in part by the third-person effect (Altay & Acerbi, 2023; Mercier, 2020). These concerns also tend to overlook the fact that we already owe our current information environment to a complex web of institutions that has allowed the media to provide broadly accurate information, and the public, in turn, to trust much of the information communicated by the media. These institutions have already evolved to accommodate new media, such as film and photography (Habgood-Coote, 2023; Jurgenson, 2019), even though it has always been possible to manipulate these media. Moreover, it's far from clear that the more technologically complex forms of manipulation are the most effective: nowadays, journalists and fact checkers struggle not so much with deepfakes as with visuals taken out of context or with crude manipulations, such as the cropping of images or so-called "cheapfakes" (Brennen et al., 2020; Kapoor & Narayanan, 2023; Paris & Donovan, 2019; Weikmann & Lecheler, 2023).

A limitation of our argument is that we mostly rely on evidence about the media environment of wealthy, democratic countries with rich and competitive media ecosystems.
Less data is available for other countries, and we cannot rule out that generative AI might have a larger negative effect there (although, arguably, generative AI could also have a larger positive effect in these countries).

We are not arguing that nothing needs to be done to regulate or address generative AI. If misinformation is so rare in the information environment of wealthy, democratic countries, it is thanks to the hard work of professionals—journalists, fact checkers, experts, etc.—and to the norms and know-how that have developed over time in these professions (e.g., Paris & Donovan, 2019; Silverman, 2014). Strengthening these institutions and trust in reliable news overall (Acerbi et al., 2022) will likely be pivotal. Journalists, fact checkers, authorities, and human rights advocates will also face new challenges, and they will have to develop new norms and practices to cope with generative AI. This includes, for instance, norms and know-how related to disclosure, "fingerprinting" of content, and the establishment of provenance mechanisms. Digital and media literacy education could also help mitigate issues arising from AI-generated misinformation (e.g., Doss et al., 2023).

Time will tell whether alarmist headlines about generative AI were warranted or not, but regardless of the outcome, the discussion of the impact of generative AI on misinformation would benefit from being more nuanced and evidence-based, especially against the backdrop of ongoing regulatory efforts. The Council of Europe (2019), for example, stresses that the exercise and enjoyment of individual human rights and "the dignity of all humans as independent moral agents" should be protected from underexplored forms of algorithmic (aided) persuasion which "may have significant effects on the cognitive autonomy of individuals and their right to form opinions and take independent decisions" (paragraph 9). Elsewhere, the upcoming EU AI Act will likely include stipulations regarding how the issues discussed here should best be addressed, while the United States has released a blueprint for an "AI Bill of Rights" and is trying to regulate the use of AI in a fraught political environment. In addition, there are efforts by non-governmental organizations such as Reporters Without Borders to draft guidelines that "safeguard information integrity" amid the growing use of AI for information production, dissemination, retrieval, and consumption. Yet, while such efforts are laudable and required, they should be based on the best available evidence—especially if this evidence questions received wisdom. Excessive and speculative warnings about the ill effects of AI on the public arena and democracy, even if well-intentioned, can also have negative externalities, such as reducing trust in factually accurate news and the institutions that produce it (Hameleers, 2023), or overshadowing other problems posed by generative AI, such as nonconsensual pornography, which disproportionately harms women even if it does not scale up (Kapoor & Narayanan, 2023), or the potential for identity theft and scams.

Our aim is not to settle or close the discussion around the possible effects of generative AI on our information environment. We also do not wish to simply dismiss concerns around the technology.
Instead, in the spirit of Robert Merton's observation that "the goal of science is the extension of certified knowledge" on the basis of "organized skepticism" (Merton, 1973, pp. 267–278), we hope to contribute to the former by injecting some of the latter into current debates on the possible effects of generative AI.

[1] https://misinforeview.hks.harvard.edu/article/misinformation-reloaded-fears-about-the-impact-of-generative-ai-on-misinformation-are-overblown/