https://arxiv.org/abs/2404.01332

Computer Science > Computation and Language
arXiv:2404.01332 (cs)
[Submitted on 29 Mar 2024 (v1), last revised 12 Nov 2024 (this version, v3)]

Title: Explaining Large Language Models Decisions Using Shapley Values
Authors: Behnam Mohammadi

Abstract: The emergence of large language models (LLMs) has opened up exciting possibilities for simulating human behavior and cognitive processes, with potential applications in domains such as marketing research and consumer behavior analysis. However, the validity of using LLMs as stand-ins for human subjects remains uncertain, owing both to glaring divergences that suggest fundamentally different underlying processes and to the sensitivity of LLM responses to prompt variations. This paper presents a novel approach based on Shapley values from cooperative game theory to interpret LLM behavior and quantify the relative contribution of each prompt component to the model's output. Through two applications, a discrete choice experiment and an investigation of cognitive biases, we demonstrate how the Shapley value method can uncover what we term "token noise" effects: a phenomenon where LLM decisions are disproportionately influenced by tokens that provide minimal informative content. This phenomenon raises concerns about the robustness and generalizability of insights obtained from LLMs in the context of human behavior simulation. Our model-agnostic approach extends to proprietary LLMs, providing a valuable tool for practitioners and researchers to strategically optimize prompts and mitigate apparent cognitive biases. Our findings underscore the need for a more nuanced understanding of the factors driving LLM responses before relying on them as substitutes for human subjects in survey settings. We emphasize the importance of reporting results conditioned on specific prompt templates and of exercising caution when drawing parallels between human behavior and LLMs.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2404.01332 [cs.CL] (or arXiv:2404.01332v3 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2404.01332 (arXiv-issued DOI via DataCite)

Submission history
From: Behnam Mohammadi [view email]
[v1] Fri, 29 Mar 2024 22:49:43 UTC (3,929 KB)
[v2] Fri, 13 Sep 2024 00:56:01 UTC (3,940 KB)
[v3] Tue, 12 Nov 2024 01:06:22 UTC (5,441 KB)
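
The underlying computation the abstract describes is the standard Shapley value: treat each prompt component as a player in a cooperative game whose payoff v(S) is the model's output when only the components in S are included (for example, the probability of choosing a given option), and attribute to component i its average marginal contribution, phi_i = sum over S subset of N\{i} of |S|!(n-|S|-1)!/n! * (v(S + {i}) - v(S)). The sketch below is not the paper's code; it is a minimal, exact (exponential-time) implementation under that framing, where shapley_values and the toy_value_fn stand-in are illustrative names. In practice value_fn would assemble the included segments into a prompt, query the (possibly proprietary) LLM, and return a scalar such as a choice probability.

```python
from itertools import combinations
from math import factorial

def shapley_values(components, value_fn):
    """Exact Shapley values over prompt components.

    components: list of prompt pieces (the "players").
    value_fn: maps a tuple of included components to a scalar payoff.
    Exponential in len(components); workable for the handful of
    prompt segments typically attributed this way.
    """
    n = len(components)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):  # subset sizes 0 .. n-1
            for subset in combinations(others, k):
                # Shapley weight |S|! (n-|S|-1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                without_i = tuple(components[j] for j in sorted(subset))
                with_i = tuple(components[j] for j in sorted(subset + (i,)))
                phi[i] += weight * (value_fn(with_i) - value_fn(without_i))
    return phi

# Hypothetical stand-in for an LLM call: scores a prompt by crude keyword
# cues so the example runs offline. A real value_fn would query the model.
def toy_value_fn(segments):
    prompt = " ".join(segments)
    return 0.5 + 0.3 * ("discount" in prompt) - 0.2 * ("fee" in prompt)

segments = [
    "You are a shopper choosing between two plans.",
    "Option A comes with a discount.",
    "Option B carries a small fee.",
]
for seg, phi in zip(segments, shapley_values(segments, toy_value_fn)):
    print(f"{phi:+.3f}  {seg}")
```

By the efficiency property, the per-component values sum to v(all components) - v(empty prompt), so segments whose attributions are large despite carrying little information are exactly the "token noise" effects the abstract refers to. For more than a dozen or so components, the exact sum is replaced in practice by Monte Carlo sampling over random player orderings.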