arXiv:2410.23168 [cs.LG]

Computer Science > Machine Learning

[Submitted on 30 Oct 2024]

Title: TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters
Authors: Haiyang Wang, Yue Fan, Muhammad Ferjad Naeem, Yongqin Xian, Jan Eric Lenssen, Liwei Wang, Federico Tombari, Bernt Schiele

Abstract: Transformers have become the predominant architecture in foundation models due to their excellent performance across various domains. However, the substantial cost of scaling these models remains a significant concern. The problem arises primarily from their dependence on a fixed number of parameters within linear projections: when architectural modifications (e.g., changing channel dimensions) are introduced, the entire model typically must be retrained from scratch. As model sizes continue to grow, this strategy incurs increasingly high computational costs and becomes unsustainable. To overcome this problem, we introduce TokenFormer, a natively scalable architecture that leverages the attention mechanism not only for computation among input tokens but also for interactions between tokens and model parameters, thereby enhancing architectural flexibility. By treating model parameters as tokens, we replace all the linear projections in Transformers with our token-parameter attention layer, in which input tokens act as queries and model parameters as keys and values. This reformulation allows progressive and efficient scaling without retraining from scratch. Our model scales from 124M to 1.4B parameters by incrementally adding new key-value parameter pairs, achieving performance comparable to Transformers trained from scratch while greatly reducing training costs. Code and models are available at this https URL.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2410.23168 [cs.LG] (or arXiv:2410.23168v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2410.23168

Submission history:
From: Haiyang Wang
[v1] Wed, 30 Oct 2024 16:19:00 UTC (671 KB)
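To make the abstract's core mechanism concrete, below is a minimal PyTorch sketch of a token-parameter attention layer as described above: input tokens act as queries against a learnable set of key-value parameter tokens, replacing a standard linear projection, and the model scales by appending new key-value pairs. This is a reading of the abstract only, not the paper's reference implementation: standard scaled dot-product attention with softmax is used as a stand-in for whatever normalization the paper actually employs, and names such as TokenParamAttention, num_param_tokens, and grow are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TokenParamAttention(nn.Module):
    """Sketch of a token-parameter attention layer standing in for a
    d_in -> d_out linear projection: input tokens are queries, and the
    learnable parameter tokens serve as keys and values."""

    def __init__(self, d_in: int, d_out: int, num_param_tokens: int):
        super().__init__()
        # Each of the num_param_tokens rows is one key-value parameter pair.
        self.key_params = nn.Parameter(torch.randn(num_param_tokens, d_in) * 0.02)
        self.value_params = nn.Parameter(torch.randn(num_param_tokens, d_out) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_in)
        scores = x @ self.key_params.t() / (self.key_params.shape[-1] ** 0.5)
        # Softmax is an assumption; the paper may use a different normalization.
        weights = F.softmax(scores, dim=-1)        # (batch, seq_len, num_param_tokens)
        return weights @ self.value_params         # (batch, seq_len, d_out)

    @torch.no_grad()
    def grow(self, extra_tokens: int) -> None:
        """Progressive scaling per the abstract: append new key-value
        parameter pairs rather than retraining from scratch. Zero
        initialization keeps the new pairs' contribution small; with this
        softmax stand-in the existing function is only approximately
        preserved."""
        device, dtype = self.key_params.device, self.key_params.dtype
        new_k = torch.zeros(extra_tokens, self.key_params.shape[1], device=device, dtype=dtype)
        new_v = torch.zeros(extra_tokens, self.value_params.shape[1], device=device, dtype=dtype)
        self.key_params = nn.Parameter(torch.cat([self.key_params, new_k], dim=0))
        self.value_params = nn.Parameter(torch.cat([self.value_params, new_v], dim=0))


# Hypothetical usage: build the layer, run a batch, then widen it in place.
layer = TokenParamAttention(d_in=768, d_out=768, num_param_tokens=1024)
y = layer(torch.randn(2, 16, 768))   # (2, 16, 768)
layer.grow(512)                      # now 1536 key-value parameter pairs
```

Note the design point the abstract emphasizes: because the parameter count is set by the number of key-value pairs rather than by fixed projection dimensions, scaling the model is an append operation on existing weights instead of a change of architecture.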