https://arxiv.org/abs/2206.00272

Computer Science > Computer Vision and Pattern Recognition

arXiv:2206.00272 (cs) [Submitted on 1 Jun 2022]

Title: Vision GNN: An Image is Worth Graph of Nodes
Authors: Kai Han, Yunhe Wang, Jianyuan Guo, Yehui Tang, Enhua Wu

Abstract: Network architecture plays a key role in deep learning-based computer vision systems. The widely used convolutional neural networks and transformers treat the image as a grid or a sequence, which is not flexible enough to capture irregular and complex objects. In this paper, we propose to represent the image as a graph structure and introduce a new Vision GNN (ViG) architecture to extract graph-level features for visual tasks. We first split the image into a number of patches, which are viewed as nodes, and construct a graph by connecting the nearest neighbors. Based on this graph representation of images, we build our ViG model to transform and exchange information among all the nodes. ViG consists of two basic modules: a Grapher module with graph convolution for aggregating and updating graph information, and an FFN module with two linear layers for node feature transformation. Both isotropic and pyramid architectures of ViG are built with different model sizes. Extensive experiments on image recognition and object detection tasks demonstrate the superiority of our ViG architecture. We hope this pioneering study of GNNs on general visual tasks will provide useful inspiration and experience for future research. The PyTorch code will be available at this https URL and the MindSpore code will be available at this https URL.

Comments: tech report
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2206.00272 [cs.CV] (or arXiv:2206.00272v1 [cs.CV] for this version), https://doi.org/10.48550/arXiv.2206.00272

Submission history:
From: Kai Han [view email]
[v1] Wed, 1 Jun 2022 07:01:04 UTC (5,884 KB)
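To make the abstract's description concrete, here is a minimal PyTorch sketch of one ViG-style block: patch features are treated as graph nodes, a k-nearest-neighbor graph is built over them, a Grapher module aggregates and updates node features with a graph convolution, and an FFN module with two linear layers transforms node features. This is an illustrative sketch only, not the authors' released code (see the official repositories linked in the abstract); the choice of k = 9 neighbors, the max-relative style of aggregation, the feature dimension 192, and the helper names (knn_graph, Grapher, FFN, ViGBlock) are assumptions made for the example.

```python
# Hedged sketch of a ViG-style block (Grapher + FFN), NOT the official implementation.
import torch
import torch.nn as nn


def knn_graph(x, k):
    """For node (patch) features x of shape [B, N, C], return the indices of the
    k nearest neighbors of each node under Euclidean distance: [B, N, k]."""
    dist = torch.cdist(x, x)                                  # [B, N, N] pairwise distances
    return dist.topk(k, dim=-1, largest=False).indices        # includes the node itself


class Grapher(nn.Module):
    """Graph convolution over the k-NN graph of patch nodes (max-relative style, assumed)."""
    def __init__(self, dim, k=9):
        super().__init__()
        self.k = k
        self.fc_in = nn.Linear(dim, dim)
        self.fc_out = nn.Linear(2 * dim, dim)                 # concat(node, aggregated neighbors)

    def forward(self, x):                                     # x: [B, N, C]
        h = self.fc_in(x)
        idx = knn_graph(h, self.k)                            # [B, N, k]
        B, N, C = h.shape
        neighbors = torch.gather(
            h.unsqueeze(1).expand(B, N, N, C), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, C))        # [B, N, k, C]
        rel = (neighbors - h.unsqueeze(2)).max(dim=2).values  # aggregate relative neighbor features
        return x + self.fc_out(torch.cat([h, rel], dim=-1))   # residual node update


class FFN(nn.Module):
    """Two linear layers for node feature transformation, with a residual connection."""
    def __init__(self, dim, ratio=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, ratio * dim), nn.GELU(),
                                 nn.Linear(ratio * dim, dim))

    def forward(self, x):
        return x + self.net(x)


class ViGBlock(nn.Module):
    """One Grapher + FFN block applied to patch-node features."""
    def __init__(self, dim, k=9):
        super().__init__()
        self.grapher = Grapher(dim, k)
        self.ffn = FFN(dim)

    def forward(self, x):
        return self.ffn(self.grapher(x))


# Example usage: a batch of 2 images, each split into 14 x 14 = 196 patch nodes of dimension 192.
nodes = torch.randn(2, 196, 192)
out = ViGBlock(dim=192, k=9)(nodes)
print(out.shape)  # torch.Size([2, 196, 192])
```

In a full model, blocks like this would be stacked (isotropically or in a pyramid with downsampling between stages, as the abstract describes) on top of a patch-embedding layer, with a classification or detection head at the end.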