Computer Science > Computer Vision and Pattern Recognition

arXiv:2207.02696 (cs) [Submitted on 6 Jul 2022]

Title: YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
Authors: Chien-Yao Wang, Alexey Bochkovskiy, Hong-Yuan Mark Liao

Abstract: YOLOv7 surpasses all known object detectors in both speed and accuracy in the range from 5 FPS to 160 FPS, and has the highest accuracy (56.8% AP) among all known real-time object detectors running at 30 FPS or higher on a V100 GPU. The YOLOv7-E6 object detector (56 FPS V100, 55.9% AP) outperforms both the transformer-based detector SWIN-L Cascade-Mask R-CNN (9.2 FPS A100, 53.9% AP) by 509% in speed and 2% AP in accuracy, and the convolution-based detector ConvNeXt-XL Cascade-Mask R-CNN (8.6 FPS A100, 55.2% AP) by 551% in speed and 0.7% AP in accuracy. YOLOv7 also outperforms YOLOR, YOLOX, Scaled-YOLOv4, YOLOv5, DETR, Deformable DETR, DINO-5scale-R50, ViT-Adapter-B, and many other object detectors in speed and accuracy. Moreover, we train YOLOv7 only on the MS COCO dataset from scratch, without using any other datasets or pre-trained weights. Source code is released in this https URL.

Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2207.02696 [cs.CV] (or arXiv:2207.02696v1 [cs.CV] for this version), https://doi.org/10.48550/arXiv.2207.02696

Submission history
From: Chien-Yao Wang
[v1] Wed, 6 Jul 2022 14:01:58 UTC (2,185 KB)
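A quick sanity check of the percentage figures quoted in the abstract: the speed gains follow from the ratio of the reported frames-per-second numbers, and the accuracy gains from the difference in reported AP. The short Python sketch below is not part of the paper's released code; it simply plugs in the benchmark numbers copied from the abstract (note the abstract's own caveat that YOLOv7-E6 was measured on a V100 while both baselines were measured on an A100).

    # Reported benchmark numbers copied from the abstract (FPS, AP%).
    # YOLOv7-E6 was timed on a V100; the two baselines on an A100.
    yolov7_e6 = {"fps": 56.0, "ap": 55.9}
    baselines = {
        "SWIN-L Cascade-Mask R-CNN":      {"fps": 9.2, "ap": 53.9},
        "ConvNeXt-XL Cascade-Mask R-CNN": {"fps": 8.6, "ap": 55.2},
    }

    for name, b in baselines.items():
        speedup = (yolov7_e6["fps"] / b["fps"] - 1) * 100  # percent faster
        ap_gain = yolov7_e6["ap"] - b["ap"]                # absolute AP points
        print(f"{name}: +{speedup:.0f}% speed, +{ap_gain:.1f} AP")

    # Expected output:
    # SWIN-L Cascade-Mask R-CNN: +509% speed, +2.0 AP
    # ConvNeXt-XL Cascade-Mask R-CNN: +551% speed, +0.7 AP

This confirms the abstract's 509% and 551% speed figures are relative speedups (ratio minus one) rather than raw ratios, and that the "2%" and "0.7% AP" accuracy gains are absolute AP-point differences.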