https://arxiv.org/abs/2204.10850

arXiv:2204.10850 (cs)
Computer Science > Computer Vision and Pattern Recognition
[Submitted on 22 Apr 2022]

Title: Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation
Authors: Verica Lazova, Vladimir Guzov, Kyle Olszewski, Sergey Tulyakov, Gerard Pons-Moll

Abstract: We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis. While NeRF-based approaches are effective for novel view synthesis, such models memorize the radiance for every point in a scene within a neural network. Since these models are scene-specific and lack a 3D scene representation, classical editing operations such as shape manipulation or scene composition are not possible; hence, editing and combining NeRF-based scenes has not been demonstrated. With the aim of obtaining interpretable and controllable scene representations, our model couples learnt scene-specific feature volumes with a scene-agnostic neural rendering network. With this hybrid representation, we decouple neural rendering from scene-specific geometry and appearance: we can generalize to novel scenes by optimizing only the scene-specific 3D feature representation, while keeping the parameters of the rendering network fixed. The rendering function learnt during the initial training stage can thus be easily applied to new scenes, making our approach more flexible.
More importantly, since the feature volumes are independent of the rendering model, we can manipulate and combine scenes by editing their corresponding feature volumes. The edited volume can then be plugged into the rendering model to synthesize high-quality novel views. We demonstrate various scene manipulations, including mixing scenes, deforming objects, and inserting objects into scenes, while still producing photo-realistic results.

Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2204.10850 [cs.CV] (or arXiv:2204.10850v1 [cs.CV] for this version)
DOI: https://doi.org/10.48550/arXiv.2204.10850 (arXiv-issued DOI via DataCite)

Submission history:
From: Kyle Olszewski [view email]
[v1] Fri, 22 Apr 2022 17:57:00 UTC (29,181 KB)
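The hybrid representation described in the abstract — a learnt per-scene feature grid queried at 3D points, feeding a rendering network whose weights are shared across scenes — can be illustrated with a minimal sketch. This is not the paper's implementation: the grid resolution, feature dimension, and the tiny randomly initialized MLP (`sample_volume`, `SharedRenderer`) are placeholder assumptions, and real NeRF-style rendering would additionally integrate densities along camera rays.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_volume(volume, pts):
    """Trilinearly interpolate a feature grid at continuous 3D points.

    volume: (D, H, W, C) scene-specific feature volume.
    pts:    (N, 3) query points with coordinates in [0, 1]^3.
    """
    D, H, W, C = volume.shape
    res = np.array([D, H, W])
    xyz = pts * (res - 1)                 # continuous voxel coordinates
    lo = np.floor(xyz).astype(int)
    frac = xyz - lo
    out = np.zeros((len(pts), C))
    # Accumulate the 8 surrounding grid corners, weighted trilinearly.
    for corner in range(8):
        offs = np.array([(corner >> 2) & 1, (corner >> 1) & 1, corner & 1])
        idx = np.minimum(lo + offs, res - 1)
        w = np.prod(np.where(offs == 1, frac, 1.0 - frac), axis=1)
        out += w[:, None] * volume[idx[:, 0], idx[:, 1], idx[:, 2]]
    return out

class SharedRenderer:
    """Scene-agnostic stand-in for the rendering network: feature -> (RGB, density).

    The same instance is reused for every scene; only the feature volumes differ.
    """
    def __init__(self, c_in, hidden=16):
        self.W1 = rng.normal(size=(c_in, hidden)) * 0.1
        self.W2 = rng.normal(size=(hidden, 4)) * 0.1

    def __call__(self, feats):
        h = np.maximum(feats @ self.W1, 0.0)       # ReLU hidden layer
        raw = h @ self.W2
        rgb = 1.0 / (1.0 + np.exp(-raw[:, :3]))    # sigmoid keeps colour in (0, 1)
        sigma = np.maximum(raw[:, 3], 0.0)         # non-negative density
        return rgb, sigma
```

Generalizing to a new scene then amounts to optimizing a fresh feature volume against that scene's images while `SharedRenderer`'s weights stay frozen — the optimization loop itself is omitted here.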
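Because the rendering network is scene-agnostic, the scene edits the abstract lists (mixing scenes, inserting objects) reduce to array operations on the feature volumes before they are passed to the renderer. The helpers below (`insert_object`, `mix_scenes`) are illustrative assumptions, not functions from the paper; real edits would operate on learned, not synthetic, volumes.

```python
import numpy as np

def insert_object(target_vol, source_vol, src_box, dst_corner):
    """Copy a feature sub-volume (a rough 'object' region) from one scene into another.

    src_box:    ((z0, z1), (y0, y1), (x0, x1)) region in the source volume.
    dst_corner: (dz, dy, dx) placement corner in the target volume.
    """
    (z0, z1), (y0, y1), (x0, x1) = src_box
    block = source_vol[z0:z1, y0:y1, x0:x1]
    dz, dy, dx = dst_corner
    out = target_vol.copy()
    out[dz:dz + block.shape[0], dy:dy + block.shape[1], dx:dx + block.shape[2]] = block
    return out

def mix_scenes(vol_a, vol_b, split):
    """Compose two scenes by taking each side of an axis-aligned plane from a different volume."""
    out = vol_a.copy()
    out[:, :, split:] = vol_b[:, :, split:]
    return out
```

The edited volume is then fed to the shared rendering network unchanged, which is what makes these manipulations cheap: no retraining of the renderer is required.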