https://shihmengli.github.io/3D-Photo-Inpainting/

3D Photography using Context-aware Layered Depth Inpainting

Meng-Li Shih^1,2   Shih-Yang Su^1   Johannes Kopf^3   Jia-Bin Huang^1
^1Virginia Tech   ^2National Tsing Hua University   ^3Facebook

Abstract

We propose a method for converting a single RGB-D input image into a 3D photo, i.e., a multi-layer representation for novel view synthesis that contains hallucinated color and depth structures in regions occluded in the original view. We use a Layered Depth Image with explicit pixel connectivity as the underlying representation, and present a learning-based inpainting model that iteratively synthesizes new local color-and-depth content into the occluded region in a spatial context-aware manner. The resulting 3D photos can be efficiently rendered with motion parallax using standard graphics engines. We validate the effectiveness of our method on a wide range of challenging everyday scenes and show fewer artifacts compared with the state of the art.

Video Results

* Family Photos
* Legacy Photos
* 2.5D Parallax Effect
* Comparison with the State of the Art
* Dolly Zoom Effect

Links

* Supplementary Website
* Supplementary Results (zip)
* Supplementary Document (pdf)
* Evaluation Code (zip)
* Testing Set (RealEstate10K) (zip)
* Code (GitHub)
* Demo (Colab)

Paper

3D Photography using Context-aware Layered Depth Inpainting (pdf)

Citation

Meng-Li Shih, Shih-Yang Su, Johannes Kopf, and Jia-Bin Huang. "3D Photography using Context-aware Layered Depth Inpainting", in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

BibTex

@inproceedings{Shih3DP20,
  author = {Shih, Meng-Li and Su, Shih-Yang and Kopf, Johannes and Huang, Jia-Bin},
  title = {3D Photography using Context-aware Layered Depth Inpainting},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2020}
}

Acknowledgments

We thank Pratul Srinivasan for providing clarification of the method of [Srinivasan et al. CVPR 2019]. We thank the authors of [Zhou et al. 2018, Choi et al. 2019, Mildenhall et al. 2019, Srinivasan et al. 2019, Wiles et al. 2020, Niklaus et al. 2019] for providing their implementations online. Parts of our code are based on MiDaS, edge-connect, and pytorch-inpainting-with-partial-conv.

Copyright (c) Meng-Li Shih 2020