[HN Gopher] 3D Novel View Synthesis with Diffusion Models
___________________________________________________________________
3D Novel View Synthesis with Diffusion Models
Author : dougabug
Score : 36 points
Date : 2022-10-04 19:55 UTC (3 hours ago)
(HTM) web link (3d-diffusion.github.io)
(TXT) w3m dump (3d-diffusion.github.io)
| dr_dshiv wrote:
 | It seems like this could be used to create multiple views for
 | fine-tuning Stable Diffusion (textual inversion) from a single
 | image.
| dougabug wrote:
| This approach is interesting in that it applies image-to-image
| diffusion modeling to autoregressively generate 3D consistent
| novel views, starting with even a single reference 2D image.
| Unlike some other approaches, a NeRF is not needed as an
| intermediate representation.
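 | In pseudocode, the autoregressive loop is roughly the
 | following sketch (function names like `diffuse_view` are
 | illustrative, not the paper's actual API; the paper's released
 | code is in JAX, this is a plain NumPy stand-in for the control
 | flow):
 |
 | ```python
 | import numpy as np
 |
 | def diffuse_view(cond_image, cond_pose, target_pose, rng):
 |     """Stand-in for one image-to-image diffusion sampling pass.
 |
 |     A real model would iteratively denoise from Gaussian noise,
 |     conditioned on `cond_image` and the (cond_pose, target_pose)
 |     pair; here we return the conditioning image unchanged so the
 |     loop is runnable.
 |     """
 |     return cond_image.copy()
 |
 | def generate_trajectory(reference_image, poses, seed=0):
 |     """Autoregressively synthesize views along a camera path.
 |
 |     Each step conditions on a randomly chosen previously
 |     generated view (stochastic conditioning), which is what
 |     keeps the generated sequence 3D-consistent.
 |     """
 |     rng = np.random.default_rng(seed)
 |     views = [reference_image]     # index 0: the given 2D reference
 |     view_poses = [poses[0]]
 |     for target_pose in poses[1:]:
 |         k = rng.integers(len(views))  # random prior view to condition on
 |         new_view = diffuse_view(views[k], view_poses[k],
 |                                 target_pose, rng)
 |         views.append(new_view)
 |         view_poses.append(target_pose)
 |     return views
 |
 | ref = np.zeros((64, 64, 3))                  # dummy reference image
 | trajectory = [np.eye(4) for _ in range(5)]   # dummy camera poses
 | views = generate_trajectory(ref, trajectory)
 | print(len(views))  # one view per pose, including the reference
 | ```
 |
 | The point is that conditioning always flows from already
 | generated frames, so no NeRF or other explicit 3D structure is
 | ever built.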
| muschellij2 wrote:
| Soon to be the Face Back APP!
| mlajtos wrote:
| Ok, NeRFs were a distraction then.
| oifjsidjf wrote:
| >> In order to maximize the reproducibility of our results, we
| provide code in JAX (Bradbury et al., 2018) for our proposed
| X-UNet neural architecture from Section 2.3
|
| Nice.
|
| OpenAI shitting their pants even more.
| astrange wrote:
| Oh, OpenAI does more or less release that much. People don't
| have issues implementing the models from their papers.
|
| What they don't do is release the actual models and datasets,
| and it's very expensive to retrain those.
___________________________________________________________________
(page generated 2022-10-04 23:00 UTC)