[HN Gopher] Point-E: Point cloud diffusion for 3D model synthesis
___________________________________________________________________
Point-E: Point cloud diffusion for 3D model synthesis
Author : smusamashah
Score : 120 points
Date : 2022-12-20 11:15 UTC (11 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| acreatureofhab wrote:
| This is bananas... the metaverse may be possible with tech like
| this being available for the masses.
| bilsbie wrote:
| Can I 3d print this?
| tarr11 wrote:
| See also Magic3D from Nvidia, which generates mesh models.
|
| https://deepimagination.cc/Magic3D/
| dr_dshiv wrote:
| Anyone else using Kaedim to translate 2d images to 3d models?
| https://www.kaedim3d.com/
|
| We made some midjourney lamps--and then printed them! Pretty
| cool.
| virtualritz wrote:
| Looking at their prices and the (impressive) quality of the
| mesh topology in the demo movie they have on their web page
| (the rat), I couldn't help but think this is a front that
| pretends to use pure AI but actually has real people
| (specialized mechanical turks) involved.
|
| Specifically for guiding generation of the mesh from a possibly
| AI-generated point cloud (PTC), e.g. using manual constraints
| on a mostly automatic quad (re-)mesher run as a post-process on
| the triangle soup obtained from meshing the original, AI-
| generated PTC.
|
| I.e.:
|
| 1. AI-generate PTC from image(s).
|
| 2. Auto-generate triangle mesh via marching cubes or whatever
| from the PTC (see the sketch after this list).
|
| 3. Quad re-mesh with mesh-guided automatic constraint discovery
| (think edges, corners etc.).
|
| 4. Manual edit quad-mesher constraints.
|
| 5. Quad re-mesh.
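|
| A minimal sketch of step 2, assuming the PTC is just an Nx3
| numpy array; the grid size and the naive occupancy-grid
| approach are my own assumptions, not anything Kaedim (or
| Point-E) documents:
|
|     import numpy as np
|     from skimage import measure  # scikit-image
|
|     def point_cloud_to_mesh(points, grid=64):
|         """Naive take on step 2: rasterize the point cloud
|         into a binary occupancy volume and extract a (blocky)
|         triangle soup with marching cubes. A real pipeline
|         would fit an SDF or use Poisson reconstruction."""
|         mins, maxs = points.min(0), points.max(0)
|         idx = ((points - mins) / (maxs - mins + 1e-9)
|                * (grid - 1)).astype(int)
|         vol = np.zeros((grid, grid, grid), dtype=float)
|         vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
|         verts, faces, _, _ = measure.marching_cubes(vol, 0.5)
|         return verts, faces
|
|     # dummy cloud just to exercise the call
|     verts, faces = point_cloud_to_mesh(np.random.rand(20000, 3))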
|
| That would explain their pricing, which seems a tad too high
| for a fully automatic solution: $600 for 30 models x 10
| iterations, i.e. each iteration would cost $2.
|
| Or maybe it's just so niche that the pricing simply reflects
| the small number of users for now, and it is indeed fully
| automatic.
|
| Curious to hear what other people involved in 3D and cloud
| compute think.
| punkspider wrote:
| How long does it take to convert 2d to 3d?
|
| I found out about Kaedim a few weeks ago and when I saw this
| repo, it came to my mind as well.
| dang wrote:
| Related:
|
| https://arxiv.org/abs/2212.08751 (via
| https://news.ycombinator.com/item?id=34060986)
|
| https://techcrunch.com/2022/12/20/openai-releases-point-e-an...
| (via https://news.ycombinator.com/item?id=34069231)
|
| https://twitter.com/drjimfan/status/1605175485897625602 (via
| https://news.ycombinator.com/item?id=34068271)
|
| (but no meaningful comments at those other threads)
| pavlov wrote:
| Maybe the lack of commenter enthusiasm is because point clouds
| are fairly specialized. Most people don't have interesting
| point cloud data lying around to test this with, or the means
| to capture such data.
|
| 3D sensors are slowly but surely becoming more common. The
| iPhone Pro series has one, and AR hardware designs tend to
| include these capabilities. So this model synthesis seems a bit
| ahead of the curve, in a good way.
| JayStavis wrote:
| Agree with you that point clouds aren't mainstream at all and
| most people aren't sure what they'd use them for.
|
| I think the premise of this is text-to-3D, and that because
| generation is quicker you don't really need anything besides
| a GPU to start playing around with it.
| uplifter wrote:
| Anyone with a recent (last five years) iPhone or iPad has the
| means to generate point cloud data using the depth sensors.
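|
| For what it's worth, once you have a depth map and the camera
| intrinsics off one of those devices, turning it into a point
| cloud is a few lines of pinhole-camera math. A rough Python
| sketch; the file name and intrinsics below are made up:
|
|     import numpy as np
|
|     def depth_to_point_cloud(depth, fx, fy, cx, cy):
|         """Back-project a depth map (meters) into an Nx3
|         point cloud using the pinhole camera model."""
|         h, w = depth.shape
|         u, v = np.meshgrid(np.arange(w), np.arange(h))
|         z = depth
|         x = (u - cx) * z / fx
|         y = (v - cy) * z / fy
|         pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
|         return pts[pts[:, 2] > 0]  # drop zero-depth pixels
|
|     depth = np.load("depth.npy")  # hypothetical capture
|     pc = depth_to_point_cloud(depth, fx=500.0, fy=500.0,
|                               cx=320.0, cy=240.0)
|     np.savetxt("cloud.xyz", pc)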
| speedgoose wrote:
| Do you have an app to recommend? And does it work well on
| small objects? The apps I tried were not very impressive.
| uplifter wrote:
| Sorry I don't know any store apps for such, my only
| experience is through personal corespondance/demos
| with/by developers experimenting with the hardware
| feature and sdk. Quick googling turns up some contenders
| but I can't vouch for them:
|
| https://apps.apple.com/ca/app/point-cloud-ar/id1435700044
|
| https://apps.apple.com/ca/app/point-precise/id1629822901
| 9wzYQbTYsAIc wrote:
| > Maybe the lack of commenter enthusiasm is because point
| clouds are fairly specialized.
|
| Please correct me if I am wrong.
|
| The pointcloudtomesh notebook seems to be able to output
| something that could be converted for 3D printing purposes.
|
| I haven't yet attempted to do so, but that does seem like an
| exciting and general purpose use case.
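|
| If the notebook hands you vertices and faces, getting to
| something a slicer will accept is mostly an export step. A
| sketch using the trimesh library (not part of the Point-E
| repo, as far as I can tell); the tetrahedron is just
| placeholder geometry standing in for the notebook's output:
|
|     import numpy as np
|     import trimesh
|
|     # placeholder vertices/faces; substitute the mesh produced
|     # by the point-cloud-to-mesh step
|     verts = np.array([[0., 0., 0.], [1., 0., 0.],
|                       [0., 1., 0.], [0., 0., 1.]])
|     faces = np.array([[0, 1, 2], [0, 1, 3],
|                       [0, 2, 3], [1, 2, 3]])
|
|     mesh = trimesh.Trimesh(vertices=verts, faces=faces)
|     mesh.fix_normals()  # consistent winding for slicers
|     # printers generally want a closed (watertight) mesh
|     print("watertight:", mesh.is_watertight)
|     mesh.export("model.stl")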
| FloatArtifact wrote:
| An alternate approach, although brute force, would be
| generating an image set using prompts and then using
| photogrammetry to convert it to 3D. Either way, I'm excited
| for this space to grow both in 3D prompt generation and
| alternate inputs through scanning. There's a difference
| between creative and functional use cases.
| cdcox wrote:
| Web demo for anyone interested takes about 2 minutes to run:
| https://huggingface.co/spaces/osanseviero/point-e
|
| Seems super fast, some are saying 600x faster [0], than the
| version based on Google's paper, but it is a little less
| accurate. Point clouds are less useful on their own, but some
| on Reddit and the authors have tools to try to convert them to
| meshes [1][2] (a rough sketch follows below the links). It
| does feel like Stable Diffusion-level generation of good 3D
| assets is right around the corner. It will be interesting to
| see which tech wins out, whether it's some variant of depth
| estimation like SD2 and non-AI tools can do, object
| spinning/multi-angle views like Google's tool does, or
| whatever this tool does.
|
| [0]
| https://twitter.com/DrJimFan/status/1605175485897625602?t=H_...
|
| [1]
| https://www.reddit.com/r/StableDiffusion/comments/zqq1ha/ope...
|
| [2]
| https://github.com/openai/point-e/blob/main/point_e/examples...
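|
| Roughly what the mesh conversion in [2] looks like, going from
| memory of the repo's example notebook; treat the module and
| function names below as assumptions and double-check them
| against the repo before relying on this:
|
|     import torch
|     from point_e.models.configs import (
|         MODEL_CONFIGS, model_from_config)
|     from point_e.models.download import load_checkpoint
|     from point_e.util.pc_to_mesh import marching_cubes_mesh
|     from point_e.util.point_cloud import PointCloud
|
|     device = torch.device(
|         'cuda' if torch.cuda.is_available() else 'cpu')
|
|     # small SDF model used to turn point clouds into meshes
|     model = model_from_config(MODEL_CONFIGS['sdf'], device)
|     model.eval()
|     model.load_state_dict(load_checkpoint('sdf', device))
|
|     # a point cloud produced earlier by the text- or
|     # image-conditioned sampler (path from the repo's examples)
|     pc = PointCloud.load('example_data/pc_corgi.npz')
|
|     mesh = marching_cubes_mesh(pc=pc, model=model,
|                                batch_size=4096, grid_size=32,
|                                progress=True)
|     with open('mesh.ply', 'wb') as f:
|         mesh.write_ply(f)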
| numpad0 wrote:
| > The main problem with mesh generation from stuff like this is
| that usually the topology is a mess and needs a lot of cleanup
| to be usable. It's not quite so bad for static non-deforming
| objects but anything that needs to be animated deforming or
| that is organic looking would likely need retopologizing by
| hand.
|
| > That's one of the worst parts of 3D modeling so it's like
| you're getting the AI to do the fun part and leaving you to do
| all the boring cleanup process.
|
| From [1]. Seems like there is a pattern of "AI asked to
| generate final results with only final results to learn from,
| immediately asked for the apple in the picture" in AI
| generators. I suppose the lack of specialization in the
| application domains of NNs is a deliberate design choice for
| these high-profile projects, in a vague hope of simulating
| emergent behaviors as seen in nature and avoiding becoming
| another expert system (while being one!), but that attitude
| seems to limit usefulness, here and again.
| codetrotter wrote:
| > Web demo for anyone interested takes about 2 minutes to run:
| https://huggingface.co/spaces/osanseviero/point-e
|
| It's a fun demo. Worth noting that on mobile it didn't include
| any button to download the generated point cloud data itself,
| at least not that I could find. Might be the same on desktop
| too.
|
| Additionally, I think the time taken depends on the number of
| visitors. I had to wait about 7 minutes for it to finish.
| speedgoose wrote:
| > We would like to thank everyone behind ChatGPT for creating a
| tool that helped provide useful writing feedback.
|
| I wonder how much of the research paper is written by ChatGPT.
___________________________________________________________________
(page generated 2022-12-20 23:00 UTC)