[HN Gopher] Colorize Lidar point clouds with camera images
___________________________________________________________________
Colorize Lidar point clouds with camera images
Author : shikhardevgupta
Score : 35 points
Date : 2024-08-17 08:36 UTC (14 hours ago)
(HTM) web link (medium.com)
(TXT) w3m dump (medium.com)
| shikhardevgupta wrote:
| Lidars are pretty powerful, but one big disadvantage of using
| point clouds for perception is that they are not colored. This
| makes identifying objects more difficult compared to camera
| images. However, by combining camera images with lidar data, we
| can enhance the point cloud by assigning colors to the points
| based on the corresponding camera image pixels. This makes
| visualizing and processing the point cloud much easier.
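|
| A minimal sketch of that projection step in Python/NumPy,
| assuming a pinhole camera with known intrinsics K and
| world-to-camera extrinsics R, t (names are illustrative, not
| the article's code):
|
|     import numpy as np
|
|     def colorize(points, image, K, R, t):
|         # points: (N, 3) lidar points in world coordinates
|         # image:  (H, W, 3) RGB image
|         cam = points @ R.T + t       # world -> camera frame
|         in_front = cam[:, 2] > 0     # points ahead of camera
|         uv = cam @ K.T               # apply intrinsics
|         uv = uv[:, :2] / uv[:, 2:3]  # perspective divide
|         u = np.round(uv[:, 0]).astype(int)
|         v = np.round(uv[:, 1]).astype(int)
|         h, w = image.shape[:2]
|         valid = (in_front & (u >= 0) & (u < w)
|                  & (v >= 0) & (v < h))
|         colors = np.zeros_like(points, dtype=float)
|         colors[valid] = image[v[valid], u[valid]] / 255.0
|         return colors, valid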
| polemic wrote:
| > _one big disadvantage of using point clouds for perception is
| that they are not colored_
|
| That depends entirely on the capture device.
| Groxx wrote:
| I'm not sure why you've just restated the first paragraph of
| the article.
| outofpaper wrote:
| Likely for the engagement... any bets as to whether they are a
| bot or not?
| PabloRobles wrote:
| Shameless plug, but I work on a multispectral lidar that does
| produce "colored" point clouds in the SWIR [0].
|
| It is pretty cool; we use it for estimating moisture levels or
| for material and species discrimination (e.g. plants,
| minerals, chemicals...).
|
| [0]: https://www.iridesense.tech/
| W0lf wrote:
| I worked on this as part of my thesis at university quite a
| few years back. One other optimization would be to process the
| points in parallel.
|
| Regarding the coloring of each 3D point, it might be feasible
| not to use a single camera image, but a weighted sum of all
| camera images that can see the same point in the scene. Each
| pixel color is then weighted by the scalar product of the
| point's normal and the viewing direction of the camera. This
| would also account for noise and specular reflections (which
| can corrupt the original color), as in the sketch below.
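|
| A sketch of that blend for a single point, where cameras is a
| hypothetical list of (camera_center, sampled_color) pairs and
| visibility plus pixel sampling are assumed to happen
| elsewhere:
|
|     import numpy as np
|
|     def blend_colors(point, normal, cameras):
|         total = np.zeros(3)
|         weight_sum = 0.0
|         for center, color in cameras:
|             view = center - point
|             view = view / np.linalg.norm(view)
|             # scalar product of normal and viewing direction,
|             # clamped so back-facing views get zero weight
|             w = max(np.dot(normal, view), 0.0)
|             total += w * np.asarray(color, dtype=float)
|             weight_sum += w
|         return total / weight_sum if weight_sum else total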
| f0ti wrote:
| Have been doing something similar to this using image to image
| translation (XYZ rendered images to RGB space domain). Most of
| the information is contained in the Z-axis which gives you the
| height information, e.g. helps to distinguish the grass and
| buildings color. However I am skeptical if the X and Y is noise
| and how much spatial information it provides during Conv blocks.
| Anyone who had previous experience on this?
|
| https://github.com/f0ti/thesis
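|
| For reference, a simplified guess at that kind of XYZ input
| (not read from the repo): a top-down orthographic raster whose
| three channels hold the X, Y, Z coordinates of the highest
| point in each cell.
|
|     import numpy as np
|
|     def rasterize_xyz(points, res=256):
|         lo, hi = points.min(axis=0), points.max(axis=0)
|         span = (hi[:2] - lo[:2]) + 1e-9
|         ij = ((points[:, :2] - lo[:2]) / span
|               * (res - 1)).astype(int)
|         img = np.zeros((res, res, 3), dtype=np.float32)
|         top = np.full((res, res), -np.inf)  # max z per cell
|         for (i, j), p in zip(ij, points):
|             if p[2] > top[j, i]:
|                 top[j, i] = p[2]
|                 img[j, i] = p
|         return img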
| W0lf wrote:
| As pointed out in my other comment, using a single image for
| point coloring is prone to errors due to noise, specular
| reflection and occlusion. I'd consider using a (normalized)
| cross-correlation approach with several images.
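|
| A minimal sketch of that check: the normalized cross-
| correlation of two same-size patches is close to 1 when they
| agree, so a view whose patch correlates poorly with the others
| can be down-weighted or rejected (illustrative, not the thesis
| code):
|
|     import numpy as np
|
|     def ncc(a, b):
|         # normalized cross-correlation of two same-size patches
|         a = a.astype(float).ravel() - a.mean()
|         b = b.astype(float).ravel() - b.mean()
|         denom = np.linalg.norm(a) * np.linalg.norm(b)
|         return float(a @ b / denom) if denom > 0 else 0.0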
| wsitch wrote:
| Check out https://m.youtube.com/watch?v=OjyxFGmcu74
| KaiserPro wrote:
| Lidars are expensive. If you want sparse point clouds that are
| not quite real time, you might want to check out colmap:
| https://colmap.github.io/
| ghayes wrote:
| Thanks, I've been looking into AI tools to generate point
| clouds from photos for a hobby robot. It's crazy that a
| mediocre LIDAR costs more than every other part of the robot
| combined, maybe ten times over.
| crtified wrote:
| Would an accurate ELI5 of this be:
|
| * Mathematically align the photograph and the lidar point cloud.
|
| * For each photograph pixel, colour whichever aligned lidar point
| is closest to the camera.
|
| So you end up with one coloured lidar point per photograph pixel?
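|
| The "closest to the camera" part is usually handled with a
| z-buffer, since several points can project to the same pixel.
| A sketch with illustrative names (u, v are projected pixel
| coordinates, depth is the camera-space depth of each point):
|
|     import numpy as np
|
|     def closest_point_per_pixel(u, v, depth, h, w):
|         nearest = np.full((h, w), -1)   # point index per pixel
|         best = np.full((h, w), np.inf)  # nearest depth so far
|         for idx in range(len(depth)):
|             if 0 <= u[idx] < w and 0 <= v[idx] < h:
|                 if depth[idx] < best[v[idx], u[idx]]:
|                     best[v[idx], u[idx]] = depth[idx]
|                     nearest[v[idx], u[idx]] = idx
|         return nearest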
| stargrazer wrote:
| Isn't there some math that crosses over between what lidar
| measures and what photogrammetry derives from overlapping
| photographs, i.e. depth-correcting, adjusting, or ground-
| truthing the images?
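|
| One such crossover is triangulation: intersecting viewing rays
| from overlapping photographs yields a depth that can then be
| checked against the lidar range. A midpoint-triangulation
| sketch (camera centers c1, c2 and unit ray directions d1, d2;
| illustrative names):
|
|     import numpy as np
|
|     def triangulate_midpoint(c1, d1, c2, d2):
|         # midpoint of the closest approach of two rays
|         b = c2 - c1
|         d = np.dot(d1, d2)
|         denom = 1.0 - d * d
|         if abs(denom) < 1e-9:   # near-parallel rays
|             return None
|         s = (np.dot(b, d1) - d * np.dot(b, d2)) / denom
|         t = (d * np.dot(b, d1) - np.dot(b, d2)) / denom
|         return 0.5 * ((c1 + s * d1) + (c2 + t * d2))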
___________________________________________________________________
(page generated 2024-08-17 23:00 UTC)