[HN Gopher] Lensless camera creates 3D images from single exposure
       ___________________________________________________________________
        
       Lensless camera creates 3D images from single exposure
        
       Author : sizzle
       Score  : 34 points
       Date   : 2022-09-09 17:58 UTC (5 hours ago)
        
 (HTM) web link (www.optica.org)
 (TXT) w3m dump (www.optica.org)
        
       | MaxikCZ wrote:
        | Title: "Lensless camera"
       | 
       | First sentence: "Researchers have developed a camera that uses a
       | thin microlens array..."
        
         | hinkley wrote:
         | Did we say "no lenses"? We meant to say, "all lenses".
        
       | gurumeditations wrote:
       | Lytro?
        
         | birdman3131 wrote:
         | My thought exactly.
        
       | 4ndrewl wrote:
       | "Lensless" but actually has multiple lenses?
        
       | postalrat wrote:
       | Hard to beat the computational power of a lens.
        
         | blevin wrote:
         | Is there a concise term for the computation it is doing,
         | besides something like "convolution with a gradually varying
         | kernel"?
         | 
         | It's weird to think the real computation is physics, and
         | machines or optical systems are nested abstractions hosted on
         | that.
         | 
         | I guess one person's low-level / "metal" / "wire" is always
         | someone else's API.
        
       | frozenport wrote:
       | Highlights that academia is not the place for cutting edge work.
        | What appears to be a change in an algorithm is sold as a new
        | lensless camera.
        
       | drewbeck wrote:
       | From the paper:
       | 
       | " To the best of our knowledge, we are the first to demonstrate
       | deep learning data-driven 3D photorealistic reconstruction
       | without system calibration and initializations, and we are the
       | first to demonstrate imaging objects behind obstacles using
       | lensless imagers."
       | 
       | That's their actual claim, worth examining.
       | 
        | Also AFAICT their camera has no large traditional lens at all,
        | whereas previous work -- i.e. Lytro -- put a traditional lens in
        | front of its microlens array.
        
       | birdman3131 wrote:
       | We have had this for years. The term is Light Field Camera. There
       | was an attempt several years back to do this with the Lytro
        | camera. It worked but had a few major issues. For one, the
        | resolution loss was fairly substantial. There were also a fair
        | few issues around the fact that the only way to share the photos
        | was a cumbersome vendor lock-in site, IIRC.
       | 
        | With more modern sensors you might be able to fix the resolution
        | issue; as I recall, the Lytro dates from around 2010 or so.
        
         | kjeetgill wrote:
         | The title is misleading about the work. The real novelty here
         | is the particular deep-learning processing for the 3D
         | reconstruction.
        
         | fxtentacle wrote:
         | Agree, this appears to be a pure PR piece.
         | 
         | So they built a camera with a microlens array and
         | refocussing... Like this one, which has been commercially
         | available for 10+ years? https://raytrix.de/products/
        
           | riedel wrote:
            | I think the catch is different on this one. They seem to co-
            | optimize the optics hardware and the processing software, and
            | they say that new manufacturing techniques enable this kind
            | of thing, which I think is cool. Still, I wonder how much
            | difference it will realistically make. In my experience (from
            | printed electronics), new customizable manufacturing
            | processes often come at some cost and expose a lot of new
            | effects, so it is difficult to break even against an approach
            | that keeps one side stable. Although, particularly in optics,
            | I guess the latest smartphone lenses have proven how good you
            | can get with such co-design approaches.
        
         | GloriousKoji wrote:
          | Resolution will always be an issue, since there's a trade-off
          | between dedicating a fixed sensor area to a single 2D plane
          | and splitting it across multiple planes in 3D.
        
           | jbay808 wrote:
           | That's true, but it might be mainly a cost issue. You can
           | make up for the resolution loss by using a much larger
           | sensor; since the microlenses are very thin, it won't
           | increase the weight or bulk of the device very much. Just the
           | cost.
        
             | aaroninsf wrote:
             | This.
             | 
             | The overview of the 48 megapixel camera on the iPhone 14
             | talked chipperly about the fact that the user-land
             | resolution is already decimated (well, quadrimated?) 4 to 1
             | to 12mp images, with the subtext "but those are highly
             | determined optimized pixels!"
        
               | wyager wrote:
               | The 48mp sensor on the iPhone seems like a gimmick - even
               | with purely diffraction-limited optics, I doubt you could
               | seriously increase optical bandwidth at that sensor size
               | beyond, say, 10-20Mp, to say nothing of aberration
               | limiting and shot noise constraints.
               | 
               | Just now we're starting to see 40Mp APS-C sensors, and
               | people are debating if it's worth it to do that on a
               | sensor that's like 10x larger than the biggest iphone
               | sensor.
        
               | foldr wrote:
               | Some simple calculations suggest otherwise. The iPhone 14
               | Pro 48MP camera has a pixel size of 1220 nanometers. The
               | diameter of the Airy disc at f1.78 for 500nm light is
               | 2170 nanometers. Seems like Apple has designed things so
               | that they're just on the right side of the Rayleigh
               | criterion.
               | 
               | The Rayleigh criterion is admittedly pretty lax from the
               | point of view of imaging. People often talk about digital
               | cameras being "diffraction limited" if the diameter of
               | the Airy disc is larger than the dimension of a pixel
               | (which is clearly the case for the iPhone). However,
               | AFAIK, there is no strict limit here. It's something of
               | an open question how good smart sharpening algorithms can
               | get - especially when they're allowed to 'cheat' by using
               | knowledge of the statistical properties of typical real
               | world scenes.
               | 
               | The pixel size for a 48MP APS-C sensor is about 2900
               | nanometers, so it's not 10x bigger (even if you compare
               | area).
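                | 
                | For anyone who wants to redo the arithmetic, here's a
                | quick sketch in Python (the pixel pitch and f-number are
                | the publicly quoted figures, so treat them as
                | approximations):
                | 
                |   # Airy disc diameter vs. pixel pitch, rough numbers
                |   wavelength_nm = 500       # green light
                |   f_number = 1.78           # quoted main-camera aperture
                |   pixel_pitch_nm = 1220     # quoted 48MP sub-pixel pitch
                |   airy_diameter_nm = 2.44 * wavelength_nm * f_number
                |   print(airy_diameter_nm)                   # ~2171 nm
                |   print(airy_diameter_nm / pixel_pitch_nm)  # ~1.8 pixels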
        
               | wyager wrote:
               | > It's something of an open question how good smart
               | sharpening algorithms can get
               | 
                | True, but you're strictly limited by what I assume are
                | microscopic gate capacitances on pixels that small. The
                | maximum number of electrons in an iPhone pixel is
                | probably less than 10k, leading to less than 7 bits of
                | maximum theoretical entropy per pixel from shot-noise
                | limits. This is going to sharply curtail (no pun
                | intended) your ability to "back out" Airy disk overlap.
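                | 
                | A rough version of that estimate (the ~10k full-well
                | figure is an assumption, not a measured spec):
                | 
                |   import math
                |   # Shot-noise ceiling: SNR of a Poisson signal is
                |   # sqrt(N), so usable dynamic range is roughly
                |   # log2(sqrt(full_well)) bits.
                |   full_well_electrons = 10_000
                |   max_bits = 0.5 * math.log2(full_well_electrons)
                |   print(max_bits)   # ~6.6 bits per pixel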
        
           | nico wrote:
           | I wonder if something like DAIN[1] could be used to take
           | fewer planes and then interpolate. Maybe that way the
           | tradeoff wouldn't be as bad?
           | 
           | 1: https://twitter.com/karenxcheng/status/1564636065410953217
        
         | sp332 wrote:
          | The article says this is an advance in the speed of processing
          | light field images. There's also the bit about seeing around
          | an opaque object; I'm not sure what that means, but it could
          | be an interesting first?
        
           | klodolph wrote:
            | > There's also the bit about seeing around an opaque object;
            | I'm not sure what that means, but it could be an interesting
            | first?
           | 
           | It's just a property of light field cameras.
           | 
           | Technically, other cameras can do it to some extent too. If
           | there's some opaque object with another object behind it,
           | 
           | 1. If you focus on the front object, you'll see the object
           | behind it--blurred.
           | 
           | 2. If you focus on the back object, you'll be able to see the
           | back object behind the blur of the front object.
           | 
           | With a light field camera, you can essentially "cut away" the
           | blurry parts and get them in focus. You're not able to see
           | anything new, you're just able to see things in the original
           | image more clearly.
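            | 
            | The "cut away the blur" step is essentially the classic
            | shift-and-add refocus over the recorded viewpoints. A toy
            | sketch of that idea (integer shifts only; nothing like the
            | paper's deep-learning reconstruction):
            | 
            |   import numpy as np
            | 
            |   def refocus(views, slope):
            |       """Shift-and-add refocus of a light field.
            | 
            |       views: (U, V, H, W) array, one small image per viewing
            |           direction (what a light field camera records).
            |       slope: pixels of shift per viewpoint step; different
            |           slopes bring different depths into focus.
            |       """
            |       U, V, H, W = views.shape
            |       uc, vc = (U - 1) / 2, (V - 1) / 2
            |       acc = np.zeros((H, W))
            |       for u in range(U):
            |           for v in range(V):
            |               du = round(slope * (u - uc))
            |               dv = round(slope * (v - vc))
            |               acc += np.roll(views[u, v], (du, dv), (0, 1))
            |       return acc / (U * V)
            | 
            | Pixels that line up across views at a given slope reinforce
            | each other and come out sharp; everything else averages into
            | blur, which is what lets you pick the focal plane after the
            | fact.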
        
             | sp332 wrote:
             | Sure, but that's not new. I was wondering if they did
             | something extra.
        
           | grumbel wrote:
           | Applied Science has a good video[1] showing how you can look
           | around objects. He is using a regular lens there, but with a
           | lightfield camera you can emulate basically any lens you want
           | in software as long as the captured lightfield is big enough.
           | 
           | [1] https://www.youtube.com/watch?v=iJ4yL6kaV1A
        
         | tgflynn wrote:
          | What is the basic principle behind this approach? A lens
          | essentially maps light rays (which have an origin and a
          | direction vector) to a position on the sensor (a pinhole does
          | the same but lets through much less light).
         | 
         | Did Lytro have a sensor that could actually measure both the
         | position and direction of an incident light ray ?
        
           | jbay808 wrote:
            | You more or less described it. If you put a microlens over,
            | say, each 3x3 pixel sub-array, then each of those 3x3
            | squares captures the locally incident light and its
            | direction (at low angular resolution). You can treat that
            | square as a single image pixel, but one that also carries
            | angular information describing how the light at that point
            | varies with viewing direction. Effectively you end up with
            | an array of tiny cameras.
           | 
           | Applying the same principle in reverse, you can also create a
           | 3D display by adding a microlens array on top of an image. If
           | you use the same microlens array to capture the image as to
           | display it (swapping out the sensor with a display, or a
           | camera film with its developed photograph) then optical
           | physics does all the necessary image processing for you.
           | 
           | The idea goes back at least as far as Gabriel Lippmann in the
           | 1900s: https://en.wikipedia.org/wiki/Lenticular_printing#Gabr
           | iel_Li...
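            | 
            | A toy version of that decoding step, assuming a perfectly
            | aligned n x n block of pixels under each microlens (a real
            | camera needs calibration for rotation, pitch and
            | vignetting):
            | 
            |   import numpy as np
            | 
            |   def subaperture_views(raw, n=3):
            |       # views[u, v] collects the (u, v)-th pixel under every
            |       # microlens, i.e. the scene as seen from one direction:
            |       # the "array of tiny cameras" as n*n low-res images.
            |       H, W = raw.shape
            |       # trim so the raw frame is a multiple of n on each side
            |       raw = raw[:H - H % n, :W - W % n]
            |       return np.stack([[raw[u::n, v::n] for v in range(n)]
            |                        for u in range(n)])
            | 
            |   views = subaperture_views(np.random.rand(300, 300))
            |   print(views.shape)   # (3, 3, 100, 100)
            | 
            | Each view keeps only 1/(n*n) of the sensor pixels, which is
            | exactly the resolution trade-off mentioned upthread.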
        
           | NovemberWhiskey wrote:
           | IIRC there was a micro-lens array between the objective lens
           | and the sensor; each of the micro lenses added a separate,
           | overlapping contribution on the sensor and with a bunch of
           | signal processing magic you somehow back out the
           | position/direction.
           | 
           | Reading the article, I don't at all see what the difference
           | is between the Lytro camera and what's described there -
           | although I'm not an optics expert.
        
             | tgflynn wrote:
             | One difference may be that, as I recall, Lytro was focused
             | on (2D) photography, while this seems to be targeting 3D,
             | which may be a much more compelling application.
        
               | NovemberWhiskey wrote:
               | I don't know - it seems pretty much the same - the whole
               | point of the Lytro camera was that you had sufficient
               | information in the recorded image to select your point of
               | focus and depth in post-processing, which is essentially
               | the same as having depth data.
        
               | tgflynn wrote:
               | The same information is used but there's a big difference
               | between using it to make prettier photos versus
               | reconstructing 3D scenes. The second application domain
               | seems to have much more potential for revolutionary
               | change than the first (to me at least).
               | 
               | The average person, or probably even the average VC,
               | isn't going to read a pitch about being able to take
               | photos without having to adjust lens focus and say "gee,
               | with that technology you could build a 3D model as easily
               | as taking a photo".
        
         | frozenport wrote:
          | The biggest problem was motivation: you could instead capture
          | the whole image and apply a digital blurring filter.
        
         | nradov wrote:
         | The Lytro camera allowed you to choose how to focus the image
         | _after_ taking the exposure. So naturally the only way to do
         | that was with their proprietary software because there was no
         | suitable industry standard. It did then allow for exporting in
         | standard formats like JPEG.
         | 
         | They tried to market to sports and nature photographers since
         | it eliminated the need to focus before taking a picture. Thus
         | they would have a better chance of capturing the perfect image
         | at exactly the right instant. Unfortunately, after doing the
         | software focus post-processing the images always looked a
         | little soft. And the autofocus features on DSLRs got faster and
         | more accurate. So there was no market.
        
           | nomel wrote:
           | > The Lytro camera allowed you to choose how to focus the
           | image after taking the exposure.
           | 
           | This isn't technically correct. Everything was in focus,
           | because the aperture was so small. The software would add an
           | artificial blur, using the 3d models generated.
           | 
           | So, you chose how to artificially blur the image, after
           | taking the exposure.
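            | 
            | In other words, something like this toy sketch, assuming you
            | already have a per-pixel depth map (the real Lytro pipeline
            | was of course far more sophisticated):
            | 
            |   import numpy as np
            |   from scipy.ndimage import gaussian_filter
            | 
            |   def fake_refocus(sharp, depth, focus_depth, levels=8):
            |       # Blur each pixel in proportion to how far its depth
            |       # is from the chosen focus plane, by picking from a
            |       # stack of uniformly blurred copies of the sharp image.
            |       sigmas = np.linspace(0.0, 8.0, levels)
            |       stack = np.stack([gaussian_filter(sharp, s)
            |                         for s in sigmas])
            |       blur = np.abs(depth - focus_depth)
            |       blur = blur / max(blur.max(), 1e-6) * (levels - 1)
            |       idx = np.clip(np.round(blur).astype(int), 0, levels - 1)
            |       rows, cols = np.indices(sharp.shape)
            |       return stack[idx, rows, cols]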
        
         | cryptonector wrote:
         | > There was also a fair few issues around the fact that the
         | only way to share the photos was a cumbersome vendor lock in
         | site iirc.
         | 
         | That's... not how you make a new product popular and make tons
         | of money. That's how you make a new product a niche product.
         | That's how you kill a product.
         | 
         | Vendor lock-in is for when you have already acquired the
         | mindshare. Users _hate_ vendor lock-in.
         | 
          | To start out with vendor lock-in baked in, you'd have to have
          | such an incredibly amazing and useful new product (one that's
          | in a category of its own) that users will accept the lock-in
          | because they have no other choice. The Lytro camera is not
          | such an amazing product.
         | 
         | Mindshare is extremely valuable. Vendor lock-in is a tool for
         | when your competition is rising, but it's very risky, as it may
         | close your product to new customers while making your existing
         | customers want to jump ship as soon as practicable. Vendor
         | lock-in is very risky not just for the customer but also for
         | the vendor.
        
       | sp332 wrote:
       | Why does "lensless" mean "has a bunch of lenses"?
        
         | pavlov wrote:
         | It's the same logic as "serverless"?
        
         | jfengel wrote:
          | Because it's a very different concept from a conventional
          | camera lens. It does not produce a focused image. Each
          | microlens corresponds to a small group of sensor pixels, which
          | receive light from multiple angles.
          | 
          | A regular lens manipulates the light so that each sensor
          | element corresponds to one pixel of the image, mimicking the
          | way a piece of film would produce a color at each spot.
         | 
         | With a light field camera the actual image has to be
         | constructed computationally afterwards. In effect, the software
         | computes the effect a lens would have had on the light --
         | meaning you can set the focus later.
         | 
          | There is a lens-like apparatus built over each group of sensor
          | pixels to let it gather the kind of data it needs, but it's
          | not at all like a conventional camera lens. You couldn't put
          | on a conventional camera lens, and none of the things that a
          | photographer uses a "lens" for are necessary. "Lensless" seems
          | as good a word as any for that: it accurately conveys a camera
          | that you do not and cannot focus, and that has no object
          | corresponding to the lens.
        
           | nomel wrote:
           | > It does not produce a focused image
           | 
           | I don't think this is technically correct. There are _many_
           | focused images being produced:
           | https://www.youtube.com/watch?v=rEMP3XEgnws
           | 
           | > but it's not at all like a conventional camera lens
           | 
            | The physics is _exactly_ the same as a conventional lens,
            | because each microlens is a conventional lens.
           | 
           | It's all conventional, up to the point of having to separate
           | all of the _real, focused,_ images to make one clear image.
           | 
           | For proof, if you block all the micro lenses, except one, you
           | would see a boring, focused, real image projected onto the
           | sensor. In fact, the original light field cameras, that you
           | could buy off the shelf, had a single, separate, image per
           | microlens. Looking at the image in this press release, it
           | looks the same.
        
         | [deleted]
        
         | zakki wrote:
          | I think the naming got infected by a computer virus named
          | "serverless".
        
         | jaclaz wrote:
         | >"We consider our camera lensless because it replaces the bulk
         | lenses used in conventional cameras with a thin, lightweight
         | microlens array made of flexible polymer," said research team
         | leader Weijian Yang from the University of California, Davis.
         | 
          | Opinions are opinions; personally I would consider it
          | multi-lens or poly-lens.
        
           | hackernewds wrote:
            | Multi-lens would imply more lenses, which wouldn't get the
            | same amount of attention or marketability.
        
         | anigbrowl wrote:
         | 'compound lens camera' would be better. Much of this work has
         | roots in the study of insect vision.
        
       | fortran77 wrote:
       | Except it's not lensless. It has an array of micro lenses.
        
       | whycombinetor wrote:
       | This technique wouldn't be novel even if it _was_ lensless.
       | Google search "lensfree holography" for plenty of existing
       | literature.
        
         | nomel wrote:
         | Or you can just go buy one: https://raytrix.de/
         | 
          | The original Lytro camera was on the market back in 2012, and
          | the Lytro Illum followed in 2014.
        
       ___________________________________________________________________
       (page generated 2022-09-09 23:01 UTC)