[HN Gopher] Nikon reveals a lens that captures wide and telephoto images simultaneously
       ___________________________________________________________________
        
       Nikon reveals a lens that captures wide and telephoto images
       simultaneously
        
       Author : giuliomagnifico
       Score  : 166 points
       Date   : 2024-12-26 14:39 UTC (4 days ago)
        
 (HTM) web link (www.digitalcameraworld.com)
 (TXT) w3m dump (www.digitalcameraworld.com)
        
       | Neywiny wrote:
        | Despite a number of what look like copy-paste articles, I see
        | no actual examples of what the pictures it takes look like.
        | Maybe we'll see some at CES, but until then these feel like
        | clickbait.
        
         | kmlx wrote:
         | official link:
         | https://www.nikon.com/company/news/2024/1219_01.html
         | 
         | still no photos, but they say more info will come during CES
        
           | londons_explore wrote:
            | I assume there are no photos because the actual images on the
            | sensor will look like gibberish - it'll effectively be two
            | different images overlaid, and look like a total mess.
            | 
            | However, feed that mess into AI, and it might be able to use
            | it to see both wide and far.
        
             | idlerig wrote:
             | If CAT scan imagery exists, I suppose this sort of image
              | processing shouldn't be impossible to do -- though I readily
             | admit I have no idea what the logic behind it might look
             | like without some sort of wavelength-based filtering that
             | would make a photographer shudder.
        
               | ska wrote:
                | Tomographic reconstruction is in principle pretty
                | straightforward (Radon transform). MRI is much (much)
                | harder, fwiw, though that's RF, not optics.
               | 
               | I don't think wavelength filtering will help you here, as
               | you don't control the input at all. Some sort of splitter
               | in the optical chain might, but you'd be halving your
               | imaging photons with all that entails. Or you can have
               | e.g. a telephoto center and a wide fringe. It's an
               | interesting idea.
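                | 
                | For the curious, a minimal sketch of that kind of
                | reconstruction with scikit-image (assuming
                | skimage is installed; forward projection, then
                | filtered back-projection to invert):
                | 
                |     # Radon-transform round trip, sketch only.
                |     import numpy as np
                |     from skimage.data import shepp_logan_phantom
                |     from skimage.transform import radon, iradon
                | 
                |     image = shepp_logan_phantom()
                |     angles = np.linspace(0., 180., 180,
                |                          endpoint=False)
                |     sino = radon(image, theta=angles)
                |     recon = iradon(sino, theta=angles,
                |                    filter_name='ramp')
                |     rms = np.sqrt(np.mean((recon - image) ** 2))
                |     print(f"RMS error: {rms:.4f}")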
        
         | kylebenzle wrote:
         | You are right.
        
         | dannyw wrote:
          | I mean, a lens is only a third of the camera, the other two
          | parts being the sensor and the ISP. A lens doesn't produce a
          | photo by itself.
         | 
         | This would be like someone announcing a new RAM innovation --
         | and people asking what its Cinebench or Geekbench score is.
        
           | pietro72ohboy wrote:
            | A lens is a BIG part of the final image you get. So much so
            | that the common advice on most photography forums is that,
            | within a given price range, you should buy the best lens you
            | can find and an okay camera. Camera tech, especially in large
            | dedicated full-frame and APS-C bodies, has plateaued since
            | 2018, and most cameras from that period take exceptionally
            | good pictures, even by today's standards. Thus, lens
            | availability, price, and quality, as well as AF tracking, are
            | what fundamentally differentiate modern cameras.
           | 
           | EDIT: I got pulled into the discussion without reading the
           | article. The lens is for industrial uses.
        
             | pvaldes wrote:
              | I would expect a big photo at okay resolution that contains,
              | inside it, an area of much higher resolution (the
              | teleobjective part). That special area can be cropped later
              | to obtain a much more detailed photo, with all the detail a
              | tele would bring.
        
             | throwanem wrote:
              | You're missing that this is not designed as a tool for
              | photographers, but rather as part of a collaboration with
              | Mitsubishi aimed at better situational awareness for
              | vehicle operators. The headline doesn't mention this, but
              | it's impossible to miss in the article.
        
               | haswell wrote:
                | In the context of the GP, I think the point still stands,
                | though, which is roughly: "the lens matters a lot".
               | 
               | Without knowing more about the optics, it's hard to know
               | how much of a role the sensor/ISP play in the innovation,
               | but those are well established and widely capable across
               | both photographic and industrial use cases.
               | 
               | Very curious to eventually learn more about this and
               | whether it might eventually find its way into traditional
               | cameras.
        
               | throwanem wrote:
                | Sure, I guess. But the whole discussion is so devoid of
                | subject-matter knowledge that it's like trying to argue
               | the pros and cons of different bowling balls in terms of
               | how well they pair with Brie.
               | 
                | Nikon is an optics company that's also made cameras for a
                | long time, and then very nearly stopped; before the Z
               | mirrorless line took off, the company's future as a
               | camera manufacturer was seriously in doubt. But even a
               | Nikon that had stopped making cameras entirely after the
               | D780 would still be an optics company. There is no
               | serious reason to assume the necessity of some sensor/ISP
               | "special sauce" behind the novel optics announced here to
               | make the system work. And considering where Nikon's
               | sensors actually come from, if there were more than novel
               | optics involved here, I'd expect to see Sony also
               | mentioned in the partnership.
               | 
               | Of course that's not to say photographic art can't be
               | made with commercial or industrial equipment; film
               | hipsters notwithstanding, pictorialism in the digital era
               | has never been more lively. But I would expect this to
               | fall much in that same genre of "check out this wild shit
               | I did with a junkyard/eBay/security system installer
               | buddy find", rather than anything you'd expect to see on
               | the other end of a lens barrel from a Z-mount flange.
        
               | jfengel wrote:
               | I couldn't tell from the article: is it for human
               | eyeballs or for computers?
               | 
               | If it's for eyeballs it would be nifty to know what kind
               | of image displays both kinds of information at once.
               | 
               | If it's for computers, what is the advantage over two
               | cameras right next to each other? Less hardware? More
               | accurate image recognition? Something else?
        
               | throwanem wrote:
               | These are questions for their CES presentation next week,
               | not for me.
        
             | sunnybeetroot wrote:
              | I recommend reading the article if you haven't already, as
              | it mentions this is for vehicles; there isn't a mention of
              | photographers.
        
             | 4ad wrote:
             | This is not a lens for photographers, it's an industrial
             | piece of technology...
        
           | surfingdino wrote:
           | Technically speaking you do not need a lens to capture an
           | image (see pinhole cameras) BUT for most applications a lens
           | is a necessity. It is the first part of the image capture
           | pipeline and has a huge influence over the final image.
        
           | fragmede wrote:
           | aren't there some cameras that have swappable lenses? Does
           | Nikon know anything about those?
        
             | alistairSH wrote:
             | Yes. And yes. What's your point? The use case mentioned is
             | AI/driver assist for vehicles.
        
         | ChrisMarshallNY wrote:
         | This is Nikon (and Mitsubishi). It will really work, but they
         | hold their cards close to their chest, and embargo sample
         | images (they don't like low-quality images getting out). They
         | probably plan something splashy for CES.
         | 
          | CES should be interesting this year.
        
         | arghwhat wrote:
          | The intended output is, in essence, a wide-angle photo with
          | much greater detail in the center, as if that portion had been
          | taken by a telephoto lens and placed on top the way smartphones
          | do it - but with none of the offset smartphones normally have
          | to deal with.
         | 
         | The processed result would be quite uninteresting to look at:
         | it's a wide-angle photo where the outer portion is much more
         | blurry than the center. Considering the current intended
          | application, the picture would probably be pretty mediocre.
         | 
         | This is very specific technology that solves very specific
         | problems (and might one day make its way to smartphones), but
         | not something I'd expect to produce glamor shots right now.
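          | 
          | As a rough sketch of that fusion with Pillow - assuming,
          | hypothetically, that the tele view covers exactly the
          | central half of the wide view's angle with no offset,
          | and with made-up file names:
          | 
          |     # Paste the sharper tele frame over the center of
          |     # a 2x-upscaled wide frame. Not Nikon's pipeline,
          |     # just an illustration of the idea.
          |     from PIL import Image
          | 
          |     wide = Image.open("wide.jpg")   # hypothetical
          |     tele = Image.open("tele.jpg")   # hypothetical
          |     W, H = wide.size
          |     out = wide.resize((W * 2, H * 2), Image.LANCZOS)
          |     tele = tele.resize((W, H), Image.LANCZOS)
          |     out.paste(tele, (W // 2, H // 2))
          |     out.save("fused.jpg")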
        
       | tyho wrote:
        | I'm imagining a variable focal length across the image plane,
        | decreasing as the distance to the centre increases.
        
         | RicoElectrico wrote:
         | I imagine two optical paths that coincide at the output. But
         | who knows, no details are provided, so it's only speculation.
        
       | CodeCompost wrote:
       | Let me guess, it's going to be "AI assisted"
        
         | bilinguliar wrote:
         | They started the development in 2020, so it may still be a
         | blockchain.
        
       | jeffreygoesto wrote:
        | It must be a lens that smoothly varies focal length depending
        | on the distance from the center. You want pedestrians and
        | bikes in the vicinity captured as wide-angle as possible, and
        | cars far away in the center of the image for emergency
        | braking and ACC when going straight on a highway.
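        | 
        | A crude way to simulate such a projection is a radial
        | remap with OpenCV - magnify the center, compress the
        | edges. The r**1.5 falloff below is an arbitrary choice,
        | purely for illustration:
        | 
        |     # Foveated warp: center magnified, edges squeezed.
        |     import cv2
        |     import numpy as np
        | 
        |     img = cv2.imread("road.jpg")   # hypothetical input
        |     h, w = img.shape[:2]
        |     cx, cy = w / 2, h / 2
        |     y, x = np.indices((h, w), dtype=np.float32)
        |     dx, dy = x - cx, y - cy
        |     r = np.sqrt(dx**2 + dy**2)
        |     rmax = r.max()
        |     # Sample from further out than we draw, so the
        |     # center gets more pixels per degree than the rim.
        |     src_r = rmax * (r / rmax) ** 1.5
        |     scale = src_r / np.maximum(r, 1e-6)
        |     map_x = (cx + dx * scale).astype(np.float32)
        |     map_y = (cy + dy * scale).astype(np.float32)
        |     out = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
        |     cv2.imwrite("foveated.jpg", out)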
        
         | mcdeltat wrote:
         | This is fascinating, I didn't think that was possible.
         | 
         | What would be the optimal sensor geometry for such a lens? The
         | distortion would be crazy, wouldn't it? Nowhere near a
         | rectilinear projection.
        
         | zokier wrote:
         | I don't know if it needs to be smoothly varying instead of
         | having just two zones with a step between them.
         | 
          | Either way, I'd expect it to need a significant amount of
          | processing to get anything useful.
        
       | mikewarot wrote:
        | I'm guessing it's a beam-splitting prism with 2 paths, one for
        | wide angle, and the other for telephoto. They used to make
        | cameras with 3 CCDs for color video.
        
         | ruined wrote:
         | the press release specifies there's no offset/parallax which
         | would rule this out
        
           | IshKebab wrote:
           | No it wouldn't?
        
           | tobyhinloopen wrote:
           | no because the light enters the same lens
        
         | londons_explore wrote:
         | I'm guessing it's a regular camera lens, _surrounded_ by a wide
         | angle lens which is donut-shaped.
         | 
         | Both images end up superimposed on the sensor, and there is
         | probably a lot of distortion too, but for AI that might not be
         | an issue.
        
           | 0_____0 wrote:
           | Needlessly complex, and machine vision camera users don't
           | like the ambiguity that comes with ML processing on the
           | frontend of their own stuff.
        
             | rurban wrote:
             | And I would have no idea how to calibrate it.
             | 
              | If it produces an EXR with clearly separate images from the
              | different lenses, fine. Like a 3D EXR with left and right.
        
               | londons_explore wrote:
               | Depending on the lens production process, the
               | relationship between the wide angle and regular angle
                | might be fully defined (i.e. you don't need to calibrate
               | it, you can just read the transformation matrices off the
               | datasheet and it's gonna be correct to within 0.1
               | pixels).
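                | 
                | A sketch of that with OpenCV - the 3x3 homography
                | here is an invented placeholder for whatever a
                | datasheet would actually specify:
                | 
                |     # Register the tele view onto the wide view
                |     # with a fixed, datasheet-style transform.
                |     import cv2
                |     import numpy as np
                | 
                |     H = np.array([[0.5, 0.0, 480.0],
                |                   [0.0, 0.5, 270.0],
                |                   [0.0, 0.0, 1.0]])  # made up
                |     tele = cv2.imread("tele.png")    # made up
                |     aligned = cv2.warpPerspective(
                |         tele, H, (1920, 1080))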
        
             | londons_explore wrote:
             | True, but if you're going for frontend ML, which is
             | effectively a black box anyway, you might as well have some
             | non-human-understandable bits in the optics and hardware
             | too.
             | 
             | Various designs for microlens arrays do similar things -
              | thousands of 0.001-megapixel images from slightly
             | different angles are fairly useless for most human uses,
             | but to AI it could be a very powerful way to get depth
             | info, cut the camera thickness by 10x, and have infinite
             | depth of focus.
        
               | 0_____0 wrote:
               | not sure how you took the idea "we want wide and narrow
               | views of the same perspective" and thought building a
               | light field camera might be a practical approach
        
               | mikewarot wrote:
               | While it is possible to build a consumer lightfield
               | camera (Lytro was one example), they aren't as magical as
               | you might think until you get much larger lens sizes than
               | people are going to tolerate to get appreciable zoom
               | range.
               | 
               | I did a bunch of manual creation of light-field photos
               | over the years.[1] To get interesting compositions, you
                | need an effective lens diameter of about 30 cm or more. To
                | get super-resolution good enough for zoom, you're probably
                | going to need something that size.
               | 
               | [1] https://www.flickr.com/photos/---mike---/albums/72177
               | 7202979...
        
               | 0_____0 wrote:
               | Not disputing the feasibility of light field imaging.
                | That approach really doesn't do anything for the use case
                | Nikon/Mitsubishi are showcasing with this camera. Light field
               | cameras have low resolution for their sensor sizes, lower
               | optical efficiency, are expensive to manufacture, require
               | processing that would make this a bad fit for the near-
               | realtime ADAS functions you need for automotive machine
               | vision, and have no advantage when it comes to favoring
               | one part of the image in terms of angular resolution.
               | 
               | Like, why even mention them?
        
         | 0_____0 wrote:
         | From my time with optics this seems the most likely setup.
         | Getting good optical efficiency out of the setup might take
          | some cleverness though - ideally you're not dumping loads of
          | light that is "out of frame" for one sensor.
         | 
          | The other option is concentric optics with a pick-off mirror
         | for the central light path. Bit harder to make but you get more
         | flexibility re: how much of your light collecting area gets
         | split to which sensor.
        
       | Anotheroneagain wrote:
       | This has no future with 200mpx sensors becoming common.
        
         | ruined wrote:
         | 200mpx sensors have an incredible future with this becoming
         | available
        
           | tobyhinloopen wrote:
            | You'd need an incredible amount of processing power to
           | process all that data, and it also assumes the lens is even
           | "sharp enough" to capture that resolution.
           | 
           | Even high-end full frame lenses + sensors with a fixed focal
           | length struggle to reach 200MP of detail. (60MP Sony A7RV
           | with pixel shift can take pictures with 240MP). No way this
           | weird monstrosity can get anywhere near 200MP in a moving
           | vehicle.
        
         | fredwu wrote:
         | Cropping isn't the same as capturing at a different focal
         | length.
        
           | wvbdmp wrote:
           | Isn't it optically? Ignoring lens imperfections and assuming
           | infinite resolution, you should get the same image cropping
           | vs. equivalent focal length, no?
        
             | tobyhinloopen wrote:
              | I think it is, yes. Cropping to the central 25% (by area)
              | of a 35mm F/2.0 frame, you'd get the equivalent of a 70mm
              | F/4.0, but with only 25% of the pixels, obviously.
        
               | GiovanniP wrote:
               | I expect depth of focus to be different.
        
               | Kubuxu wrote:
               | Yes, depth of focus will be larger, as signified by the
               | larger f-number.
        
               | tobyhinloopen wrote:
                | It will not; I specifically included the F-stops for that
                | reason.
               | 
               | The depth of field is determined by the focus distance
               | and the aperture of the lens. Both remain unchanged.
               | 
               | Note that 35mm F/2.0 is the same aperture as 70mm F/4.0.
               | Both lenses have an aperture of 17.5mm. (35/2.0 ==
               | 70/4.0)
               | 
               | You can easily verify this with your favorite zoom lens.
                | If you have a 24-70mm F/2.8 available to you, you can
               | verify by taking 2 pictures; one at 35mm F/2.8 and one at
               | 70mm F/5.6. Crop the 35mm one to 25% area (half the
               | width, half the height). Render both images to the same
               | size (print, fill screen, whatever) and see for yourself.
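                | 
                | The arithmetic, as a trivial sanity check (pure
                | geometry, ignoring lens-specific rendering):
                | 
                |     # DoF at a fixed focus distance is set by
                |     # the physical aperture diameter, which is
                |     # focal_length / f_number.
                |     def aperture_mm(focal_mm, f_number):
                |         return focal_mm / f_number
                | 
                |     print(aperture_mm(35, 2.0))  # 17.5 mm
                |     print(aperture_mm(70, 4.0))  # 17.5 mm, same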
        
             | GiovanniP wrote:
             | > assuming infinite resolution
             | 
              | this is an assumption that goes against the concept of
              | "f-number", so if one makes it, they should not expect to
              | arrive at anything sensible.
        
               | wvbdmp wrote:
               | I just meant sensor pixels, because you're obviously
               | losing those when cropping, but you get the same
                | perspective as from a larger focal length (since you're not
               | moving).
        
               | GiovanniP wrote:
               | I agree that the images correspond to the same region in
               | object space. Further assumptions on optical resolution
               | don't work well, as the optical resolution _depends_ on
               | the f-number.
        
               | Anotheroneagain wrote:
               | The angular resolution depends purely on the aperture
               | diameter, not the f-number. There should be no difference
               | between capturing the image in high resolution, and
               | blowing it up for a lower resolution sensor. All that
               | should be needed is a 200mpx sensor that can output the
               | entire frame in 12mpx, and 12mpx of the central area in
               | full resolution. It's similar to how our eyes work.
        
             | mcdeltat wrote:
             | I think it's not the same. Changing focal length changes
             | the perspective warping, right? That's why fisheye lenses
             | look crazy, and telephoto lenses "compress" depth. This
             | might be a function of the sensor geometry too, though.
        
               | john2x wrote:
               | Cropping the centre of a fisheye photo will look the same
               | as a normal or telephoto lens if they are taken at the
               | same distance (the crop will have less resolution of
               | course)
        
               | mcdeltat wrote:
               | After looking it up, yes you are right, they are the
               | same. I was thinking of changing the distance to subject
               | instead.
        
               | Anotheroneagain wrote:
               | Fisheye lenses look crazy because they are deliberately
               | made that way. Rectilinear lenses don't do it.
        
       | Torkel wrote:
       | So it's a fisheye lens?
       | 
       | If you plot pixels per degree over the field of view of a fisheye
       | lens you will see that vastly more pixels are dedicated to the
       | center "eye". And also the field of view is large. Which is what
       | this novel lens claims to also do.
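        | 
        | That plot is easy to approximate with ideal models: an
        | equidistant fisheye (r = f*theta) versus a rectilinear
        | lens (r = f*tan(theta)), both filling the same sensor
        | half-width at a 60-degree half-angle (real lenses will
        | deviate from both):
        | 
        |     # dr/dtheta ~ pixels per degree at each field angle.
        |     import numpy as np
        | 
        |     half = np.radians(60)
        |     theta = np.linspace(0.01, half, 5)
        |     r_fish = theta / half
        |     r_rect = np.tan(theta) / np.tan(half)
        |     d_fish = np.gradient(r_fish, theta)
        |     d_rect = np.gradient(r_rect, theta)
        |     for t, a, b in zip(np.degrees(theta),
        |                        d_fish, d_rect):
        |         print(f"{t:5.1f} deg  fish {a:.2f}  "
        |               f"rect {b:.2f}")
        | 
        | The fisheye's density is flat while the rectilinear lens
        | piles pixels toward the edge, so relative to a
        | rectilinear lens of the same coverage the fisheye does
        | favor the center.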
        
         | Onavo wrote:
         | The fisheye transform is destructive though. Reversing it is a
         | probabilistic process (not really a big problem now with
         | generative ML but still)
        
           | nuccy wrote:
            | No, it is not destructive; math-wise the transformation is
            | bidirectional and can be applied many times without any loss
            | of detail. The problem is sampling by the image sensor: some
            | pixels end up with a larger field-of-view than others, so a
            | reconstructed flat image of a fraction of the fisheye would
            | have varying sharpness across the frame.
        
           | hengheng wrote:
           | I wouldn't want generative ML to infest my car's safety
           | features tbh.
           | 
            | Fortunately, you are wrong.
        
         | vouaobrasil wrote:
         | No, a fisheye is still just a very wide lens with a single
         | focal length. This lens claims to have two focal lengths.
        
         | michaelt wrote:
         | It might be like that - but there are other options as well.
         | 
         | There are companies that make stereo lenses, capturing two
         | images side-by-side on a single sensor, for people who want to
         | take 3D photos on their interchangeable-lens cameras. And there
         | are "anamorphic" lenses that squeeze things horizontally but
         | not vertically - in digital terms, producing non-square pixels.
         | Very popular in films in the 70s and 80s. And when it comes to
         | corrective glasses, bifocal and varifocal/progressive lenses
         | are another common type of lens providing variable optical
         | properties.
         | 
         | Self-driving cars need to deal with both "stopped at a
         | crosswalk, are there pedestrians?" (which needs a wide view)
         | and "driving at 70mph, stopping distance about 300 feet, what's
          | that thing 300 feet away?" (which needs a zoomed-in view).
         | 
         | If you look at https://www.pexels.com/photo/city-street-in-
         | fisheye-16209078... for example - it's wide (which is good) but
         | the details at 300 feet ahead aren't winning any prizes. Far
         | more pixels are wasted on useless sky than are used on the road
         | ahead.
        
           | dan-robertson wrote:
            | Side-by-side seems unlikely in this case, as they claim both
            | lenses have the same optical axis. But good to mention in the
            | overview you give here.
        
       | usrusr wrote:
       | On an only slightly related note, I'd be happy if the same was
        | available on smartphones, _in software_: my mobile photography
       | is of the school "take a lot and discard almost as many" and
       | having to choose between the different lens/sensor pairs ahead of
       | snap is entirely alien to that process. So the camera software is
        | forever set to that main lens and all the other ones are just
       | dead weight in my pocket (and stuff manufacturers don't allow me
       | to not buy when I need a new phone, preferably one with a good
       | main camera)
       | 
        | I think I understand that the processor would not be able to read
       | out the sensors at the same time, but time-multiplexed bracketing
       | has been done before, it really should not be too hard or weird
       | to apply that concept to multiple sensors? (some sensors with
       | integrated memory might even be able to do concurrent
       | capture/deferred readout?)
        
         | disillusioned wrote:
         | This feels like the kind of thing that's so obvious it's hard
         | to believe it isn't being pursued... Google could, for example,
         | make a _huge_ splash with the Pixel 10 by presenting this with
         | the option of after-the-fact optical zoom or wide angle shots,
         | or using the multiple lenses for some fancy fusing for
         | additional detail. And to your point, DSLRs have been doing
         | deferred readout in the sense of storing to an on-device cache
         | before writing out to the SD card while waiting for previous
         | frames to complete their write ops... this same sort of concept
         | should be able to apply here.
         | 
         | I don't know much more about the computational photography
         | pipeline, but I imagine there might be some tricky bits around
         | focusing across multiple lenses simultaneously, around managing
         | the slight off-axis offset (though that feels more trivial
         | nowadays), and, as you say, around reading from the sensors
         | into memory, but then also how to practically merge or not-
         | merge the various shots. Google already does this with stacked
         | photos that include, say, a computationally blurred/portrait
         | shot alongside the primary sensor capture before that
         | processing was done, so the bones are there for something
         | similar... but to really take advantage of it would likely
         | require some more work.
         | 
         | But this is all by way of saying, this would be really really
         | cool and would open up a lot of potential opportunities.
        
           | forkerenok wrote:
           | > Google could, for example, make a _huge_ splash with the
           | Pixel 10 by presenting this with the option of after-the-fact
           | optical zoom...
           | 
           | Pardon my ignorance, but isn't this just an inferior version
           | of after-the-fact photo capture?
           | 
           | On a serious note, what do you really mean by this? I have
           | trouble imagining how that would work.
        
           | klausa wrote:
            | The biggest issue with doing this, for most people, is that
            | now each of your photos is 3x the size and they need to
            | spend more on their phones and/or cloud storage.
        
             | photorank wrote:
             | What we really need is a better way of paring down the N
             | photos you take of a given subject into the one or two
             | ideal lens*adjustments*cropping tuples.
             | 
             | I'm imagining you open a "photo session" when you open the
             | camera, and all photos taken in that session are grouped
             | together. Later, you can go into each session and some AI
             | or whatever spits out a handful of top edits for you to
             | consider, and you delete the rest.
             | 
                | Use case is for taking photos of children or other
                | animals, where you need approx 50 photos to get one where
             | they're looking at the camera with their eyes open and a
             | smile, then today you need to manually perform some
             | atrocious O(N*K) procedure to get the best K of the N
             | photos.
        
               | usrusr wrote:
               | Even a simple "A vs B" selection process would be an
               | improvement to what most of what we spray'n'pray
               | photographers currently use. It's been forever on my list
               | of mobile apps I might want to write (I expect that
               | similar things exists, but I also kind of expect that
               | they are all filled with other features that I really
               | would not want to have)
        
             | p1mrx wrote:
             | What if you make an AVIF image sequence, with the zoomed
             | photo followed by the wide angle photo? Presumably AV1 is
             | smart enough to compress the second based on the first.
        
         | eichin wrote:
         | If the Light L16
         | https://www.theverge.com/circuitbreaker/2018/4/10/17218758/l...
         | hadn't failed, you might have seen this (as the "weaker but
         | still useful" version of the tech spread down-market.)
         | 
         | But I'm not sure what you mean by "choosing lens/sensor pairs"
          | - do any modern phones even expose that? The Samsung version
          | is just "you zoom; occasionally the field of view jumps
          | awkwardly sideways because it switched lenses."
        
           | usrusr wrote:
            | But zooming _is_ lens selection (if implemented like that).
            | I don't want to be bothered with zooming. A full frame of each
           | sensor, pick and crop later.
           | 
           | I want to point the side with the cameras at whatever I want
           | to take a picture of, hit the release button and move on. All
           | that careful framing? That might be enjoyable to do ahead of
           | time if you take pictures of carefully arranged flower
           | bouquets or something like that, but that's very far from
           | many of the use cases of the always-at-hand camera. Give me
           | ultrawide, tele and whatever exists in between at a single
           | press of a button and let me go back to whatever I was doing
           | while the desire to persist a visual situation came up.
           | Memory is cheap, selection is a chore but one that can be
           | done time-shifted. I'm not suggesting to take away your ahead
           | of time framing, I'm just longing for an option to simply
           | read them all (sequentially, if necessary). Perhaps even
           | throw in a readout of the selfie-cam for good measure.
        
         | Uncorrelated wrote:
         | iPhones can do this. They support taking photos simultaneously
         | from the two or three cameras on the back; the cameras are
         | hardware-synchronized and automatically match their settings to
         | provide similar outputs. The catch is you need a third-party
         | app to access it, and you'll end up with two or three separate
         | photos per shot which you'll have to manage yourself. You also
         | won't get manual controls over white balance, focus, or ISO,
         | and you can't shoot in RAW or ProRAW.
         | 
         | There are probably a good number of camera apps that support
         | this mode; two I know of are ProCam 8 and Camera M.
        
       | deskr wrote:
       | A picture is worth a thousand words. Yet, in an article about a
       | camera lens, there isn't a single picture from it.
        
         | bayindirh wrote:
         | Maybe because that's the most exciting and revealing part of
         | the tech?
         | 
          | Or maybe the output is boring, i.e. two different output
          | streams with acceptably sharp images and well-controlled
          | distortion.
        
         | surfingdino wrote:
         | There're only 312 words in that article. Clearly not worth even
         | one picture.
        
       | interludead wrote:
        | It's nice to see innovation in optics, yet I think it's more
        | for niche scenarios like sports.
        
       | arghwhat wrote:
       | nit: All wide lenses also capture a "telephoto" image in the
       | center. The only thing a telephoto lens does is to spread that
       | image out over the whole sensor.
       | 
       | Maybe their lens is variable focal length, providing more
       | magnification in the center at a presumed cost of clarity.
        
         | mannykannot wrote:
         | I wonder whether that is entirely so (though the difference may
         | not be relevant for the intended applications of this lens.)
         | The reason I say this is that one can often tell whether a
         | telephoto lens is being used at long range, as opposed to a
         | normal lens at shorter distance, by the way the former seems to
         | compress the longitudinal axis (at least when one is watching a
         | movie rather than a still image.) This effect seems to me to be
         | independent of focal depth, in that it does not seem to depend
         | on having nearby or distant objects markedly out-of-focus,
         | though I may be mistaken about this. Do magnified long-distance
         | movies shot with normal focal-length lenses look like their
         | telephoto equivalents?
        
           | colanderman wrote:
           | Yes, mathematically telephoto is exactly just cropped wide
           | angle. The same light field is falling on the front element
           | of either lens so, modulo focus effects and distortion, it
           | could not be otherwise.
           | 
           | (Differing projections/distortions such as fisheye are
           | likewise exactly equivalent to mathematical transforms for
           | the same reason.)
           | 
           | You can see that this is true when using a (physical) zoom
           | lens. When zooming, there is no change in the projected image
           | of any sort other than it simply grows larger.
           | 
           | The effect you are referring to is complementary to parallax
           | and is likewise due exactly to physical proximity to or
           | distance from the subject being photographed. (Telephoto
           | lenses require the subject to be further away to remain in
           | the frame; in doing so, they move relatively closer to the
           | background, thus longitudinally compressing the scene.)
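            | 
            | A pinhole-projection sanity check of that claim
            | (ideal rectilinear lenses, arbitrary test points):
            | 
            |     # Doubling the focal length scales image
            |     # coordinates by exactly 2 - the same as a 2x
            |     # center crop, enlarged.
            |     import numpy as np
            | 
            |     pts = np.array([[1.0, 2.0, 10.0],
            |                     [0.5, -1.0, 4.0]])  # x, y, z
            | 
            |     def project(p, f):
            |         return f * p[:, :2] / p[:, 2:3]
            | 
            |     wide = project(pts, f=35.0)
            |     tele = project(pts, f=70.0)
            |     print(np.allclose(tele, 2 * wide))  # True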
        
           | tonyarkles wrote:
           | This is definitely a real effect. I can't doodle this out
           | right now but if you grab a piece of paper and draw out the
           | triangles for a pinhole camera with different focal lengths
           | you can see how the angles (and horizontal separation at the
           | image plane) are quite different with different focal
           | lengths.
        
           | MetaWhirledPeas wrote:
           | > Do magnified long-distance movies shot with normal focal-
           | length lenses look like their telephoto equivalents?
           | 
           | Generally, I think the answer is yes. But the more
           | complicated answer is that every lens design has its own
           | flavor of distortion. An image from a lens optimized for
           | telephoto shots is going to have slightly different
           | characteristics than a cropped image from a lens optimized
           | for wide shots.
           | 
           | The compressed effect I _think_ you are referring to is
           | likely attributable to perspective, most noticeable when the
           | movie does one of those zoom shots where they keep the
           | subject at the same relative size in the frame while moving
           | the camera in or out. Like on this shot:
           | https://youtu.be/in_mAvHu9E4?t=19
        
           | pdpi wrote:
            | The source of that compression effect is the relative
            | distances to the sensor of different elements in the picture
            | - so a result of long vs short range, rather than long vs
            | short focal length. At a long range, your nose is 1% closer
            | to the sensor than your eyes are. At a _really_ close range,
            | it's maybe 50% closer to the sensor than your eyes are. Given
            | a fixed range, though, you can achieve the same look by using
            | a long lens, or by cropping a shot taken with a shorter lens.
           | 
           | The thing is, distance and focal length aren't independent.
           | People don't usually shoot long lenses in close quarters, and
           | don't shoot distant subjects with wide angle lenses, we just
           | tend to fill our frames with the subject. That means the
           | compression effect is technically not related to focal
           | length, but in practice ends up showing up more when using
           | longer lenses.
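            | 
            | Plugging numbers into the nose example (the 3 cm
            | nose-to-eye distance is invented for illustration):
            | 
            |     # Apparent size scales as 1/distance.
            |     for d in (0.20, 3.0):
            |         ratio = d / (d - 0.03)
            |         print(f"{d} m: nose/eyes = {ratio:.3f}")
            |     # 0.2 m -> 1.176, the close-up "big nose" look
            |     # 3.0 m -> 1.010, the flat telephoto look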
        
       | SomeHacker44 wrote:
       | I am guessing it is some sort of anamorphic lens that has the
       | center portion telephoto and then smears out the wide angle
       | around the edges of the sensor. So you need a translation
       | algorithm to get back to a normal image.
        
       | diggernet wrote:
       | I imagine it taking pictures like this:
       | 
       | https://www.escherinhetpaleis.nl/escher-today/balcony/?lang=...
        
       | neallindsay wrote:
       | Like many here, my first thought was "telephoto is just a crop of
       | a wide-angle, so what are they bragging about?" Here's my
       | speculation:
       | 
       | Most lenses are "distortion corrected" because they assume they
       | will be displayed on a flat surface. A little explanation for
       | those not familiar: When you take a picture of a brick wall with
       | your camera parallel to the wall, notice that the bricks on the
       | edge of the photo are the same size as the ones in the center,
       | even though the edge bricks are further away from the camera.
       | This means more pixels are allocated per degree of view near the
       | edge of the field-of-view than at the center.
       | 
       | An "uncorrected" lens is basically what we would call a "fish-
       | eye" lens. Here (ideally) the same number of pixels are in a one-
       | degree circle in the center of the field-of-view as are in a one-
       | degree circle near the edge.
       | 
       | I don't think they would crow about just using an "uncorrected"
       | lens either, so I'm going to guess that this is a "reverse-
       | corrected" lens system where a one-degree circle in the center
       | gets _more_ pixels than it would at the edge. This would be the
       | obvious approach if they want a good center crop but want to
       | capture all the periphery as well.
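        | 
        | For what it's worth, one real projection that behaves
        | like that "reverse-corrected" idea is the orthographic
        | fisheye, r = f*sin(theta), whose pixel density peaks at
        | the center. Whether Nikon does anything like this is
        | pure guesswork on my part:
        | 
        |     # dr/dtheta for three ideal mappings, normalized
        |     # to 1 at the center: rectilinear piles pixels at
        |     # the edge, equidistant is flat, orthographic
        |     # favors the center.
        |     import numpy as np
        | 
        |     for deg in (0, 20, 40, 60):
        |         t = np.radians(deg)
        |         rect = 1 / np.cos(t) ** 2   # d/dt tan(t)
        |         equi = 1.0                  # d/dt theta
        |         orth = np.cos(t)            # d/dt sin(t)
        |         print(f"{deg:2d} deg  rect {rect:.2f}  "
        |               f"equi {equi:.2f}  orth {orth:.2f}")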
        
         | 0_____0 wrote:
         | This sounds pretty elegant but I don't think it's correct.
         | 
         | I don't have a good sense for what the R&D required to spin up
         | a new bespoke sensor is, but I think it's sort of high - I
         | assume there's a reason Nikon seem to source most of their
         | sensors from Sony. Assuming you get to use a custom sensor for
         | your camera, you also lose a bit of sensitivity at the "fovea."
         | 
          | Also, Nikon's own press release refers specifically to an
          | "optical lens system with both telephoto and wide-angle lens
          | functions," which leads me to believe this isn't an innovation
          | at the sensor level.
        
           | gavinsyancey wrote:
           | I don't think the parent comment is saying this is a new
           | sensor; rather that the lens spreads the center over a wider
           | area of the sensor (hence more pixels) and squeezes the edges
            | into a smaller area (hence fewer pixels).
        
           | neallindsay wrote:
           | I was talking about the lens achieving the difference in
           | image distribution across the sensor, not a sensor with non-
           | uniform pixels.
        
         | mortenjorck wrote:
         | Exactly; in other words, a telephoto may just be "a crop of a
         | wide-angle," but in a wide-angle lens, the area any given field
          | of view covers on a sensor will always be lower resolution
          | than if it were spread across the entire sensor by a telephoto
          | lens.
         | 
         | Given the automotive context of this product, I would expect
         | the goal was to maximize the resolution for center FOV (more
         | clearly resolve objects further down the road) while
         | simultaneously maximizing the overall FOV angle (see closer
         | objects in peripheral vision).
         | 
         | In practice, the raw image probably looks like the old
         | Photoshop "bulge" filter, and then somewhere in the image
         | pipeline, it will get reprojected into a regular image that is
         | quite blurry at the edges and becomes increasingly high-res
         | toward the center.
         | 
         | (Another way of looking at this would be that this is an
          | optical adaptation to the uniform, Cartesian nature of image
         | sensors, allowing a "foveated" image without a gradation in
         | sensor pixel density.)
        
       | DarkSucker wrote:
       | Neither the article nor official link gave much optical design
       | detail. Here's my guess. A (substantially) radially symmetric
       | system comprising a wide-angle positive-short-focal-length first
       | lens followed closely by a negative-short-focal-length whose
       | diameter is small (thus covering a small range of angles, in the
       | center field of view, from the first lens). A single central
       | sensor behind the negative element is for telephoto images, while
       | a collection of sensors distributed radially around the first
       | lens and off axis capture wide angle images.
       | 
       | Imagine a low-index ball lens in contact with a thin high-index
       | negative lens. That's the idea. I'm sure the real design uses
       | multiple surfaces/elements for each lens, and I'm sure it's
       | hyper-optimized. I'm interested to learn how close my guess is to
       | reality.
       | 
       | Apologies for the complex wording to describe geometry.
        
       ___________________________________________________________________
       (page generated 2024-12-30 23:01 UTC)