[HN Gopher] Using a laser to blast away a Bayer filter array fro...
___________________________________________________________________
Using a laser to blast away a Bayer filter array from a CCD
Author : _Microft
Score : 105 points
Date : 2021-08-18 09:18 UTC (13 hours ago)
(HTM) web link (hackaday.com)
(TXT) w3m dump (hackaday.com)
| ruined wrote:
 | Here's a direct link to the original YouTube video explaining and
 | demonstrating the process and results:
 |
 | https://youtu.be/y39UKU7niRE
 |
 | This is very exciting, and I wonder if a similar process could be
 | applied to consumer DSLR/MILC cameras. I would love to shoot some
 | high-quality video in UV/IR.
| opencl wrote:
 | It's certainly possible; there's already a company[1] that
 | sells cameras with this modification performed. A few cameras,
 | like the Leica Monochrom, even come from the factory with no
 | Bayer filter, but all the ones I know of are very expensive.
|
| [1] https://maxmax.com/shopper/category/9241-monochrome-cameras
| showerst wrote:
 | If you're into lasers at all, Les' Lab is a great channel.
|
 | It's really an example of the best part of YouTube: just a dude
 | who knows some stuff explaining how things work and showing off
 | shop-made projects.
| barbazoo wrote:
| Wow, that's interesting. I didn't know that's how CCDs worked. If
| I understand correctly, 1/3 of the "pixels" captured the red, 1/3
| green, 1/3 blue. Does that mean the sensor now has 3x the
| resolution it had before?
| AceJohnny2 wrote:
| Bayer filters are 50% green, 25% red, 25% blue for consumer
| devices.
|
| The reason is that green actually captures much more of the
| luminance information, and our eyes have a much better
| luminance resolution than color resolution.
|
 | Tangentially, it's why so-called YUV 4:2:0 (chroma
 | subsampling) is so effective: it encodes Y (luminance) data
 | for every pixel, but U/V (chrominance) only once per 2x2
 | block of pixels.
|
| There are examples online of pictures [1] with their luminance
| resolution decreased: you can immediately see the pixelation,
| and of their chrominance resolution decreased: you can barely
| tell the difference.
|
| [1]
| https://en.wikipedia.org/wiki/Chroma_subsampling#/media/File...
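The storage saving described above is easy to quantify: with 8-bit samples, 4:2:0 keeps one luma sample per pixel but only one U and one V sample per 2x2 block. A minimal sketch of the arithmetic (the 1920x1080 frame size is just illustrative):

```python
# 4:2:0: full-resolution luma (Y), quarter-resolution chroma (U, V).
# At 8 bits per sample this works out to 1.5 bytes/pixel vs 3 for 4:4:4.

def yuv420_bytes(width, height):
    y = width * height                      # one Y sample per pixel
    u = v = (width // 2) * (height // 2)    # one U/V sample per 2x2 block
    return y + u + v

def yuv444_bytes(width, height):
    return 3 * width * height               # full-resolution Y, U and V

print(yuv420_bytes(1920, 1080))  # 3110400 -- half the size of 4:4:4
print(yuv444_bytes(1920, 1080))  # 6220800
```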
| AceJohnny2 wrote:
| Extra fun fact: Bayer filters were developed by Bryce Bayer
| at Eastman Kodak, who first researched digital sensors.
|
| Despite such a head start, Kodak went on to completely fail
| the analog-to-digital camera transition. A prime example of
| the Disruption Dilemma.
| colonwqbang wrote:
 | YUV/YCbCr 4:2:0 means that there is one set of chroma samples
 | (Cb+Cr) for each 2x2 block of luma samples (pixels).
 |
 | Often, the chroma samples fall on pixels in even rows and even
 | columns, so pixels in odd rows or odd columns have to borrow
 | (interpolate) their chroma values from neighbouring pixels.
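In code, the "borrowing" described above amounts to scaling the low-resolution chroma plane back up to pixel resolution. A minimal nearest-neighbour sketch (real decoders typically interpolate more smoothly):

```python
def upsample_chroma(chroma, width, height):
    """Replicate each chroma sample across its 2x2 block of pixels.

    chroma is a (height // 2) x (width // 2) grid of Cb or Cr values.
    """
    return [[chroma[y // 2][x // 2] for x in range(width)]
            for y in range(height)]

cb = [[10, 20],
      [30, 40]]                    # chroma plane for a 4x4 pixel block
full = upsample_chroma(cb, 4, 4)
print(full[0])  # [10, 10, 20, 20]
print(full[3])  # [30, 30, 40, 40]
```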
| barbegal wrote:
 | The reality is that a lot of sensors provide greater resolution
 | than the lens can resolve, so the actual spatial resolution
 | barely changes.
| _Microft wrote:
 | Often it is even 50% green, 25% red and 25% blue pixels. There
 | are different patterns, though. The "megapixel" number quoted
 | for cameras counts subpixels individually; that is, a camera
 | labelled "10MP" does not have 10 million pixels of each color
 | but 10 million subpixels in total.
|
| https://en.wikipedia.org/wiki/Bayer_filter
|
| If you have a device that can output RAWs, you can look at a
| RAW image using the FOSS photo development program "Darktable".
| Choose "photosite color" as the "demosaic" filter to show the
| individual color channel values (and thereby the Bayer pattern
| of your camera).
|
 | But yes, after removing the filter, you have three times the
 | number of pixels but you lose the color information.
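To make the subpixel counting concrete, here is a hypothetical sketch of what an RGGB sensor records: each photosite keeps exactly one channel, so a "10MP" sensor yields 10 million single-colour samples in the 50/25/25 pattern, not 10 million of each colour.

```python
def bayer_mosaic(rgb, width, height):
    """Simulate an RGGB Bayer sensor: rgb[y][x] = (r, g, b) in,
    one single-channel sample per photosite out."""
    mosaic = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            r, g, b = rgb[y][x]
            if y % 2 == 0 and x % 2 == 0:
                mosaic[y][x] = r      # red sites: even row, even column
            elif y % 2 == 1 and x % 2 == 1:
                mosaic[y][x] = b      # blue sites: odd row, odd column
            else:
                mosaic[y][x] = g      # green sites: the remaining 50%
    return mosaic

# A uniform 4x4 test image: 8 of the 16 photosites record green.
image = [[(100, 150, 200)] * 4 for _ in range(4)]
m = bayer_mosaic(image, 4, 4)
print(m[0])  # [100, 150, 100, 150]
```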
| falcrist wrote:
| Since the "pixels" are either formed at every intersection of
| 4 photosites (overlapping each other) or by interpolating
| data for each color to include the "missing" photosites
| (which is effectively the same), the megapixel count should
| fairly accurately represent both the number of photosites and
| the number of pixels in the output image.
|
| I'm not exactly sure how the edge pixels are treated, but the
| difference in number between pixels and photosites should be
| on the order of a few thousand at most.
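The interpolation idea can be sketched with the simplest (bilinear) demosaic rule: every photosite becomes one output pixel whose missing channels are averaged from neighbours, which is why pixel count and photosite count match. A minimal illustration for one interior red site in an RGGB mosaic:

```python
def green_at_red_site(mosaic, x, y):
    """Bilinear demosaic step: estimate green at an interior red site
    by averaging its four green neighbours (the full algorithm applies
    a rule like this at every photosite)."""
    return (mosaic[y - 1][x] + mosaic[y + 1][x] +
            mosaic[y][x - 1] + mosaic[y][x + 1]) / 4

mosaic = [[0, 2, 0],
          [4, 9, 6],     # 9 is a red sample; 2, 4, 6, 8 are green
          [0, 8, 0]]
print(green_at_red_site(mosaic, 1, 1))  # 5.0
```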
| ansible wrote:
 | > _I'm not exactly sure how the edge pixels are treated..._
|
| It is quite common to have more than the "nominal" number
| of pixels in a sensor array. So there are extra pixels for
| the edges.
| falcrist wrote:
| Ah yes. I had forgotten about this. I believe there are
| also extra pixels at the edge of some sensors that are
| unexposed, and just used for calibration purposes.
| [deleted]
| barbazoo wrote:
| Are you saying that one photosite might be included in more
| than one pixel and therefore the overall pixel count is
| roughly equal to the number of photosites?
| falcrist wrote:
 | I'm saying that each photosite definitely _is_ included in
 | more than one output pixel, and I'm also saying that the
 | number of output pixels should be about the same as the
 | number of photosites.
|
| This is obviously capturing less information than if you
| had a completely separate set of photosites for each
| pixel, but the megapixel count of cameras is nevertheless
| accurate.
|
| Modern cameras sometimes come with a "pixel shift"
| function, which uses the image stabilization system to
| take 4 images each shifted one photosite from the others
| to construct an image where each pixel contains the
| information of 4 independent photosites with no sharing
| between the pixels.
|
| The resolution of the final image is the same as a normal
| image, but the result is much clearer, and far less
 | likely to suffer from blue/red moiré.
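A rough sketch of how the four pixel-shift exposures could be merged, assuming the camera has already aligned them so each exposure contributes one real channel sample (R, G, G, B) per pixel position. The function name and layout are illustrative, not any vendor's actual pipeline:

```python
def combine_pixel_shift(r, g1, g2, b):
    """Merge four shifted exposures: every output pixel gets a real R
    and B sample plus the mean of two real G samples, so no channel is
    borrowed from neighbouring pixels."""
    height, width = len(r), len(r[0])
    return [[(r[y][x], (g1[y][x] + g2[y][x]) / 2, b[y][x])
             for x in range(width)] for y in range(height)]

out = combine_pixel_shift([[10]], [[20]], [[30]], [[40]])
print(out)  # [[(10, 25.0, 40)]]
```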
| dr_zoidberg wrote:
| Demosaicing algorithms are _very good_ at restoring the
| resolution "lost" to the BFA. They can introduce some
| artifacts (zipper effect, "labyrinth", fringe color, to name
| a few) but in general, sharpness isn't lost as much as people
| imagine.
|
 | Nowadays the classical algorithms are being replaced by
 | convnets that are trained on different BFA/image pairs and
 | can get very good results -- at the cost of placing a convnet
 | in the middle (so much higher computational cost, which can
 | be offloaded to a GPU/AI accelerator if available).
|
| If you want to see what a "pixel perfect" camera gives you,
| there are the Sigma cameras with Foveon sensors[0] or you can
| check the cameras that have a sensor-shift superresolution
| approach (some pro Olympus and Hasselblad models have this
| feature). Sensor-shift SR has the problem that it works best
| on static scenes, because it takes several images which are
| then later combined on a single picture, and if there's
| movement between the images it may introduce a few artifacts.
|
 | [0] which do full color data for every pixel, as they use
 | silicon depth to filter wavelength
| passivate wrote:
| Foveon sounds great in theory, but it doesn't deliver IMHO.
| It can achieve parity in terms of pixel level sharpness and
| color at the lower ISOs, but picture quality breaks down
| very quickly even at moderate ISOs.
|
| https://tinyurl.com/yyrndzkk
|
| https://tinyurl.com/t9nnadc
| gsich wrote:
| OT: why URL shortener? This is not twitter with a length
| restriction.
| passivate wrote:
 | I've run into issues with forum software URL sanitizers
 | mangling URLs from DPReview. Maybe I should have just
 | tested it. Here we go!
|
| https://www.dpreview.com/reviews/image-
| comparison/fullscreen...
| gsich wrote:
| Thanks!
| karmakaze wrote:
| I always wondered if this was a good ratio. I get that green
| usually has the strongest signal and thus better low-light
| performance. For bright shots, I find that preserving higher
| resolution in blue results in higher perceptual resolution of
| the final image. You can simulate something like it by using
| an extreme 'night mode' more-red/no-blue display mode and
| watching a 4k video.
| dr_zoidberg wrote:
 | Green was chosen because it's the color the human eye is
 | most sensitive to. Look at Fuji's X-Trans[0], and there are
 | also RGBW arrays[1] that prioritize dynamic range.
|
| All in all, the BFA is "good enough" most of the times. For
| the use cases where it isn't, you're either:
|
| * Budget constrained and can't really afford not using BFA
|
| * Able to (pay for and) use either a color wheel in front
| of your sensor, or go with prism + triple sensor.
|
 | * Willing to bite the bullet and go with a "strange" color
 | array. You'll probably need to work on the demosaicing
 | software side to get proper support and fix eventual
 | artifacts.
|
 | [0] Even more green! 20/36 photosites are green, 8 red, and
 | 8 blue.
|
 | [1] With W being white, meaning no color filter
 | ("panchromatic cell"). In theory this helps in dim light
 | conditions.
| ipsum2 wrote:
 | Not specific to CCDs; CMOS sensors also have Bayer filters.
 | Actually, fancy cameras with CCDs skip Bayer filters
 | altogether by using prisms to split light:
 | https://en.wikipedia.org/wiki/Three-CCD_camera
| pjc50 wrote:
 | It always had the same resolution; it's just that beforehand
 | you had to process it down by 3x to get a colour image. What it
 | has now is more _range_, especially outside the visible
 | spectrum.
| zokier wrote:
 | While it's a neat technique, you could also just buy a
 | monochrome camera. The astrophotography community in
 | particular seems to like them, so that might be a good keyword
 | to search for.
| HeavenFox wrote:
 | True, astrophotographers like monochrome cameras because you
 | can prioritize gathering brightness signal over color signal,
 | so you get a more detailed photo; you can also use narrowband
 | filters and image under a full moon or in inner cities.
|
 | However, astrophotographers also complain about the price
 | premium of monochrome cameras. Given the same sensor, the
 | monochrome version is typically 20%-30% more expensive than
 | the color version, which is counterintuitive -- you don't
 | need to put the Bayer filter on! So if we can perfect the
 | technique to debayer a color sensor, the astrophotography
 | community would be elated.
| [deleted]
| 2bitencryption wrote:
| > because you can prioritize gathering brightness signal over
| color signal, so you get more detailed photo
|
| I wonder how long until phone cameras are purely monochrome,
| and apply ML to add the "correct" color in post-processing.
|
 | Actually, wasn't there some phone a few years ago with one
 | high-res black-and-white sensor and one low-res color sensor,
 | which combined them through some trickery to produce a sharp
 | color image?
| ansible wrote:
| > _the monochrome version is typically 20% - 30% more
| expensive than the color version, which is
| counterintuitive..._
|
| The market for monochrome sensors is very tiny compared to
| the rest of the commercial products. Every phone now has 2 or
| more cameras on it, and there are billions of those.
|
| Any changes to the manufacturing steps means more setup and
| effort. Different test procedures, quality control,
| documentation, etc.. That is all overhead, to be absorbed by
| a relatively small production volume.
|
| I'm surprised it is only a 30% premium, I'd have expected
| higher actually.
| spiantino wrote:
| I'm an avid astrophotographer, and the prices for cooled
| mono and cooled color cameras are the same. If you compare
| a dedicated, cooled astro camera to a consumer DSLR then
| yes, they are more expensive. But apples to apples they are
 | exactly the same price. Actually, looking right now, the mono
 | version is a bit cheaper:
|
| https://optcorp.com/products/zwo-asi6200mc-p - color $3999
|
| https://optcorp.com/products/zwo-asi6200mm-p - mono $3799
___________________________________________________________________
(page generated 2021-08-18 23:01 UTC)