[HN Gopher] Film simulations from scratch using Python
___________________________________________________________________
Film simulations from scratch using Python
Author : lonesword
Score : 154 points
Date : 2021-04-28 09:17 UTC (13 hours ago)
(HTM) web link (kevinmartinjose.com)
(TXT) w3m dump (kevinmartinjose.com)
| steveBK123 wrote:
| Isn't the real crime here using Velvia to shoot street? :-) NPH
| would have been the choice if anything!
| rebuilder wrote:
| Back when Instamatic got popular, I was using an N900 and of
| course didn't have access to the app. So I made my own, by piping
| images over SSH to a server running a couple of Imagemagick
| scripts that applied one of a few LUTs I cooked up, and
| optionally some vignetting maybe.
|
| Worked OK, was completely pointless of course.
|
| Edit: I meant Hipstamatic. It's been a while.
| felixr wrote:
| LUTs are quite fun to play with. If you look for videos on "3D
| LUT Creator" you will find some cool things done using LUTs.
|
| If you are looking for a great and free tool to create LUTs,
| have a look at https://grossgrade.com/en/ It is not easy to
| find IMO; I only knew it existed because I had used it before,
| and it took me ages to find it again...
|
| Also, while I had no luck with 3D LUT Creator (trial) on wine,
| Grossgrade works fine :-)
| jbunc wrote:
| It's hard for me to see this as doing a good job of simulating
| film when it doesn't mention grains. Film has a granular
| structure where each "pixel" is a grain (crystal). Film is
| essentially already digital, but with a higher count of less
| regular pixels.
| kelsolaar wrote:
| The article title is a bit misleading, I was expecting actual
| simulation of film stock processing and rendering in Python. Here
| this is more about 3D LUTs usage, not much to do with film
| simulation itself.
| zokier wrote:
| Misnomer or not, 3D LUTs are what the industry means when they
| say film simulation, maybe with some grain effects sprinkled on
| top.
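Since "film simulation" in this sense boils down to sampling a 3D LUT, here is a minimal NumPy sketch (not the article's code) of trilinear lookup into an N x N x N table. The `apply_lut_trilinear` helper and the identity-LUT check are illustrative assumptions:

```python
import numpy as np

def apply_lut_trilinear(image, lut):
    """Apply a 3D LUT (shape (N, N, N, 3), indexed [r][g][b]) to a
    float RGB image in [0, 1] using trilinear interpolation."""
    n = lut.shape[0]
    # Scale [0, 1] into the LUT's index space [0, n - 1].
    idx = np.clip(image, 0.0, 1.0) * (n - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    frac = idx - lo

    r0, g0, b0 = lo[..., 0], lo[..., 1], lo[..., 2]
    r1, g1, b1 = hi[..., 0], hi[..., 1], hi[..., 2]
    fr, fg, fb = (frac[..., i:i + 1] for i in range(3))

    # Gather the 8 surrounding lattice points and blend them.
    c000 = lut[r0, g0, b0]; c100 = lut[r1, g0, b0]
    c010 = lut[r0, g1, b0]; c110 = lut[r1, g1, b0]
    c001 = lut[r0, g0, b1]; c101 = lut[r1, g0, b1]
    c011 = lut[r0, g1, b1]; c111 = lut[r1, g1, b1]

    c00 = c000 * (1 - fr) + c100 * fr
    c10 = c010 * (1 - fr) + c110 * fr
    c01 = c001 * (1 - fr) + c101 * fr
    c11 = c011 * (1 - fr) + c111 * fr
    c0 = c00 * (1 - fg) + c10 * fg
    c1 = c01 * (1 - fg) + c11 * fg
    return c0 * (1 - fb) + c1 * fb

# Sanity check: an identity LUT should leave the image unchanged.
n = 33
ramp = np.linspace(0.0, 1.0, n)
identity = np.stack(np.meshgrid(ramp, ramp, ramp, indexing="ij"),
                    axis=-1)
img = np.random.default_rng(0).random((4, 4, 3))
out = apply_lut_trilinear(img, identity)
```

A "film look" LUT would simply replace `identity` with a table whose lattice values encode the desired color response.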
| Cullinet wrote:
| Well, if you're referring to 3D LUTs as the objective of the
| exercise, it's probably helpful to start with a LUT volume
| capable of doing realistic color transformations. 33 x 33 x 33
| lattices are the standard used for broadcast color, and it's
| only recently that LUTs have been used for "color timing"
| features, because the effectiveness and effective precision of
| a 3D LUT is extremely dependent on your workflow, your working
| color space, and the details of your gamma and so on. OpenEXR
| is a floating point format for many reasons, including the
| handling of color.
|
| My point about the workflow involved is particularly relevant
| when emulating celluloid film stock, because the bandwidth of
| film is concentrated in the high values of the channels.
| Silicon sensors struggle with highlights fundamentally, even
| before the encoding. Photons hitting a piece of film are
| scattered throughout multiple layers (Fujifilm famously sold
| 4-layer negative film in retail volumes, with enough success to
| turn Kodak, who held virtually every important patent for
| digital cameras, into a schizophrenic jelly mess), and in
| comparison with the reflectivity of a silicon sensor, even
| allowing for a layer of sophisticated precision lenses on top
| channeling the light, film is incredibly accommodating,
| absorbing excess light without loss of detail. Sensors not only
| put lenses up front to corral the light rays; the next
| obstacles are the color matrix layers and the IR/UV cut
| filters, and you can sometimes see online amateur photography
| forums panicking at the sight of an overloaded sensor flaring
| into geometric diffraction patterns caused by the sensor
| silicon itself. The route available to a photon meeting a
| camera sensor is an utterly constrained path compared with the
| photon partying through the celluloid structure enjoyed by
| trendier EM radiation; this is why BSI (back-side illumination)
| tech, reflecting photons back into the sensor wells, has been
| so effective. The sad-case Brit in me took some time to get
| used to Back Side Illuminated sensors, meaning of course the
| sun shines out of the Sony high-megapixel fanny...
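The 33 x 33 x 33 lattice mentioned above is easy to materialise as an identity LUT in the common .cube text format (red varying fastest per the usual .cube convention). This generator is a hedged sketch, not tied to any particular tool:

```python
import numpy as np

def identity_cube(size=33):
    """Return the text of an identity 3D LUT in .cube format.

    Per the common .cube convention, red varies fastest, then
    green, then blue.
    """
    ramp = np.linspace(0.0, 1.0, size)
    lines = [f"LUT_3D_SIZE {size}"]
    for b in ramp:
        for g in ramp:
            for r in ramp:
                lines.append(f"{r:.6f} {g:.6f} {b:.6f}")
    return "\n".join(lines) + "\n"

text = identity_cube(33)
```

Editing the lattice values of such a file (rather than leaving them as the identity) is what turns it into a color transform.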
| CarVac wrote:
| Only my photo editor Filmulator does that, to my knowledge.
| lonesword wrote:
| For what it's worth, I've updated the post and put a disclaimer
| up front:
|
| > Disclaimer: The post is more about understanding LUTs and
| HaldCLUTs and writing methods from scratch to apply these LUTs
| to an image rather than coming up with CLUTs themselves from
| scratch.
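Since the post is about applying HaldCLUTs, a minimal sketch of the lookup may help: a level-L Hald image encodes an N = L^2 point cube, and the output color for lattice point (r, g, b) lives at flat index r + g*N + b*N^2. The nearest-neighbour helper below is an illustrative assumption (real implementations interpolate):

```python
import numpy as np

def make_identity_haldclut(level=4):
    """Identity HaldCLUT as a flat (level**6, 3) float array.

    A level-L HaldCLUT encodes an N = L*L point cube; entry
    r + g*N + b*N*N holds the output color for the input lattice
    point (r, g, b), red varying fastest.
    """
    n = level * level
    ramp = np.linspace(0.0, 1.0, n)
    b, g, r = np.meshgrid(ramp, ramp, ramp, indexing="ij")
    return np.stack([r, g, b], axis=-1).reshape(-1, 3)

def apply_haldclut(image, clut, level=4):
    """Nearest-neighbour lookup of a float RGB image in [0, 1]."""
    n = level * level
    q = np.clip(np.rint(image * (n - 1)), 0, n - 1).astype(int)
    flat = q[..., 0] + q[..., 1] * n + q[..., 2] * n * n
    return clut[flat]

clut = make_identity_haldclut(level=4)
img = np.random.default_rng(1).random((3, 3, 3))
out = apply_haldclut(img, clut, level=4)
```

With the identity CLUT, the only change is quantization to the 16-level lattice, so the output stays within half a lattice step of the input.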
| splintercell wrote:
| Some feedback: the image with the caption "Left - the original
| image, right - the image after applying the 12-bit identity
| CLUT" looks the most convincingly film-like. The last sample
| (Fuji Velvia 50) absolutely does not look like film at all (let
| alone Velvia 50); the main culprit is the shadows underneath
| the truck. I understand you're just applying RawTherapee's LUT
| there, but maybe you need to tweak the intensity down or play
| with the brightness.
| yesimahuman wrote:
| The Velvia simulation at the end is very good, nice work!
|
| As someone who still regularly shoots film and also owns a
| Fuji X Series camera, I don't find the film simulations that
| Fujifilm puts in the X models to be any good, so I feel like
| there is still a lot of worthwhile work to be done here.
| justtocomment wrote:
| I enjoy my Fuji X system camera and the colours it produces.
| Sometimes, however, I'd like to do some RAW processing in Linux
| (Darktable) - but of course this means that I lose in-camera
| film simulation.
|
| Since the camera can store the same photo in two different
| formats (RAW+JPEG), I'm wondering if it would be worthwhile to
| use a lot of these file pairs to try to derive a LUT that maps
| Fuji RAW files to Fuji-like JPEG results.
|
| Is there anybody knowledgeable here who can tell me whether
| this approach is doomed from the start or whether it could be
| promising?
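One naive way to attempt the RAW+JPEG fitting idea, sketched under strong assumptions (both renderings pixel-aligned, same size, values in [0, 1]; a real pipeline would need demosaicing and white balance first), is to bin the source colors into a coarse cube and average the target colors per bin:

```python
import numpy as np

def fit_lut_from_pairs(src, dst, n=9):
    """Estimate an n*n*n 3D LUT mapping `src` colors to `dst`.

    src, dst: float RGB arrays of the same shape, values in
    [0, 1], e.g. a neutral rendering and the in-camera JPEG of
    the same frame. Each LUT cell is the mean `dst` color of the
    `src` pixels that fall into it; empty cells fall back to the
    identity.
    """
    src = src.reshape(-1, 3)
    dst = dst.reshape(-1, 3)
    q = np.clip((src * (n - 1)).round().astype(int), 0, n - 1)
    flat = q[:, 0] * n * n + q[:, 1] * n + q[:, 2]

    sums = np.zeros((n ** 3, 3))
    counts = np.zeros(n ** 3)
    np.add.at(sums, flat, dst)      # accumulate target colors
    np.add.at(counts, flat, 1)      # and how many pixels hit each cell

    ramp = np.linspace(0.0, 1.0, n)
    lut = np.stack(np.meshgrid(ramp, ramp, ramp, indexing="ij"),
                   axis=-1).reshape(-1, 3)  # identity fallback
    seen = counts > 0
    lut[seen] = sums[seen] / counts[seen, None]
    return lut.reshape(n, n, n, 3)  # indexed [r][g][b]

# Synthetic pair: the "JPEG" is the "RAW" with a mild gamma curve.
rng = np.random.default_rng(2)
raw = rng.random((64, 64, 3))
jpeg = raw ** (1 / 2.2)
lut = fit_lut_from_pairs(raw, jpeg, n=9)
```

Averaging per cell ignores the camera's local contrast and hue twists between cells, which is one reason this approach plateaus quickly; it is a starting point, not a verdict on whether the idea is promising.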
| [deleted]
| xyzzy_plugh wrote:
| > I don't find the film simulations that Fujifilm puts in the X
| models to be any good
|
| I feel like you might be in the minority; I continually hear
| overwhelmingly positive remarks regarding Fuji's film sims.
| I'm not aware of anything comparable at that price point, and
| I have yet to see any non-pro film sim come close to the stock
| Fuji sims.
| CarVac wrote:
| The Fuji film sims are pleasing to the eye but by no means
| are they close to their namesake film stocks.
| yesimahuman wrote:
| Yeah, I think that's the primary issue for me. The Acros one
| is pretty good, but the others leave a lot to be desired.
| The Velvia simulation is basically just a high-saturation
| mode without the magic of Velvia. Classic Chrome doesn't
| look like any film stock I'm familiar with.
|
| Compare this to RNI film's preset pack for Lightroom, which
| I find much more interesting.
|
| I think a big reason they won't ever come close to true
| film is that proper film simulation is likely lossy
| (grain etc.), which no digital film simulation in a camera
| is going to risk.
| xyzzy_plugh wrote:
| > Compare this to RNI film's preset pack for Lightroom
| which I find are much more interesting.
|
| Sure but now you're paying a yearly subscription for
| Lightroom and buying third party presets. The Fuji
| profiles are free with the body and are pretty good for
| the vast majority of users.
|
| Besides, if you're developing from raw anyway, you probably
| don't care about the Fuji sims in the least. It's apples to
| oranges. For more casual JPEG shooters they're easily
| accessible and look great.
| zokier wrote:
| Skips the interesting question of how the LUT tables are made,
| but still a nice introduction to the topic.
|
| I guess to make a film simulation, you could photograph a
| bunch of color calibration targets (e.g. IT8) in different
| lighting conditions with both the film and the digital sensor,
| and then try to match them _somehow_. That is assuming the
| film is still available.
| camkerr wrote:
| Cinematographer Roger Deakins has a podcast and one of the
| episodes is with the colour scientist he works with, Joachim
| Zell.
|
| https://teamdeakins.libsyn.com/joachim-jz-zell-color-scienti...
|
| Lots of good info in that episode regarding LUTs, ACES and
| colour in film & tv.
| turnsout wrote:
| Film is definitely still available, and you can use a target
| as a starting point for a LUT, though the workflow is not
| straightforward. The core of the problem is that you're only
| sampling a fraction of the number of colors in the LUT, so to
| derive the rest of the LUT entries you need to interpolate and
| extrapolate.
|
| The trick is which algorithm you use to take the sparse 3D
| mesh of the calibration target and warp/interpolate the rest
| of the values. Trilinear would be the most naive (and
| lowest-quality) approach.
|
| There's a ton more detail about how to actually match digital
| to film in Steve Yedlin's blog [1], including a cool video of
| sparse color interpolation in 3D (toward the bottom of the
| page).
|
| [1] http://www.yedlin.net/NerdyFilmTechStuff/index.html
|
| [2] http://www.yedlin.net/OnColorScience/index.html
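One simple scattered-data scheme for that densification step is inverse-distance weighting, chosen here only for brevity (Yedlin's pages discuss far more sophisticated approaches): every lattice point's output is a distance-weighted mean of the measured patch outputs.

```python
import numpy as np

def densify_idw(samples_in, samples_out, n=17, power=2.0,
                eps=1e-12):
    """Fill an n^3 LUT from sparse (input -> output) color pairs
    by inverse-distance weighting in the input color space."""
    ramp = np.linspace(0.0, 1.0, n)
    grid = np.stack(np.meshgrid(ramp, ramp, ramp, indexing="ij"),
                    axis=-1).reshape(-1, 3)          # (n^3, 3)
    # Distance from every lattice point to every measured input.
    d = np.linalg.norm(grid[:, None, :] - samples_in[None, :, :],
                       axis=-1)                       # (n^3, k)
    w = 1.0 / (d ** power + eps)
    w /= w.sum(axis=1, keepdims=True)                 # normalize
    lut = w @ samples_out                             # (n^3, 3)
    return lut.reshape(n, n, n, 3)

# 24 patch-like measurements of a toy "film" that swaps R and B.
rng = np.random.default_rng(3)
patches_in = rng.random((24, 3))
patches_out = patches_in[:, [2, 1, 0]]
lut = densify_idw(patches_in, patches_out, n=9)
```

Because each lattice value is a convex combination of the measured outputs, IDW never extrapolates outside the sampled gamut, which is both its safety and its main limitation for this task.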
| hatsunearu wrote:
| The thing about LUTs is that they're only mathematically valid
| if you know the _type_ of data the "input" is -- and I'm not
| just talking about whether it's an 8-bit image or a 12-bit
| image.
|
| A LUT needs to be aware of the color space and EOTF (an
| extended idea of "gamma") -- which is why LUTs are only used
| in very controlled scenarios (e.g. for videography, where the
| input color settings are fully specified, for example Sony's
| S-Log, so the LUT is a reproducible, mathematically sound
| operation).
|
| "RAW" photos from cameras are in what we call a linear color
| space, where the RGB values correspond linearly to the amount
| of light received by each photosite. If you try to use a LUT
| designed for RAW on an sRGB JPEG image, you're going to have
| problems, at least without converting the color space first.
|
| It's why I kind of gave up on trying to use LUTs in photo
| editing; it's just so unreliable.
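The decode/apply/re-encode order the parent describes can be made concrete. The piecewise sRGB transfer curve below follows IEC 61966-2-1; `apply_linear_lut` is a hypothetical wrapper showing how to use a linear-domain transform on sRGB-encoded pixels:

```python
import numpy as np

def srgb_to_linear(c):
    """Invert the sRGB transfer curve (IEC 61966-2-1), c in [0, 1]."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045,
                    c / 12.92,
                    ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Apply the sRGB transfer curve to linear values in [0, 1]."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.0031308,
                    c * 12.92,
                    1.055 * c ** (1 / 2.4) - 0.055)

def apply_linear_lut(jpeg_srgb, lut_linear):
    """Wrap a transform that expects linear RGB so it can be used
    on sRGB pixels: decode, apply, re-encode."""
    return linear_to_srgb(lut_linear(srgb_to_linear(jpeg_srgb)))

# A linear-domain operation (halving exposure) applied to sRGB data.
img = np.array([[0.2, 0.5, 0.9]])
out = apply_linear_lut(img, lambda x: 0.5 * x)
```

Skipping the decode step and halving the sRGB values directly would darken midtones far more than one stop, which is exactly the mismatch the parent is warning about.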
| rubatuga wrote:
| I actually made a command line program in Python that lets you
| apply CUBE LUTs to any image of your choice:
|
| https://github.com/yoonsikp/pycubelut
|
| I'm trying to add a GPU acceleration feature using wgpu-py,
| but it was unfortunately too buggy the last time I tried, in
| January.
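For reference, the .cube format such tools consume is simple enough to parse by hand. This minimal parser is an illustrative sketch (it skips TITLE/DOMAIN_* keywords and comments, and assumes a well-formed file) that returns a table indexed lut[r][g][b]:

```python
import numpy as np

def parse_cube(text):
    """Parse a minimal .cube 3D LUT: a LUT_3D_SIZE line plus data
    rows with red varying fastest."""
    size, rows = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # comment or blank
        if line.upper().startswith("LUT_3D_SIZE"):
            size = int(line.split()[1])
        elif line[0].isdigit() or line[0] in "+-.":
            rows.append([float(v) for v in line.split()])
        # other keywords (TITLE, DOMAIN_MIN, ...) are ignored
    data = np.array(rows)
    if size is None or data.shape != (size ** 3, 3):
        raise ValueError("not a well-formed 3D .cube file")
    # File order is [b][g][r]; transpose so the result is
    # indexed lut[r][g][b].
    return data.reshape(size, size, size, 3).transpose(2, 1, 0, 3)

sample = """\
# identity, size 2
LUT_3D_SIZE 2
0 0 0
1 0 0
0 1 0
1 1 0
0 0 1
1 0 1
0 1 1
1 1 1
"""
lut = parse_cube(sample)
```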
| app4soft wrote:
| > _Right - the original image, Left - the image with..._
|
| Few issues on this site:
|
| 1) "Right" and "Left" are wrongly mapped (i.e. "Left" -- is the
| original image in all cases).
|
| 2) On 1280px width screen pair images shown in vertical order,
| instead of horizontal.
| lonesword wrote:
| 1) Fixed. Thanks for pointing this out.
|
| 2) The image comparison thing is a wordpress feature. Not sure
| if there's anything I can do to fix this.
| rasz wrote:
| 3) In Chrome the left and right image sizes are misaligned by
| a couple of pixels (scaling issues?). I experimented a little
| and
|
|     div.jx-image.jx-right img { right: 0; }
|
| seems to be the culprit;
|
|     div.jx-image.jx-right img { right: unset !important; }
|
| fixes it.
| uyt wrote:
| The title reminds me of a past submission which is about the use
| of python in the film industry:
| https://news.ycombinator.com/item?id=24826873
| carlob wrote:
| Completely OT: in case anyone is wondering, that's not what
| garbage trucks look like in Rome; that thing is someone's Ape
| [0] filled with garbage.
|
| [0] https://en.wikipedia.org/wiki/Piaggio_Ape
|
| (ape = bee, vespa = wasp: one is for work, the other for
| leisure, but same company)
| lonesword wrote:
| Yup, that was sloppy phrasing on my part - updated the image
| caption. Thanks.
| sldksk wrote:
| Pretty good, but that Velvia imitation looks a stop dark.
| Cullinet wrote:
| You'd expect it to be a stop down, because there's so little
| information to work with in the highest bits of digital (I
| don't remember gamma being applied, which is instrumental in
| handling the log/lin perception/recording mismatch). There's
| simply much less data in the highest ranges of digital unless
| you deliberately go about capturing additional information
| (ETTR, expose to the right, was the earliest widely adopted
| technique). The group of photographers I used to follow
| closely took extensive readings of the precise sensitivity of
| the RGGB channels to enable maximum information capture using
| tuned filtration and custom raw file converters. When the big
| camera companies hit the next limit with specifications, I am
| hoping they will finally address this capture optimisation
| issue, at least by providing better information and interfaces
| for developers and expert users who program.
|
| Velvia was a very important product...
|
| Velvia was launched by Fujifilm, the company that
| guerrilla-marketed the Los Angeles Olympic Games; for those
| who remember, Kodak was a huge official sponsor, and what my
| world in design and publishing (and software for the same)
| felt was a terrible misread of public sentiment by Kodak
| showed the unmistakable arrogance that quickly dismantled
| Kodak commercially thereafter...
|
| How we see the world is a lot more important to people than
| any research or surveys could establish...
|
| I have been mightily impressed with the latest Fujifilm film
| stock emulations on the GFX 100S model just out recently.
| (This 102MP "medium format" camera, which is the normal size
| of a larger SLR film camera body, and the simultaneously
| launched 80mm f/1.7 lens are a combination of image capture
| capabilities I think many HNers would be interested in if they
| could get hands-on experience with one.) Optical design is
| hitting diffraction limits so quickly that the best new lenses
| often don't become any sharper stopped down to smaller
| apertures than wide open; f/2.0 is becoming the sharpest
| aperture. Historically it was f/8, or very occasionally f/5.6,
| that was capable of the sharpest picture. For
| non-photographers: Fujifilm makes, or made, the Hasselblad
| cameras and lenses since the H series of autofocus models, and
| is considered possibly the best cine lens manufacturer if you
| are simply seeking perfection of sheer resolution. 30 years
| ago the longest usenet thread on the medium format digest,
| entitled "breaking the 50 lp/mm barrier", ran to 200 printed
| pages (yes, it was worth printing in its entirety!) and
| concluded, via countless means and calculations, that 50 pairs
| of separated lines visibly resolved by a lens at one meter
| from the test chart was as good as it gets. Today 200 line
| pairs per millimetre is increasingly common. The human eye
| with average 20:20 vision resolves 8 lp/mm at 1m. I'm
| currently evaluating purchasing a Fujifilm lens capable of
| projecting close to 200 lp/mm on the sensor right through its
| zoom range; this is completely phenomenal. Directors of
| photography have been deploying all manner of tricks to soften
| the image of actors' faces, e.g. using special diffusion only
| for wavelengths reflected by human skin. I'm convinced my
| iPhone is playing with subsurface scattering, bursting fill
| flash light somehow in portrait mode.
|
| There's a good rundown of the Fujifilm film stock simulations
| their digital cameras can perform here:
| https://www.bhphotovideo.com/explora/photography/tips-and-so...
|
| The whole thing with film is the 3D grain structure involved.
| Technicolor is/was "only" a halftone matrix of transferred
| organic dyes in the final printing of the projection positive.
|
| At the end of the 90s, before Kodak finally expired
| commercially, Fujifilm was pressing ahead non-stop, developing
| increasingly complex multi-layer film structures, including
| whole additional layers that were light-sensitive rather than
| only separating barriers. I remember being truly excited about
| what was going to happen in photo film technology until as
| late as 2002. I (my company) owned a Heidelberg Tango drum
| scanner until 2007. I'm not sure you can seriously use
| anything else now for archival-quality scanning, although I
| may be involved in acquiring the Blackmagic Design 35mm cine
| film scanner in the not-distant future; the resolution and
| color depths haven't improved since the days when the Tango
| was a halo acquisition for my business. In theory a lot could
| be improved, that's for sure. The 50 lp/mm limit opined by
| usenet would put the limit of a 35mm film image at about 25
| megapixels, if I'm not wrong. 24MP seems to be a very happy
| number for 90 percent of professional print work today.
| Cinema can be different, because our motion perception of
| resolution isn't much explored, and the low light output of
| projection yields roughly a million times fewer
| distinguishable colours (though because of MacAdam ellipses a
| few of those don't matter):
| https://en.m.wikipedia.org/wiki/MacAdam_ellipse
| ipsum2 wrote:
| Cullinet - if you had a blog talking about the technical
| aspects of photography/film, I would love to read it.
___________________________________________________________________
(page generated 2021-04-28 23:01 UTC)