[HN Gopher] iPhone camera app replaces person's head with a leaf...
___________________________________________________________________
iPhone camera app replaces person's head with a leaf in photo
Author : davidbarker
Score : 705 points
Date : 2021-12-30 18:00 UTC (1 day ago)
(HTM) web link (twitter.com)
(TXT) w3m dump (twitter.com)
| mark-r wrote:
| The leaf over the face is much larger than the leaves in the
| background. That leads me to believe it was an actual leaf in the
| foreground hanging in an inconvenient place.
| matsemann wrote:
| I hate how much phones alter images these days. Of course,
| most of the time it makes the images look better, and cameras
| are a big selling point on a phone.
|
| But I don't like how my photos of people suddenly have a
| filter applied to the faces, how a picture of leaves in the
| fall has its vibrance exaggerated, or how the sky looks
| clearer than it really did.
| mgraczyk wrote:
| On Android you can enable a "raw" mode that will capture a dng
| file with much less processing. You can then adjust that raw
| file and render it to a jpeg according to your taste.
|
| The dng will still have things like stabilization and multi-
| frame merging, but without those the image will almost always
| look horrible as others have explained.
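|
| A minimal desktop sketch of that raw workflow, assuming the
| Python rawpy and imageio packages (file name and option
| choices below are illustrative, not the phone's actual
| pipeline):
|
|     # Render a phone-captured DNG to JPEG with manual choices.
|     # Assumes: pip install rawpy imageio
|     import rawpy
|     import imageio
|
|     with rawpy.imread("capture.dng") as raw:
|         # postprocess() demosaics; these options replace the
|         # phone's automatic tone/colour decisions with yours.
|         rgb = raw.postprocess(
|             use_camera_wb=True,   # keep as-shot white balance
|             no_auto_bright=True,  # skip auto exposure lift
|             output_bps=8,         # 8 bits per channel
|         )
|
|     imageio.imwrite("capture.jpg", rgb)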
| ehsankia wrote:
| RAW is not quite what they want (I think). It's HDR+ that
| they need to disable. The latter takes multiple frames at
| different exposures and smartly merges them into a single
| photo, then applies extra color corrections on top. Taking
| RAW skips the last step, but you still get a merging of
| multiple frames. Disabling HDR+ will, I believe, take a
| single frame. Of course on a small phone sensor, the quality
| will be quite bad.
| mgraczyk wrote:
| Yeah I work on HDR+. Depends on the phone though, only
| certain phones run full HDR+, but they all do some sort of
| multi-frame processing.
|
| If you were able to disable HDR+ and get a single captured
| frame, it would look horrible. You'd have to merge multiple
| frames yourself to get something decent. You can Google
| around for ways to do that but it will only work on a few
| phones.
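|
| For the curious, a toy version of that kind of merge in
| Python, assuming OpenCV and a burst saved as frame0.jpg
| through frame3.jpg -- a crude stand-in for what HDR+ actually
| does, which also aligns frames and rejects motion ghosts
| before fusing:
|
|     # Exposure-fuse a handheld burst into one frame.
|     import cv2
|     import numpy as np
|
|     frames = [cv2.imread(f"frame{i}.jpg") for i in range(4)]
|
|     # Mertens fusion weights each pixel by contrast,
|     # saturation and well-exposedness, then blends them;
|     # no true HDR radiance map is needed.
|     fused = cv2.createMergeMertens().process(frames)
|
|     out = np.clip(fused * 255, 0, 255).astype(np.uint8)
|     cv2.imwrite("merged.jpg", out)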
| kuschku wrote:
| The way HDR+ merges stuff (which looks awesome, but
| something is just _off_, especially in darker
| environments) is actually why I bought a Sony a6300. Used
| it was cheaper with a good lens than a Google Pixel would
| have been (my Pixel 1 stopped getting updates after all)
| and the photos it takes are incredible.
| pmontra wrote:
| How can we enable raw mode? I looked into the settings of the
| Camera app but found only a way to disable the scene
| optimizer.
| justsomehnguy wrote:
| https://f-droid.org/en/packages/net.sourceforge.opencamera/
|
| You need to change the API to Camera2 and then you will be
| able to enable RAW.
| roywiggins wrote:
| The Samsung photo app offers to store RAW files (when
| shooting in Pro mode) as a setting. YMMV on others.
| bigyikes wrote:
| I use a (paid) app called Halide, but there are others out
| there too. I'm personally not aware of a way to do it with
| the native Camera app, but it would be cool if it exists.
| justusthane wrote:
| Newer iPhones can also shoot in raw.
| pcurve wrote:
| Half the time it's the camera, the other half, it's the display
| setting. Samsung phones have different modes for their AMOLED
| screens.
| Jackson__ wrote:
| And furthermore, how it manages to make it all look like a
| painting when zoomed in. No matter if Android or iPhone, old or
| new, they all seem to have this really annoying effect.
| akomtu wrote:
| Big tech has this uni-modal approach to users: they find what
| maximizes a metric and works for 75% of the users, but roll it
| out to 100%. Dealing with the remaining 25% would have low ROI.
| amelius wrote:
| Not only tech. Try finding shoes or pants when your size is
| 1.5 times sigma away from the mean.
| iamacyborg wrote:
| As a 5'5" dude, buying clothes from American brands is often
| hilarious. I bought an XS Nike trail running t-shirt and
| it's a good 5 inches too long in the body.
| Skunkleton wrote:
| I'm a 6'6" dude and I feel your pain. Every time I get
| company swag, the best they can offer me is a super wide
| belly shirt :(
| GuB-42 wrote:
| Small people can use child size, which is usually much
| cheaper for the same thing.
|
| Large people are out of luck.
|
| But your point still holds, it is just that if you include
| children, small sizes don't follow a normal distribution,
| large sizes do.
| hetspookjee wrote:
| I never understood this one size fits all approach of a lot
| of companies. Controlled opposition would often result in
| higher market penetration and more net happiness.
| dylan604 wrote:
| Mr Smarty Pants has the solution for all cameras to be 100%
| accurate 100% of the time pleasing 100% of the users. How
| humble of you to chat with us here on HN. /s
|
| You sound like you have not dealt with humans at the scale
| of the number of smart phone users. I've never dealt with
| the numbers of something like an iPhone, but over the
| course of my career have had multiple SKUs totaling over 1
| million units. No matter what, there are always "people on
| the internet" that are dissatisfied enough to go online to
| voice their opinions. The only 100% is that there will always
| be someone unhappy for whatever reason(s).
| severine wrote:
| I don't know if digital camera emulation is a thing, but this
| thread is making me thirsty for a way to emulate my old 2MP
| Olympus.
| cfn wrote:
| I have some software that has "digital film" emulations (DXO
| FP), I suppose that is what they mean, emulating old cameras.
| davidcollantes wrote:
| Altering photographic images has always existed. 35 years ago I
| developed my own films, and took artistic liberty on each photo
| I printed on photographic paper. I also used a sepia process,
| and my sister would colour the photos afterwards (we didn't
| have colour film, but black and white).
| tomasyany wrote:
| Very much agree. I took RAW pictures with my Nikon at a
| Christmas party, and even took the time to properly develop and
| adjust them.
|
| Still, people complained about how "old and bad" their faces
| looked (in pretty normal pics, nothing fancy). I attribute this
| to the fact that everybody is now used to phones completely
| editing faces and smoothing skin and adding saturation, etc.,
| which makes us more "instagramable" although less human.
| ryandvm wrote:
| Just wait until the phones are automatically giving people
| cartoon character eyes...
| kingcharles wrote:
| You mean the cartoon filter that has been around since
| forever?
|
| https://www.youtube.com/watch?v=QuRNiyjVpEM
| johnisgood wrote:
| They already have been giving them a dog tongue and
| whatnot.
| mark-r wrote:
| I worked on Paint Shop Pro a long time ago, when they first
| added red eye correction. It did it not by manipulating the
| image, but by painting an artificial eye over the original.
| The dialog was hellishly complex since you had to specify
| not only how the new eye should look but how to make it
| match the original. But it delivered impressive results.
| Last I saw, it was scheduled to be replaced with something
| simpler and more traditional in the next version.
| teawrecks wrote:
| Wow! That seems like a WAY harder task.
| nitrogen wrote:
| It's already happening. I saw an extreme eye enlarging
| filter accidentally applied to a video by Matt Risinger
| (YouTube videos about construction and homebuilding).
| jjeaff wrote:
| I think the iPhone is doing this with the portrait mode. It
| may just be the smoothing or a bit of lens fish eye
| creating the illusion, but I swear that all the portrait
| photos of me and my family have slightly larger eyes than
| reality.
| Zak wrote:
| This is a feature of Snapchat, to give one example.
|
| https://www.google.com/search?q=snapchat+cartoon+eyes&tbm=i
| s...
| annexrichmond wrote:
| Yeah I find that the front facing camera on the iPhone is
| notoriously bad. The pictures it takes don't look like me
| because it alters the skin tone and does aggressive smoothing.
| I hate it.
| shp0ngle wrote:
| It's the market, unfortunately.
|
| People want it, and they want their faces to look better than
| they do in reality, to post on Instagram.
|
| The phone that takes better pictures gets better sales.
| barrkel wrote:
| I'm finding it increasingly hard to take photographs of unusual
| light in the sky, because the phone camera keeps trying to
| normalize it to something it's seen before (i.e. something
| which matches its tuning and machine learning models), whether
| it's auto white balance or other computational trickery.
| formerly_proven wrote:
| Sony used to have that problem with their real cameras a few
| years ago. It earned them the nickname "star eater" IIRC.
| mark-r wrote:
| The stareater feature was just noise reduction taken too
| far. A star looks just like a noisy pixel.
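|
| A minimal sketch of why, in Python with SciPy (illustrative,
| not Sony's actual algorithm): a lone bright pixel is
| indistinguishable from shot noise, so a plain median filter
| removes both.
|
|     import numpy as np
|     from scipy.ndimage import median_filter
|
|     sky = np.zeros((9, 9))
|     sky[4, 4] = 1.0        # a faint star: one bright pixel
|
|     cleaned = median_filter(sky, size=3)
|     print(cleaned[4, 4])   # 0.0 -- the "star" is gone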
| Derbasti wrote:
| A while ago I was trying to compare a few of my cameras. It was
| a cloudy day in early fall. All three cameras and my eyes
| agreed that the sky was dull-grey and the leaves on the trees
| drab-brown.
|
| To the iPhone, however, the sky was more blue than grey, and
| the leaves had an autumn-orange hue instead of brown. It wasn't
| a complete fake, and if I had only had the iPhone picture I
| wouldn't have noticed. But it was just a little bit better-
| than-real, and in direct comparison obviously incorrect.
|
| This, more than anything, discounted the iPhone camera for my
| uses.
| gwillen wrote:
| This kind of white-balance fuckery became very apparent last
| year, when the skies in the bay area turned orange and red
| from smoke. On many phone cameras, the photos were
| automatically color "corrected" to be completely wrong, with
| no option to avoid this.
| sethammons wrote:
| I have yet to capture the oranges, pinks, and violets of
| our amazing sunsets on a phone camera. It is always more
| mundane, and the vibrance is gone.
| naz wrote:
| This is so common in consumer tech. Is there a name for it?
| Like how any new TV has horrible motion interpolation and
| sharpening enabled by default, or the bassiness of Bose/Beats
| headphones.
| bcrosby95 wrote:
| It's common in pretty much anything targeted towards the
| masses.
|
| It's probably best to think about it in terms of food. Your
| average person has an "unrefined" palate. Be it for food,
| drink, art, etc.
|
| I think everyone has one of these in some areas of life. You
| can't be a connoisseur in every field - it takes too much
| energy.
| laumars wrote:
| I've never owned Bose or Beats specifically, but more
| generally I find bassiness is a desirable feature rather than
| a gimmick for dumb consumers.
|
| With room sized speakers it's not a problem because you'll
| have multiple cones dedicated to the low end and usually some
| subs too. Thus it's easy to have a rich low end without
| sacrificing the fidelity of the higher end. But with
| headphones that's _much_ harder to pull off. So you either
| have a flatter sound or a muffled high end. Thus having
| headphones that can have a super crisp top end while still
| producing a rich and deep low end is very much
| desirable.
| Benjamin_Dobell wrote:
| > _I find bassiness is a desirable feature rather than a
| gimmick for dumb consumers_
|
| It's perfectly reasonable to find bass a desirable quality.
| Depending on my mood I'll listen to music with lots of
| bass, or with little bass. However, I've zero desire to
| intentionally alter the frequency response so I'm hearing
| something different than the musicians and mixing engineer
| intended. Instead I'll just listen to appropriate music for
| my mood/taste.
|
| Intentionally having a non-flat frequency response is
| equivalent to adjusting the colour space / colour grading
| of your monitor to not accurately represent colours. You
| _can_ do it, and there are reasons why you might want to do
| it temporarily e.g. blue light filtering in the evening.
| However, doing so permanently without a specific (medical?)
| reason is a bit unusual.
| laumars wrote:
| > However, I've zero desire to intentionally alter the
| frequency response so I'm hearing something different
| than the musicians and mixing engineer intended.
|
| I've done a lot of research on this as a recording artist
| myself and what you're saying here is a misunderstood
| meme.
|
| E.g. half the records released before the 80s have been
| remastered to sound different from what the musicians
| originally recorded.
|
| Plus any medium adds colour, vinyl adds warmth to the
| playback, digital formats (unless you're using lossless,
| which most people don't) add artifacting, etc. Songs are
| often written for their preferred medium.
|
| So there isn't really an exact "as intended" but rather a
| broader "Goldilocks zone" (for want a better term). This
| is especially true if you listen to a broad variety of
| genres.
|
| You'll also find that most songs recorded in the last 20
| years will be compressed to hell and back so they sound
| good regardless of how shitty the sound systems are in
| peoples homes and cars. This isn't an artistic decision,
| it's what producers and sound engineers do to make
| records sound good for the lowest common denominator.
| It's also part of the reason why live music sounds better
| (if the gig or club has a half decent sound engineer
| anyway).
|
| > Intentionally having a non-flat frequency response is
| equivalent to adjusting the colour space / colour grading
| of your monitor to not accurately represent colours.
|
| Some content is actually deficient in some spectrums due
| to the limitations of the media or technologies of the
| era. Those limitations were intended to be compensated for
| by speakers that added that colour. There's a reason why
| studio monitors with a flat frequency response are less
| common among rock fans than acoustic speakers (for example).
|
| Lastly it's also worth noting that not everyone's ears
| hear spectrums equally. Our ears don't have a flat
| frequency response, and that curve will differ from person to
| person. Which is why some of the best headphones out
| there are ones that profile your hearing and then perform
| post processing on the music based on your hearing
| profile.
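|
| A toy sketch of that last idea in Python/NumPy -- boost
| bands where a (hypothetical) hearing test says the listener
| is less sensitive; the band and gain numbers are made up:
|
|     import numpy as np
|
|     def apply_profile(samples, rate, band_gains):
|         # band_gains: ((lo_hz, hi_hz), gain) pairs, assumed
|         # to come from a per-listener hearing test
|         spectrum = np.fft.rfft(samples)
|         freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
|         for (lo, hi), gain in band_gains:
|             spectrum[(freqs >= lo) & (freqs < hi)] *= gain
|         return np.fft.irfft(spectrum, n=len(samples))
|
|     # e.g. +3 dB (x1.41) above 8 kHz for high-end loss
|     tone = np.random.randn(48000)  # 1 s of noise at 48 kHz
|     out = apply_profile(tone, 48000, [((8000, 20000), 1.41)])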
| kuschku wrote:
| And that's why you buy the CD release of Peter Gabriel
| albums and listen to those instead ;)
| laumars wrote:
| Already discussed that point: those have been remastered
| for CD and thus sound different to the original recorded
| versions.
|
| If you're a purist like the GP then you wouldn't listen
| to the CD versions. Of course, in practice most people
| are not that much of a purist. Which is why the whole
| meme of "as the artists intended" is largely hypocritical
| posturing.
| ShroudedNight wrote:
| If I hadn't been gifted a pair of beats earbuds, I could
| see myself believing similarly. The ones I was given were
| very nicely built, with tactile components that felt of
| significant quality, as though they were assembled with
| great care. They were also the muddiest, mushiest, and most
| unpleasant listening experience I've had in the last couple
| of decades outside of bad laptop / phone speakers or
| scenarios that used explicitly damaged components. When I
| first got them I thought that I had received a bad pair,
| only to find online that the sound profile was intentional.
|
| They were awful.
| laumars wrote:
| Obviously shit earphones are going to sound shit. That's
| true whether they're bass heavy or not. So it's a sentiment
| that doesn't contradict my point.
|
| My point is having earphones and headphones that can
| offer a deep and rich low end _without_ sacrificing
| sharpness is not a novelty feature. And there are earphones
| and headphones out there that can do that. I know this
| because I've owned plenty over the years. :)
| Zircom wrote:
| I hate motion interpolation with a burning passion and have
| made it into practically a vendetta. I will turn it off
| anywhere I see it by any means necessary, including
| downloading a remote application onto my phone and using the
| IR blaster to turn it off in restaurants, and waking up in
| the middle of the night at friends' houses to sneakily switch
| it off.
| tartoran wrote:
| For whoever is wondering, motion interpolation is also known
| as the soap opera effect on movies, and I agree, it looks
| terrible, but most people don't get it; it doesn't bother
| them at all.
| kuschku wrote:
| For me personally, 24fps is extremely close to being
| unable to perceive motion, and in many movies I
| absolutely can't see the content.
|
| Any pan in a movie is something where my mind absolutely
| is unable to process the motion and I become unable to
| see anything at all. With motion interpolation on, I can
| actually tell what's happening in an action scene.
| copperx wrote:
| According to some neuroscience article that was posted in
| HN recently, some people might perceive reality in "less
| FPS." Not only that, but as people age, the speed also
| goes down. Most people I know cannot discern the
| difference between heavy motion interpolation and it
| being off. In the same way, I remember when people
| weren't able to discern between DVD and BluRay quality in
| 1080p displays. Even today, many people can't see the
| difference between a Retina display and a 1080p monitor,
| which blows my mind.
| vbezhenar wrote:
| I did blind testing on myself and found out that I
| couldn't see a difference between 72 and 144 FPS.
| jiggawatts wrote:
| Taste.
|
| It takes time and experience to develop, and the masses on
| average don't have it. As in, they might have developed taste
| for a few products, but not _most_ products. Hence, the mass-
| market products are aimed at people with no taste, because
| that captures the largest slice of the consumers.
|
| Random examples:
|
| - In A/B tests, the typical person will rate louder music
| as better. Hence, all bars and pubs turn their music up to
| 11, to the point that it's horrendously distorted, causes
| physical pain, and forces everyone to scream at the top of
| their lungs to be heard.
|
| - Sugary, salty and fatty foods are consistently rated by
| typical people as more tasty than foods without them. Hence,
| all fast-food restaurants load their foods up with those
| elements instead of more expensive flavourings such as herbs
| and spices.
|
| - Just look at the typical gaming PC market. RGB LEDs are now
| almost "essential", despite adding nothing material to the
| performance or capability of the system other than a garish
| blinken-light-show. You can't _see_ the gigahertz, but you
| sure can see the LEDs!
|
| - Cars are perceived to be more sporty if they have a loud
| exhaust with a deep note to it. So of course, every "sports"
| car has a literal fake exhaust that's "tuned" to make this
| particular noise.
|
| Etc, etc...
|
| It's all down to bad taste.
| mark-r wrote:
| It's even worse with Harley-Davidson motorcycles. They're
| not just going for low and loud, they have a specific
| profile that they tune their engines for. It will be
| interesting to see what they do if they ever make an
| electric.
| jazzyjackson wrote:
| Harley-Davidson does make electrics, they're very
| expensive, which I guess is another way to be loud.
| ramesh31 wrote:
| > Sugary, salty and fatty foods are consistently rated by
| typical people as more tasty than foods without them.
| Hence, all fast-food restaurants load their foods up with
| those elements instead of more expensive flavourings such
| as herbs and spices.
|
| I love this one. Want to convince someone with an
| unsophisticated palate that you are the greatest chef in
| history? Just start loading everything you make with butter
| and sugar. Salty and sweet === good to most people.
| twofornone wrote:
| You can buy MSG on amazon and instantly make anything
| taste great.
| thaumasiotes wrote:
| You can buy MSG in any store. Why Amazon?
|
| It doesn't have much effect. I make fried rice with and
| without it and can't really tell the difference.
| anonymouse008 wrote:
| Hearing this alongside Steve Jobs' critique of Microsoft,
| "they have no taste", is stunning.
|
| Taking the iPhone as the mass consumer computer, must mean
| that as a computing device the iPhone has very little
| taste...
|
| Which in a sense I can definitely see...
| jiggawatts wrote:
| iPhones are definitely aimed at the more discerning, up-
| market customer. Android meanwhile is for the mass-
| market.
|
| iPhones have four levels of encryption designed to thwart
| the likes of the FBI trying to get data out of your
| confiscated phones. Androids have a checkbox tick that
| basically says "Encryption: Yes".
|
| iPhones have 1000-nit OLED HDR screens that are colour-
| managed and calibrated out of the box, and have Dolby
| Vision HDR system-wide.
|
| Etc, etc...
|
| iPhones are for people that actually care about their
| privacy, aren't blind, and appreciate the "small
| touches". Androids are for people that don't mind
| factory-installed crapware, _as long as it 's cheap_.
| jjeaff wrote:
| You aren't really comparing apples to apples here.
| Android is an open source operating system used by dozens
| of different hardware vendors. Crapware is only installed
| by some vendors. And the iPhone rarely has the best display;
| it usually trades places with a few other Android vendors for
| best camera. As for security, the iPhone usually is the best,
| but it varies with different Android vendors in how well or
| how poorly they implement security.
| jiggawatts wrote:
| The iPhone 13 literally has the best display currently
| available, and more importantly, it's colour managed
| correctly. It is manufactured by Samsung, and they use
| the same panel in their own flagship phone, but they
| don't colour-manage as well or as consistently, making
| the iPhone the overall winner in my book. Other Android
| manufacturers have markedly worse displays in every
| metric.
|
| The fact that you don't appreciate this just reinforces
| my point: you don't happen to have "taste" in phone
| screens. That's okay! I have bad taste in cars, wine,
| sport, and a bunch of other stuff.
| kcb wrote:
| My android phone has a folding screen that allows it to
| double as a tablet. Checkmate.
| sudosysgen wrote:
| Actually, it just doesn't. Firstly Apple doesn't just use
| Samsung, they also use BOE and LG panels, so they'd have
| to be calibrated to the least capable of them.
|
| Unless there is massive unit variance, which is even
| worse.
|
| So much for taste :)
| jazzyjackson wrote:
| wtf is dolby-vision HDR? Sounds like cheap marketing crap
| like "Extra Bass Boost"
|
| I rock an iphone because the SE is cheap and the camera
| is good, if I cared about privacy I wouldn't have a phone
| with always on microphones and cameras...
|
| NSO group's Pegasus was cross-platform, so as far as I'm
| concerned the security point is moot, people buy iphones
| and androids for various reasons, and it's easier to
| judge someone's "upmarketness" by the stickerprice of
| their flagship, not the OS it runs...
| jiggawatts wrote:
| HDR10 is the crap Samsung invented, which just extends
| 8-bit colour to 10-bit colour (from 256 shades of
| intensity to 1024). This is _not enough_ to display
| smooth gradients when going from the blackest blacks to
| the brightest whites that a high-dynamic range (HDR)
| screen is capable of. Hence, it causes visible banding,
| especially in "blue skies" or similar smooth areas of
| slowly changing colour.
|
| Samsung worked around this by applying a post-processing
| filter that smooths out the banding... sometimes. It also
| almost always smooths away fine detail, ruining the 4K
| details. (Similarly, their 8K screens appear _less
| detailed_ than some 4K screens for other but equally
| silly reasons.)
|
| Dolby Vision uses a more optimal allocation of signal
| "bits" to the spectrum of colours and intensities visible
| to the human eye. The ideal is that each colour and each
| shade would be perfectly evenly distributed, so that
| "512" would be exactly half as _perceptually_ bright as
| "1024", etc... The Dolby Vision encoding does this very
| nearly perfectly, eliminating visible banding without
| having to hide them by smudging the decoded picture. This
| optimal colour-volume encoding also means that transforms
| like scaling or brightness changes don't introduce
| colour-shifts or relative brightness shifts.
|
| If you've never seen a DV video taken with an iPhone Pro
| 13 displayed on its OLED, you just don't know what you're
| missing. Go to an Apple store and play with one for a few
| minutes.
|
| But seriously, companies like Samsung like to shave 50
| cents off their flagship products by not paying DV their
| licensing fees. They figure that cutting corners like
| this doesn't matter, because most customers have no taste
| in image quality anyway, and just want BRIGHTER! COLORS!
| and nothing else.
|
| They're right.
|
| You don't care, and you're happy to save 50c on a $10K
| television or a $1K mobile phone.
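|
| For the curious, the PQ transfer curve (SMPTE ST 2084) that
| Dolby Vision is built on is public; a Python sketch showing
| how it spends code values where the eye needs them (the 0.1
| and 0.2 nit probe values are arbitrary):
|
|     def pq_encode(nits):
|         # SMPTE ST 2084 inverse EOTF, 0..10,000 nits -> 0..1
|         m1, m2 = 2610 / 16384, 2523 / 4096 * 128
|         c1 = 3424 / 4096
|         c2, c3 = 2413 / 4096 * 32, 2392 / 4096 * 32
|         y = (nits / 10000) ** m1
|         return ((c1 + c2 * y) / (1 + c3 * y)) ** m2
|
|     # 10-bit codes spent between 0.1 and 0.2 nits:
|     pq = (pq_encode(0.2) - pq_encode(0.1)) * 1023  # ~21
|     lin = (0.2 - 0.1) / 10000 * 1023               # ~0.01
|     print(pq, lin)  # PQ gives shadows ~2000x more codes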
| fomine3 wrote:
| Don't group $50 device and $2000 device as "Android".
| jiggawatts wrote:
| You can buy a $2000 Android loaded with crapware.
| AnyTimeTraveler wrote:
| You can also buy a $50 Android without.
|
| What's your point?
| musicale wrote:
| > Sugary, salty and fatty foods are consistently rated by
| typical people as more tasty than foods without them
|
| Sweet, salty and/or fatty tastes form a pretty solid basis
| for many delicious snacks/hors d'oeuvres/desserts -
| highbrow or lowbrow - though I personally like tangy as
| well as textures like crunchy, creamy, chewy, spongy; and
| sometimes other tastes like bitter, savory, or piquant as
| well. These are tastes that humans (and other creatures)
| have developed and retained over thousands of years.
|
| Omitting sweet/salty/creamy greatly reduces the scope of
| cuisine.
| RuggedPineapple wrote:
| Or they just have different tastes then you do, which is a
| far cry from having 'no taste'. People consistently prefer
| and rate headphones with more bass as more appealing, for
| example. That's why consumer brands are bass-heavy. It
| matches the taste of the market. If you need a flat audio
| profile where the mids and highs and bass are all at the
| same level you have to pick up a pair of studio monitors.
| jjeaff wrote:
| There is nothing wrong with subjective taste.
| However I think there is some level of objectivity to
| many things that can be applied to an extent. For
| example, if there is so much bass that much of the other
| frequencies are not audible, then I think it is an
| objectively bad setup. Or if your food is prepared with
| so much sugar/salt/fat/seasonings that you can't even
| taste the main ingredient, then it's objectively not very
| good (or at the very least, a waste of the main
| ingredient).
| Ma8ee wrote:
| > If you need a flat audio profile where the mids and
| highs and bass are all at the same level you have to pick
| up a pair of studio monitors.
|
| Which is what many of us do.
| Philip-J-Fry wrote:
| Gimmicks? Something that a company needs to invent to keep
| selling new versions of their product.
| Gigachad wrote:
| For a camera it would just be referred to as post processing.
| You can even see some of this going on when you open the
| photo immediately after taking it and see it snap in to high
| quality later. Or the difference between the live viewfinder
| and the final image.
| formerly_proven wrote:
| The pendulum seems to be coming back on that one with the
| "Filmmaker Mode" and all that.
| mc32 wrote:
| Reality isn't good enough in the time of Instagram.
|
| People want perfect pictures to reflect their mind's eye.
|
| In a way this is the way in for a metaverse. Reality isn't what
| they seek. They seek an alternate state that is a mix of
| reality and fantasy.
| ShroudedNight wrote:
| Hypermediocrity in the flesh. I expect the depressing outcome
| for most is to discover that despite having traded our flawed
| execution for that of a computer, most of us can't even
| conceive of an existence masterful enough to be worthy of
| accumulating the abnormally high social status we seem wired
| to crave.
| rimliu wrote:
| What is "real"? Do you count infrared? Ultraviolet? What if you
| are color blind? For me a photo is how I see it, not some
| "representation of reality" whatever that reality may be.
| m-p-3 wrote:
| I'd hate to see a photo altered to the point where it could
| have a significant outcome in a trial. Imagine if the ML
| improvement led to a photo where something shows up that
| wasn't there or vice-versa.
|
| How can you trust the picture taken if it might not reflect
| reality?
| thaumasiotes wrote:
| > how the sky looks clearer than it really did
|
| See also: https://www.axios.com/san-francisco-orange-sky-
| smartphone-40...
|
| You should be allowed to photograph what reality looks like if
| you want to.
| SturgeonsLaw wrote:
| As recently as last night, I was considering selling my DSLR
| and lenses since the quality of my phone camera is just so
| good, but this has changed my mind. There is something nice
| about it taking a shot verbatim and letting me decide how to
| postprocess the RAW.
|
| Plus I'd only get a few hundred bucks for it, and the tactile
| pleasure of a big, heavy ka-chink when the shutter button is
| pressed is worth more than that for me :)
| mrtksn wrote:
| It simply means that we no longer have measuring instruments
| that are used to draw an accurate representation of the scene,
| but seed samplers that are used to generate a representation
| of the scene, not necessarily accurately but artistically.
| Accuracy used to be the metric, but someone figured out that
| most people are not after accuracy.
|
| IMHO it's not fundamentally evil, it's just that it's not the
| thing we are used to. It wouldn't have caused confusion if
| they had used some other word instead of "photograph".
| user-the-name wrote:
| Cameras have never, ever been "accurate". It is not
| technologically possible to create a photograph that is
| "accurate". Cameras have always made big tradeoffs to output
| something that actually looks good to humans.
| ixfo wrote:
| Well, no, it's not _perfectly_ possible to recreate a
| singular human vision system and capture and reproduce
| imagery to match that.
|
| But actually, we have lots of excellent, well-researched
| and proven standards for accuracy in imaging. Cameras
| generally target those standards, certainly professional
| ones. Many cameras - quite clearly - can produce very
| accurate photographs.
|
| The more worrying trend is that of pervasive post-
| processing, where we depart from reality and entertain
| aesthetics.
| user-the-name wrote:
| > Well, no, it's not perfectly possible to recreate a
| singular human vision system and capture and reproduce
| imagery to match that
|
| How?
| bigyikes wrote:
| Sure, but there is a major difference between color
| correction and, say, replacing a head with a leaf.
|
| Historically cameras have merely altered the style of
| images. These days smartphone cameras are altering the
| _content_ of images.
| mr_toad wrote:
| Most modern cameras have noise reduction, stabilisation,
| the ability to combine multiple exposures and track
| moving targets. They might not push the envelope as much
| as a cellphone, but they're only a few years behind.
| Zak wrote:
| For most phones, there's the possibility of using a third-party
| camera app with less automation, and even capturing in a raw
| format (almost always dng). Open Camera for Android is an open
| source option.
|
| Of course getting good results with this approach requires the
| user to have more knowledge of and practice at photography.
|
| https://opencamera.org.uk/index.html
| mynameisash wrote:
| Fully agree. My wife and I went out on a date about a month
| ago, and during it, she took a selfie of the two of us. There
| must have been some filter on by default because our faces
| looked perfectly lit, our skin completely blemish-free, no
| smile lines, etc. It was a great picture, but I remarked
| immediately that it didn't look real. And I don't want that --
| it's not us but an idealized, optimized version of us.
|
| I similarly have mixed feelings about what I've seen lately of
| the deep learning that 'restores' very old images to incredible
| quality. But that quality is fake. I'm sure there's a tug at
| the heartstrings to see a crisp image of your deceased father
| from his high school days, but to me that seems a bit
| revisionist. I don't know. I guess I'm just uneasy with the
| idea of us editing our lives so readily.
| lathiat wrote:
| Lots of phones have a selfie beautification mode now.
|
| Even Apple appeared to add one in the iPhone XS/iOS 12, but
| it was apparently an issue with Smart HDR and was rolled back.
| But many Android phones advertise it as a feature and it's
| something many filters etc do.
|
| It's also possible some HDR type functionality causes this on
| other implementations.
| hazza_n_dazza wrote:
| My kid brought home a picture of them taken at school by the
| school. It didn't look like her. The shiny smiley filters
| were not real happiness. Cameras replacing happiness that is
| there with happiness that isn't there is quite a delusion.
| eclipxe wrote:
| Are you sure it was a filter or just a lower-quality front
| facing camera that didn't capture details in low light like
| blemishes and smile lines? In an effort to reduce noise,
| sometimes a camera over-smoothes the image - not as a way to
| make your imperfections disappear, but to make the noise from
| the high ISO shot disappear.
| jeroenhd wrote:
| My phone has a button in the camera app labeled "AI" and it
| does exactly this. Even in low light conditions you can see
| the differences between denoised and smoothed skin.
|
| It's also clearly optimized for Chinese faces, which makes
| for some comedic side effects sometimes when it tries to
| apply what seems to be the beauty standard in China to my
| very much non-Asian face structure.
|
| Sadly, there's no differentiation between the stupid face
| filter and the landscape-fixing AI. I like the AI
| processing for a lot of static scenes where there are no
| people around, because the small camera sensor simply can't
| catch the necessary details, but I always forget to turn it
| on because of the stupid face filter that comes with it.
| travisgriggs wrote:
| First we have the uncanny valley effect from trying to
| imitate real life too much. It will be interesting to see if
| the effect shows up on the other side, modifying away from
| real-life imagery, as well.
| noizejoy wrote:
| To be fair, human memory also liberally edits.
|
| And historical representations also undergo changes, like the
| colour fading on old pictures, paintings and statues. And
| that's in addition to all of the issues in capturing accurate
| colours and other details in the first place. Add a bit of
| optical illusion to at least some imagery and the entire
| question of historical accuracy becomes very messy very fast.
|
| A prime example is astrophotography, where most of the well-
| known imagery isn't, and may never be, seen like that by even
| future evolved human eyes.
|
| Photos are limited representations, just like they've always
| been.
|
| But it's understandable that different individuals would
| prefer differently prioritized representations. And maybe
| that's the next generation of tools. Give more choice, more
| facial wrinkles or fewer. More lighting and colour
| enhancements or less. etc.
| madeofpalk wrote:
| I'm curious what model phone you have. _In the past_ Samsung,
| Google Pixel, and Apple phones all had their own approaches
| to computational photography and would all take photos in
| their own "style". Samsung would priories for vibrancy and
| clear faces, Apple would try for "most correct" but often got
| quite flat photos, and the Pixel managed to do with a
| middleground.
| tobyjsullivan wrote:
| This brings new meaning to the old quote, "history is written
| by victors." Today, the victors are Apple, Google, etc., and
| they are writing their own version of history as it's
| recorded...
| hughrr wrote:
| Nope. Give me a 40 year old East German camera and a roll
| of Ilford and I'm still perfectly capable of corrupting
| reality intentionally.
| mc32 wrote:
| They are the means, but the "victors" are people's vanity.
| If people didn't want blemish free pictures, they wouldn't
| be offered --but airbrushing was a thing and old fashioned
| paintings also tended to skip the blemishes --unlike
| mirrors. So, today, this continues albeit more perfect and
| automated for our consumption.
| noizejoy wrote:
| Bad vision airbrushes even mirrors. :-)
| mitchcohentoo wrote:
| Mystery solved!
| https://twitter.com/mitchcohen/status/1476951534160257026
| space_rock wrote:
| So now we can't trust our photos not to be fake? Let's get
| rid of the ML. Optional post-processing only. This is why an
| iPhone photo won't be able to be used in court.
| gfykvfyxgc wrote:
| Apple would tell you this is actually the photo you wanted to
| take.
| literallyaduck wrote:
| Remember the Rittenhouse trial and the big to-do about the video
| being enhanced when zoomed? Digital evidence and photos are
| suspicious for the purpose of evidence. Not commenting on the
| results of the trial.
|
| Sidebar not specific to that trial: slow motion and slowed
| video shouldn't be shown to jurors because it creates the
| illusion of having more time to think. Everything in real
| life happens at one speed; you can't back it up, slow it
| down, or armchair-QB it, then decide if an action was
| appropriate.
| mikewhy wrote:
| > the Rittenhouse trial and the big to do about the video being
| enhanced when zoomed
|
| I keep thinking about this, and was already when the case was
| going on. If linear interpolated zoom isn't allowed, no iPhone
| photo after a certain time should be.
| mateo1 wrote:
| Well, it appears that we are rapidly approaching the point in
| time when these companies are no longer going to be able to
| deceive consumers like this. After that there will be an off
| button and perhaps legally mandated metadata, although I'm
| sure forensics people can already tell when AI trickery has
| been applied.
| wmf wrote:
| _If linear interpolated zoom isn 't allowed..._
|
| Think about analog/optical/chemical photography. In the old
| days did juries look at camera negatives with a loupe? Of
| course not, they looked at enlargements. What "algorithm"
| does an optical enlarger use?
| tagoregrtst wrote:
| Continuously linear implemented with an analog device vs.
| digital "linear" (i.e., watch your floats! Take care of
| quantization error! Have you kept the colors separated
| according to the Bayer pattern?).
|
| No one has a problem with a mathematically perfect linear
| transformation, and film enlargers come very close to that
| ideal (yes they distort, but in a very obvious way and by
| degrading detail not adding detail that isn't there)
|
| The analog picture is much harder to doctor. More graceful
| artifacts in blow-up (grains are random, versus a sharp
| grid). Much more detail is recorded (by virtue of the size
| of the sensor and therefore the diffraction limit; sensor
| resolution is not too useful).
| sudosysgen wrote:
| Film development is actually VERY, VERY far from linear.
|
| Because of film grain, it's not continuous either, but
| actually discrete too.
| tagoregrtst wrote:
| Film development is very non-linear in the exposure (call
| it the z dimension), not in the spatial dimensions (x and
| y). That is to say it might exaggerate or diminish a
| gradient that was already there, but not create one from
| nothing.
|
| The grain is of random size and randomly distributed, which
| cancels out a lot of the effects of discretization (e.g. you
| won't get patterns due to Nyquist sampling error).
| sudosysgen wrote:
| It's non-linear in the x and y dimensions because of
| grain.
|
| Any camera with a good antialiasing filter will also have
| little to no discretisation error.
| tagoregrtst wrote:
| It's non-linear because enlarger optics are non-linear.
|
| The image of a digital or a film photo is a mosaic. This
| mosaic can be mathematically enlarged into a large
| mosaic. If you enlarge far enough you'll see the
| individual tiles. This is a linear operation no matter
| the shape of the tile.
|
| Digital photos do not, however, just make the tiles
| larger. They could, but it's not done.
|
| Even before enlarging, they interpolate between tiles to
| recover color (each pixel in the sensor is
| monochromatic).
|
| When a picture is displayed, the screen resolution is not
| that of the photo, so an algorithm has to fit one grid
| into another. And this is before going into super-resolution
| techniques.
|
| But _none_ of this would matter if we had a standard,
| open source, way to utilize digital photos in court.
| Until then Mr. lawyer can get himself an expert to
| testify to the validity of each and every still he wants
| to show the court.
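|
| A minimal illustration of that interpolation step in
| Python/NumPy -- a naive bilinear estimate of the green
| channel from an RGGB mosaic; real demosaicing is far more
| elaborate than this:
|
|     import numpy as np
|     from scipy.ndimage import convolve
|
|     h, w = 6, 6
|     bayer = np.random.rand(h, w)  # stand-in raw sensor data
|
|     # In RGGB, green sits on a checkerboard of positions
|     green_mask = np.zeros((h, w), bool)
|     green_mask[0::2, 1::2] = True
|     green_mask[1::2, 0::2] = True
|
|     green = np.where(green_mask, bayer, 0.0)
|     k = np.array([[0, .25, 0], [.25, 1, .25], [0, .25, 0]])
|
|     # At non-green sites, estimate green as the mean of the
|     # four green neighbours; green sites keep their value.
|     est = convolve(green, k) / convolve(green_mask * 1.0, k)
|     full_green = np.where(green_mask, bayer, est)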
| dagmx wrote:
| Film development is actually quite a bit more subjective.
| There's a much larger variance in film type, and the
| chemical process.
|
| Perhaps it's harder to doctor, but it's also not
| necessarily truer either.
|
| With regards to sensor size and detail recorded...well
| that depends. Are you assuming 35mm sensors? Because
| people shot 8mm and 16mm too back in the day. That's not
| far off from smaller sensors today. Are we also
| accounting for film sensitivity? Because digital sensors
| have far eclipsed the sensitivity range of most common
| film types now, so would be more likely to resolve image
| data.
|
| It's not so cut and dry.
| tagoregrtst wrote:
| Sensitivity of digital is amazing. You can't really get
| past ISO 400 on film without large compromises.
|
| But, as for size, photo cameras sporting film smaller
| than 35mm were rare. Yes, Kodak had the Advantix (APS)
| system and some other weird cameras here and there, but
| the vast majority of consumer pictures were taken on 35mm.
|
| As to the subjectivity of film, as I mentioned in the
| other post, most of the subjectivity came from what I
| called the "z" dimension, i.e exposure. There was little
| subjectivity about the enlargement itself.
|
| That is to say, the subjectivity was largely limited to
| the contrast and brightness sliders of today. Anything
| else is far more difficult to do with film.
|
| There is another advantage for digital: cost. Video was
| much rarer with film, and the video we're talking
| about certainly would not exist.
|
| But I think that's the greatest advantage of film - it
| contains within it an inherent protection of the public's
| privacy completely absent in our society today
| djrogers wrote:
| > But, as for size, photo cameras sporting film smaller
| than 35 were rare.
|
| Strong disagree here. 126 film is what brought color
| photography to the mass market, and others like the Kodak
| Disc were wildly popular among point-n-shoot users.
|
| 35mm was standard for pro photogs, but consumers went for
| convenience.
| djrogers wrote:
| I should also point out that while 126 popularized color
| photography, a decade later 110 came along and largely
| dominated consumer photography until the digital
| revolution.
| tagoregrtst wrote:
| 126? 110 (other comment)? By that token betamax was a
| successful format in 1975 before VHS launched.
|
| 35mm was the top selling film since the 1960s until
| digital took over.
|
| You went to any drugstore and you found 35mm. Maybe a box
| or two of APS. Maybe some medium format. But you could
| find 35mm in every drugstore, gas station, street vendor
| in a tourist trap without fail.
|
| Professionals used 35mm because it was smaller than MF
| and for many things good enough. Otherwise it was
| considered consumer grade.
| dagmx wrote:
| Specifically, this thread originated with the Kyle
| Rittenhouse trial which would be video. So for the
| average person, it would be 8/16mm.
|
| Even for stills, 110/126 was very common.
|
| As for your last point, it only ensured privacy from the
| poor. Privacy was always invaded by those with means like
| paparazzi.
|
| There's also the flip side that the prevalence of digital
| has let people capture pivotal moments they wouldn't have
| been able to otherwise, including generation defining
| moments like the murder of George Floyd.
| tagoregrtst wrote:
| Privacy for the poor, not from. Paparazzi have never been
| outside my door. They were taking pictures of rich people
| for gossip mags (i.e. poor consumers).
|
| Now we spy on poor people, use AI to analyze the photos
| at scale, while the rich got anti-paparazzi laws put in
| places like EU and CA.
|
| In this context, the Rittenhouse video would not exist
| and it would be better if it didn't. The prosecution,
| arguably, should get disbarred for the shenanigans they
| pulled (no discovery and dumping it at the last moment,
| giving a modified version, lying about the provenance).
|
| As to George Floyd, cameras have done much more to erode
| our civil liberties than they have put bad cops away.
| boublepop wrote:
| I think it's OK to select jurors based on some basic
| abilities. I don't know if this is considered discrimination
| based on IQ in the US. But if a grown person has the mind of
| a 5-year-old or doesn't know how to walk up stairs, I think
| it's fair to say "you shouldn't be on a jury". Likewise, if
| a person doesn't understand the concept of zooming an image
| or slow motion, then I think it's quite fair to exclude them
| from jury duty.
| literallyaduck wrote:
| Understand concepts and removing bias are totally different.
|
| Consider:
|
| https://www.latimes.com/science/sciencenow/la-sci-sn-slow-
| mo...
|
| https://law.temple.edu/aer/2021/02/08/selectively-
| trusting-s...
|
| It is also clear that digital photography from phones is far
| from an accurate representation of events.
| aimor wrote:
| I had a problem where letting iPhones automatically touch up
| portraits taken on a dedicated camera would result in black blobs
| being inserted over the whites of the subject's eyes. That's just
| to say that these problems are common and to watch out for it on
| your own photos. When you know it can happen in subtle ways
| you'll pay more attention and you'll start to see big errors in
| images you thought were fine at first glance.
| iandanforth wrote:
| Three dots instead of four.
| alboy wrote:
| I take it the processing model was trained on a dataset of
| Magritte's paintings?
| bitwize wrote:
| My guess is this is some sort of AI driven compression. The
| compressor presented the neural network with images of the
| woman's head and some background leaves and said: "Corporate
| needs you to tell the difference between these two pictures." The
| neural network happily replied, "They're the same picture". And
| so the compressor discarded the woman's head from the data and
| replaced it with the leaves, which have unerringly been found to
| be equivalent.
| cdaringe wrote:
| Man takes photo of leaf, surprised to see leaf in photo
| avrionov wrote:
| I've been thinking a lot about this recently. Mobile phone
| photography went in a direction that makes it quite
| questionable. ML algorithms can't be tested completely; there
| are going to be many corner cases where results like the
| above will be produced. The algorithms are trying to
| compensate for the limitations of the small sensors, limited
| lenses and hand movement, but we end up getting images that
| are completely artificial in many cases. Multiple images are
| combined with data which is just made up by the ML models.
|
| Better hardware (sensors) could solve some of this I hope.
| rurp wrote:
| I recently got a Pixel and was disappointed to see that the stock
| camera app has no way to control the focus manually. I guess
| Google is so impressed with their post-processing they think
| users won't need any control.
|
| Turns out, their software isn't that great at focusing in many of
| the photos I take, so I need a different camera app. My only
| requirements are: 1. Manual focus 2. Wide angle lens support
|
| I thought a replacement would be trivial to find but I've tried a
| bunch of apps at this point and haven't found a single one that
| checks both boxes.
|
| Can anyone recommend a basic android camera? Preferably one that
| doesn't distort the world too much or replace faces with leaves.
|
| I've been using Open Camera which is pretty great, but doesn't
| support wide angle lens shots as far as I can tell.
| lucb1e wrote:
| > Can anyone recommend a basic android camera? Preferably one
| that doesn't distort the world too much or replace faces with
| leaves.
|
| Everyone's been complaining about the Fairphone not doing
| enough image processing and giving you basically the raw sensor
| output, making the pictures look bland compared to bigger-brand
| phones with worse sensors. So there's that option if you want a
| true-to-sensor picture and they also have a great open source
| community so you can also know what effects the camera app is
| applying.
|
| Or install an open source camera on any android and at least
| that aspect of the picture-taking chain is covered.
|
| If you want a terrible UX but a reasonable amount of
| features, there's Open Camera: https://opencamera.org.uk/ |
| https://f-droid.org/en/packages/net.sourceforge.opencamera/
|
| If you prefer better UX with just basic features (point and
| shoot, no knobs to turn), Simple Mobile Tools has an app for
| that: http://simplemobiletools.com/ |
| https://f-droid.org/en/packages/com.simplemobiletools.camera...
| Zak wrote:
| Does Open Camera not show the switch multi-camera button on
| your phone? From https://opencamera.org.uk/help.html
|
| _Switch multi-camera icon - This icon only shows on devices
| with more than one front and /or back cameras, and allows you
| to switch between those cameras. For example, a device might
| have two back cameras, one standard and one ultra-wide, this
| icon will switch between the standard and ultra-wide camera. If
| Settings/On screen GUI/"Multiple cameras icon" is disabled,
| then this icon will not show; instead the "Switch camera" icon
| can by used to cycle through all the cameras. Note that some
| devices do not allow third party applications to access their
| multiple cameras, in which case Open Camera isn't able to use
| them._
| user_7832 wrote:
| Not the person you were replying to but I also have a Pixel 5
| (still on Android 11). I didn't realize it until the previous
| comment but I also cannot access the wide angle camera. The
| button simply doesn't exist. Other 3rd party apps like
| Lightroom also couldn't find the 2nd camera. But a gcam mod
| finds it unsurprisingly.
|
| To answer OP's grievance though, the wide angle is fixed-
| focus, so they could use gcam for the wide angle and Open
| Camera for the regular lens.
| colechristensen wrote:
| Filmic pro is pretty good and supports manual focus and
| selecting which camera to use
| TedShiller wrote:
| > Can anyone recommend a basic android camera?
|
| I can't, unfortunately
| layer8 wrote:
| I wouldn't completely exclude the possibility that a random bit
| flip caused the ML processing to go haywire.
| 1_player wrote:
| The probability of a bit flip enabling the leaf-replacer logic
| instead of causing a weird heisenbug and just crashing the
| camera app is astronomically low.
| layer8 wrote:
| Just crashing the camera app won't end up on Twitter and HN
| though.
| DSMan195276 wrote:
| Yeah but how many iPhone pictures are taken every day?
|
| I'm not saying I'm convinced, but even something with
| 1/1,000,000,000 odds isn't really out of the question for an
| action that must happen at least millions of times a day.
| 1_player wrote:
| Your probability estimate is off by a dozen orders of
| magnitude. As a back of the napkin estimate, an iPhone with
| 8GB has 64 billion bits that can be flipped. One bit flip
| per week per user would already be extremely bad, as it
| would mean on average one random crash or case of data
| corruption per week.
|
| If you suppose the bit flip happens in the CPU registers,
| the number of state changes on a modern CPU is so
| incredibly huge that the chance a random bit flip does
| anything other than cause a memory load from a wrong
| address or a crash is, again, much lower than 1/1e9 odds,
| by a lot.
|
| And as explained in my other comment, you'd have to show
| that this behaviour can be expressed in one bit. Is there a
| bit anywhere in the iOS hardware and software that by
| changing from 0 to 1 can run a code path that replaces your
| face with a leaf? I doubt that.
|
| I stand by Occam's razor. This happens because it's a bug
| introduced by its programmers rather than a random gamma
| ray that turned a bit to 1.
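|
| The same back-of-the-napkin argument in Python, with loudly
| assumed inputs (every number below is a guess, for scale
| only):
|
|     photos_per_day = 5e9      # order of daily iPhone photos
|     p_flip_hits_photo = 1e-9  # flip lands in this capture
|     p_flip_makes_leaf = 1e-6  # flip lands somewhere that
|                               # could swap a face for a leaf
|
|     expected = (photos_per_day * p_flip_hits_photo
|                 * p_flip_makes_leaf)
|     print(expected)  # ~5e-6 per day: a bug is far likelier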
| dannyw wrote:
| Also, most CPUs have ECC on registers.
| 1_player wrote:
| I didn't know that, but it makes a lot of sense.
| Gigachad wrote:
| There are billions of iPhone users taking photos every day.
| Astronomically rare things happen regularly.
| 1_player wrote:
| Yes, bit flips are quite rare already, in the grand scale
| of things. So a bit flip that causes that specific
| behaviour is exponentially rarer.
|
| And you have to prove that there actually is one bit that,
| when changed, causes faces to be replaced by background
| objects, in this case leaves.
| vorpalhex wrote:
| Is it possible to opt in to this feature? I would love it if any
| unauthorized photos of me had me replaced with background
| scenery.
| NavinF wrote:
| Sounds like you need to watch the White Christmas episode of
| Black Mirror.
| davidhariri wrote:
| This title should be updated. The author has since figured out
| that it was in fact a leaf obstructing the person's face.
| [deleted]
| dt2m wrote:
| I've noticed some sort of image post-processing on the newer
| iPhones that removes noise and graininess, and instead adds this
| fake smoothness to all pictures. Haven't found a way to disable
| it, save for shooting in RAW, which is impractical due to file
| size.
|
| Really disappointed that this seems to be a forced setting.
| jcun4128 wrote:
| I have a cheaper phone that has this; it makes your face
| look weird, it's too smooth.
|
| LG Stylo 6 has "AI cam"
| berkut wrote:
| I've had this (very aggressive de-noising, I think it is;
| it's at least almost identical) since I got my iPhone 6S in
| 2015: basically if you look at 1:1 (i.e. on a computer, as
| opposed to the small screen of the phone), it almost looks
| like a watercolour painting, due to how aggressive it is.
|
| You can pretty much see it in almost all iPhone camera review
| sample images (and that of phones from other manufacturers).
|
| Even in photos taken in direct bright sunlight!
|
| I imagine it has an added side 'benefit' (due to the lack of
| noise/grain) of decreasing the images' sizes after compression.
| dannyw wrote:
| That's exactly why I switched to a Pixel.
| [deleted]
| rubatuga wrote:
| I sometimes use the NightCap app for photos, and it doesn't
| have that AI bullshit.
| ezconnect wrote:
| This is worse than the Huawei moon scandal.
| warning26 wrote:
| Maybe the person really _is_ leaves, and we're all just blind
| to the truth
| ineedasername wrote:
| Just like the fnords.
| Lamad123 wrote:
| agree
| pvaldes wrote:
| <code for turning more people into sleaves of the iphone starts
| here...>
| resters wrote:
| It was actually a leaf from a foreground tree, not an image
| processing artifact:
|
| https://twitter.com/mitchcohen/status/1476951534160257026
| fxtentacle wrote:
| That looks to me like they are using deep learning with a CNN
| for denoising. NVIDIA OptiX can produce similar artifacts.
|
| However, it appears they forgot to add a loss term to penalize if
| the source and the denoised result image turn out too different.
| NVIDIA's denoiser has user-configurable parameters for this
| trade-off.
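|
| A sketch of the kind of fidelity term being described, in
| PyTorch (hypothetical names; not Apple's or NVIDIA's actual
| training code):
|
|     import torch.nn.functional as F
|
|     def training_loss(denoised, clean, noisy, fid_weight=0.1):
|         # main objective: match the clean reference
|         recon = F.mse_loss(denoised, clean)
|         # fidelity: penalise drifting too far from what the
|         # sensor saw, which suppresses hallucinated detail
|         fidelity = F.mse_loss(denoised, noisy)
|         return recon + fid_weight * fidelity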
| ladberg wrote:
| I think it would be impossible to train the model in the first
| place without that loss term.
| osivertsson wrote:
| I sometimes hang out on mountainbike forums.
|
| A user posts that despite a new chain and setting up their
| rear derailleur according to instructions, gear changes are
| clunky or the chain skips when applying power on the pedals.
|
| The usual reply from the forum members is then to send a photo of
| the cassette since excessive wear there leads to these problems.
|
| This used to clearly show that the user needed to replace their
| worn cassette too.
|
| Lately these photos of cassettes can look all mangled, like ML
| tried to "enhance" mountains or the back of an alligator or some
| other structure from nature onto the cassette.
|
| This has to be a problem in all sorts of technical trouble-
| shooting.
| PragmaticPulp wrote:
| I had a 12-speed cassette nearby so I photographed it several
| times (iPhone) and I can't see any problems.
|
| Is it possible you're seeing compression artifacts from the
| forum software? Do you have an example?
| internet2000 wrote:
| Focusing on the wrong thing? That's not a ML issue.
| mc32 wrote:
| They need to offer an option for a "raw" image, as they used to
| be called -- minimally processed and not compressed.
| CarVac wrote:
| The problem with phone raws is that either:
|
| 1. They're not raw, they're aligned and merged and denoised
| and sharpened just like this iPhone photo (they're just
| linear so you can adjust white balance and recover shadows,
| working with them the way you could a real raw file)
|
| or
|
| 2. They look like total garbage, noisy as heck and horribly
| soft, because the sensors are tiny and the lenses are
| simultaneously limited by diffraction and refractive error.
| kevincox wrote:
| The raw mode for these wouldn't be a single image. It would
| be a video with lots of low-quality frames. Then your "raw
| processing" would be deciding how to align and Marge the
| frames, which to drop...
|
| I think the idea that raw is a 2d array of pixels is
| outdated by the way modern cameras, especially smartphone
| cameras work.
| maxerickson wrote:
| They just need to offer an option without aggressive
| processing, they don't need to try to make it competitive
| with dedicated cameras.
| Razengan wrote:
| iOS does
| jakear wrote:
| How do I get this? Sick of seeing oversaturated landscapes.
| Edit: why is this downvoted? honest question... I'm on iOS
| and didn't see anything in settings.
| mc32 wrote:
| Some models of iPhone, the "PROs" [1], support something
| they call "ProRAW", which isn't actual RAW images: "Apple
| ProRAW combines the information of a standard RAW format
| along with iPhone image processing, which gives you more
| flexibility when editing the exposure, color, and white
| balance in your photo."
|
| So it's better than JPEG but not as good as RAW.
|
| Anyhow, apparently Apple supports quasi-raw at the system
| level, but it doesn't always get exposed to the user. 3rd
| party apps can take advantage of this even for non-"PRO"
| Apple telephones.
|
| [1]https://support.apple.com/en-us/HT211965
| gnicholas wrote:
| Lots of details here. [1] Basically, things changed when
| Apple released the iPhone 12. Earlier phones have regular
| RAW, and later phones have ProRAW, which is processed. It
| sounds like you can still get access to the regular RAW
| files on newer phones, but it's not simple.
|
| 1: https://nocamerabag.com/blog/shoot-raw-iphone
| jagger27 wrote:
| If you're serious about photography on iPhone you should
| get Halide.
| hu3 wrote:
| Android does
| wrboyce wrote:
| I'd be curious to see this, could you link to some examples?
| Cheers.
| hughrr wrote:
| Well it's in telephoto and 1/121 exposure so the photographer was
| probably wobbling around like mad when it was taken and the
| overlay and computational image stuff got confused.
|
| I'm fine with this. I use a mini tripod with my 13 pro on
| telephoto. Back in the old days this would just look like ass
| instead.
| Dah00n wrote:
| I would rate any camera that does this as 1/100. It cannot
| possibly be a worse camera. How anyone that has an interest in
| photography can see swapping a head with a leaf as just fine is
| beyond me. It's no better than using Snapchat-like filters and
| calling it raw. It is utterly broken and should be removed
| ASAP from all iPhones.
| dntrkv wrote:
| Are you suggesting that if you have a product that performs
| better than the previous version 99.999999% of the time, but
| that 0.000001% of the time it performs terribly, in your
| opinion, the new version is worse?
| hughrr wrote:
| I've taken about 8000 photos from iPhone 6 to 13pro and
| sifted through them all manually within a day or so of taking
| them and haven't seen any anomalous things yet.
|
| The error margin is tiny and the benefits to the output are
| huge. My brain has more trouble accurately portraying things.
| This is not much of a problem.
|
| Reality is really transient and inconsistent anyway.
| jlnthws wrote:
| Yes, that's the correct explanation: 1/120 speed for a 220mm-
| equivalent focal length is far too slow, by at least 1 EV. The
| stabilization is probably not on par with big pro lenses and
| SLRs, so you might offset 1 more EV. Now the way the AI is
| fixing this seems obviously wrong, but it's "best effort". I'm
| guessing it has taken several pictures and merged them. This
| can't be right 100% of the time. Maybe they should add a "no
| smart self-similarity merges" option; I would use it most of
| the time.
| root_axis wrote:
| Why does everyone automatically assume this explanation is
| correct? Based on the photo, it looks to me like a real leaf in
| the foreground, probably having fallen at the perfect time to
| create this photo. I would be curious to hear Apple's explanation
| on this...
| vmception wrote:
| > Why does everyone automatically assume this explanation is
| correct?
|
| because they want to. That's how the internet works right now:
| it mirrors your greatest gripe or nightmare, which is exactly
| when you should have the most skepticism.
| krapp wrote:
| Except the leaf isn't so close to the foreground that it
| completely obscures the person's head, and there is clearly
| other distortion/artifacting in the image. Something more
| than that is going on.
|
| Useful skepticism actually takes evidence into account,
| rather than dismissing claims at a glance, or because "thats
| how the internet works right now."
| vmception wrote:
| https://twitter.com/mitchcohen/status/1476951534160257026
| krapp wrote:
| Yep, fair enough I have to eat those words.
|
| Although to be fair(er), people didn't believe otherwise
| because, as you insisted, they were buying mindlessly
| into some internet hype/rage generator "mirroring their
| latest gripe or nightmare." The original photo did look
| odd and it generated a lot of interesting discussion.
| Honestly, it still does to me.
|
| While that reasoning was (kind of) correct (the leaf
| didn't fall at just the right time, it was attached to a
| branch,) both you and root_axis above assumed people were
| being driven by stupidity rather than curiosity, which is
| still an attitude we could use a lot less of.
| vmception wrote:
| If it must be said, I do think there is some merit to
| what you are saying.
|
| Now for my defensive response: stupidity of the
| respondents isn't my assumption, it's their lack of
| considering other possibilities. User error by OP, and the
| assumption OP went with, does verge more closely towards
| something I would call stupid, but these are your words.
| I'm saying it was an obvious circumstance in which to ignore
| the crowd, specifically because of internet crowd trends. And
| unsurprisingly to me, in hindsight, that turned out to be
| correct. The internet mirrors the desired predisposition
| because people troll, a lot.
| emerged wrote:
| Well to be fair, it's incredible how far head-to-leaf technology
| has come.
| omk wrote:
| A few things seem to be at play here.
|
| 1. iPhone uses face tracking to adjust focus on the subject's
| face.
|
| 2. The same face detection can end up detecting faces in
| arbitrary objects; a very common occurrence. Here, the leaf.
|
| 3. iPhone uses multiple lenses to compose a single photo. One
| lens focused on the person's face, while another, with better
| focus, estimated the face to be within the leaves.
|
| 4. The composite photo now has a picture of a leaf replacing
| the person's face (a toy sketch of this failure mode follows).
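|
| Purely hypothetical compositor, NOT Apple's pipeline, just to
| make steps 2-4 concrete (OpenCV's stock face detector; frames
| are BGR numpy arrays):
|
|     import cv2
|
|     cascade = cv2.CascadeClassifier(
|         cv2.data.haarcascades
|         + "haarcascade_frontalface_default.xml")
|
|     def composite(frames):
|         # Paste each detected "face" region from later frames
|         # over the base frame. A leaf-shaped false positive
|         # ends up pasted over the real face.
|         base = frames[0].copy()
|         for frame in frames[1:]:
|             gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
|             for (x, y, w, h) in cascade.detectMultiScale(gray):
|                 base[y:y+h, x:x+w] = frame[y:y+h, x:x+w]
|         return base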
| 01acheru wrote:
| Heads turn into leaves, windows turn into Ryan Gosling, the power
| of image post processing using ML.
| chrstphrknwtn wrote:
| At some point the images from phones are no longer photographs...
| but some kind of photo-illustration.
|
| Taking landscape shots with new iPhones creates some very intense
| over-saturation of skies and some general HDR-like balancing
| (even with HDR setting turned off).
|
| I understand that phone makers want to give people devices that
| "make the best photos" but it does sometimes feel like the image
| processing is going too far, and producing a representation of
| reality that is weirdly unrealistic.
| Jenda_ wrote:
| So... when are we going to see made up numbers in photographed
| documents again?
| https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres...
|
| Superimposing other people's faces onto a photo from a crime
| scene?
|
| Or, as part of my work, I photograph electric switchboards for
| documentation and diagnostic purposes. Hopefully I won't get any
| hallucinated connections, right?
| syntaxing wrote:
| Wouldn't looking at the RAW give insight on what's happening
| here?
| chrisseaton wrote:
| The iPhone Camera app doesn't support RAW except on some
| specific Pro models, and then you have to turn it on - you
| can't get RAW retroactively.
| xchaotic wrote:
| This is how it begins - SKYNET
| rbrbr wrote:
| sathishmanohar wrote:
| She is a ghost! Duh.
| ineedasername wrote:
| Maybe iPhones are now making aesthetic decisions? _" No, that
| person's face... Well, let's just cover it with a leaf."_
|
| It puts the censorship of Renaissance paintings & statues with
| fig leaves over the naughty bits in a new perspective.
| dathinab wrote:
| Yes, they have been making aesthetic decisions for a while.
|
| What started out as "simple" image stabilization, noise
| filtering etc. has long become a pipeline of "apply AI magic
| to the image to make it look how people _think_ it should
| look" (instead of how it actually looks).
|
| Like making the sky much bluer than it is.
|
| Or edges much sharper than anything such a camera could see
| (or sometimes, especially in combination with digital zoom,
| anything a human with sharp healthy eyes could see).
|
| And in the case of image stabilization, one thing they tend
| to do is take multiple pictures in a row and interpolate:
| some pictures with leaves "beside" the head and some with
| them behind the head. And then, "magic", the head becomes
| the leaf.
| yuvalkarmi wrote:
| Plot twist: this person actually has a leaf for a head
| GoToRO wrote:
| Impossible to take a photo of stars on an iPhone. Whatever
| software they use, it will create random stars in the photo.
| Instead of 1 star you get ten.
| anonnyj wrote:
| My wife's phone produced some odd results. We went out shooting
| Christmas decorations around town at night... The next day when
| we were looking at them they appeared very much like daytime
| photos. WTH
| mr_toad wrote:
| The phone will take a lot of shots at different exposure levels
| and combine them. It lets you take shots at night without
| having lights overexposed and the background underexposed.
| bereasonable wrote:
| He posted an update. It was a leaf on a tree in front of the
| woman and it obscured her face.
|
| https://twitter.com/mitchcohen/status/1476951534160257026
| RicoElectrico wrote:
| Block matching is a somewhat popular noise removal technique
| predating neural networks.
|
| https://en.wikipedia.org/wiki/Block-matching_and_3D_filterin...
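|
| A heavily reduced sketch of the matching-and-averaging core
| (nothing like production BM3D, which filters the matched stack
| in a 3D transform domain; repetitive texture like foliage is
| exactly where a matcher can pick the wrong block):
|
|     import numpy as np
|
|     def best_matches(img, y, x, patch, search, top_k):
|         # Patches in the search window, ranked by similarity
|         # to the reference patch at (y, x).
|         H, W = img.shape
|         ref = img[y:y+patch, x:x+patch]
|         cands = []
|         for dy in range(-search, search + 1, patch):
|             for dx in range(-search, search + 1, patch):
|                 yy, xx = y + dy, x + dx
|                 ok = (0 <= yy <= H - patch
|                       and 0 <= xx <= W - patch)
|                 if ok:
|                     c = img[yy:yy+patch, xx:xx+patch]
|                     cands.append((np.sum((c - ref) ** 2), c))
|         cands.sort(key=lambda t: t[0])
|         return [c for _, c in cands[:top_k]]
|
|     def block_match_denoise(img, patch=8, search=16, top_k=8):
|         # Replace each patch with the average of its most
|         # similar neighbours.
|         img = img.astype(np.float64)
|         out = img.copy()
|         H, W = img.shape
|         for y in range(0, H - patch + 1, patch):
|             for x in range(0, W - patch + 1, patch):
|                 stack = best_matches(img, y, x, patch,
|                                      search, top_k)
|                 out[y:y+patch, x:x+patch] = np.mean(stack,
|                                                     axis=0)
|         return out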
| danShumway wrote:
| Before we criticize, before jumping to conclusions -- I just want
| to point out, it is a _great_ leaf. Are we certain that the
| camera's decision was wrong?
|
| I mean, maybe we should consider the camera's judgement about
| what the best picture is. I'm not a photographer, if my camera
| tells me that my friend's head is best represented as a
| combination of sticks, leaves, and other natural objects, who am
| I to argue with that? I haven't been to photography college.
|
| This is a darn good picture of a leaf person.
| lkois wrote:
| I also don't quite understand the leap to controversy, which
| seems to rest on the assumption that this person does not have
| a pile of leaves for a head.
| kingcharles wrote:
| Right. To me the photo was set up for the retweets and likes.
| They got their leaf-headed friend to stand in front of some
| leaves and took a photo. Probably didn't even use an iPhone.
| This might have been some Samsung psy-op to discredit the
| iPhone neural processor.
| yoav wrote:
| I think it's absurd to assume this guy has friends or that
| there are people with leaves for heads. These are clearly a
| couple of jackets stuffed with leaves. You don't see the
| hands or the other friend except a sleeve. This is the same
| ploy I did many years ago trying to convince the internet I
| had friends.
| Centmo wrote:
| Clearly, the ML algorithm had been given the goal of optimizing
| the happiness of the user. It had ascertained that the user was
| quite active on social media and had tied a significant amount
| of their self-worth to the number of likes received from posts
| and photos. It had correctly calculated that the user would
| extract more enjoyment from the photo by way of likes if their
| friend's face were replaced with an arrangement of foliage. The
| user clearly did not like that friend very much anyway due to
| the lack of engagement with their posts. Truly impressive
| technology.
| implying wrote:
| This reminds me of Huawei camera app detecting pictures of the
| moon and superimposing a clear stock photo into your picture:
| https://www.androidauthority.com/huawei-p30-pro-moon-mode-co...
| Traubenfuchs wrote:
| That's hilarious. The way AI is developing, we are being turned
| into children full of magical thinking like "I made a great
| moonshot with my amazing device!" when it's actually 99% AI
| handholding.
| therealdrag0 wrote:
| And I'm going to take thousands of pictures and never look at
| them.
| feupan wrote:
| Don't worry I can now sync them to music and have _other_
| humanoids look at them instead while bored out of their
| minds.
| Cerium wrote:
| "Cameras" making changes to the image like this make the
| discussion about the image processing pipeline during the
| Rittenhouse trial seem a little less bizarre.
| ___q wrote:
| Turns out the Camera app did NOT replace this person's head
| with a leaf
| https://twitter.com/mitchcohen/status/1476951534160257026
|
| So "the discussion about the image processing pipeline during
| the Rittenhouse trial" is just as bizarre as it was before.
| dagmx wrote:
| Any technically savvy person should be able to differentiate
| different types of upscaling algorithms.
|
| The judge was arguing about enlarging an already recorded
| video, not about the merits of the original recording. This
| post is about image processing during the capture process.
|
| Conflating them is disingenuous, unless we're taking very large
| leaps of logic
| xdennis wrote:
| I don't think they're that different. In both cases the frame
| is already taken and the editing takes place milliseconds
| after or months after.
|
| Apple could make iPhones work like DSLRs and save both the
| edited image and the RAW sensor capture.
| dagmx wrote:
| No they're radically different in effect.
|
| One is the playback of data, the other is the capture of
| real world signals into data. They may use technologies in
| the same domains, but the implementation varies
| dramatically, as do the possibilities.
|
| If you have the recorded data, you can send it to any
| trusted playback device/software to get back a trusted
| scaling. You can workaround/bypass any distrust in a given
| players algorithms, and it's very easily discoverable
| whether something is applying processing or not. There's
| still the risk of intentionally faked videos, but the
| discussion is around real time processing introducing
| artifacts.
|
| With image capture though, there's no such thing as
| "truth". Even RAW data isn't truth. It's just less
| processing, but you can't escape it altogether. Even
| professional full frame cameras will do significant signal
| processing between the photosites and the recorded RAW
| image. The same goes for film.
|
| The only thing a court can do is put strong guidelines in
| place for proving the honesty of the content. You can't
| disallow processed imagery, because all images are processed
| the second they're recorded.
| max48 wrote:
| >Any technically savvy person should be able to differentiate
| different types of upscaling algorithms.
|
| Even the company who makes the software couldn't explain what
| was done to the picture when their "expert" was asked during
| the trial. There is a wide range of different methods that
| can be used to get more details out of blurry pictures,
| including ML/AI-based algorithms that are indirectly getting
| extra details from other pictures.
| dagmx wrote:
| The device in question was an iPad, the company that made
| the software (Apple) was not involved, the "expert" was a
| third party explaining the standard enlargement methods.
|
| https://www.theverge.com/2021/11/12/22778801/kyle-
| rittenhous...
|
| If the judge mistrusted the enlargement method, he should
| have ordered them to display it on another device or
| software.
|
| Real-time video upscaling is very standard filtering that's
| not introducing extra hallucinated details. At most, some
| TVs use ML to tune their sharpening and color rendition,
| but it can always be disabled. The iPad has never been
| shown or proven to use those for video playback, and even
| if it did, the courts should have a standard video player
| to present details with standard filtering.
|
| The judge's non-technical stance on things isn't borne out
| of reality, and again, any capture-time post-processing
| should be viewed completely independently from playback-
| time processing.
| Philip-J-Fry wrote:
| It wasn't really bizarre in the grand scheme of things, but it
| was a bit of a reach in the context it was questioned in.
| bendbro wrote:
| It was not. Popular coverage cited the court as debating
| "iphone pinch and zoom," but in reality, discussion mostly
| focused on the actual forensic application that was used, and
| the effect the algorithm that application produced on the
| output photo. The judge displayed a good understanding of how
| an algorithm takes as input original pixels and produces new
| pixels from that. https://www.theverge.com/platform/amp/2021/
| 11/12/22778801/ky...
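|
| For concreteness, a sketch of one standard (non-ML)
| enlargement method -- the "new pixels from original pixels"
| being debated (not a claim about which algorithm the iPad
| actually used):
|
|     import numpy as np
|
|     def bilinear_upscale(img, factor=2):
|         # Every new pixel is a weighted average of the four
|         # nearest original pixels: "new" pixels, but derived
|         # deterministically, with no hallucinated detail.
|         H, W = img.shape
|         ys = np.linspace(0, H - 1, H * factor)
|         xs = np.linspace(0, W - 1, W * factor)
|         y0 = np.floor(ys).astype(int)
|         x0 = np.floor(xs).astype(int)
|         y1 = np.minimum(y0 + 1, H - 1)
|         x1 = np.minimum(x0 + 1, W - 1)
|         wy = (ys - y0)[:, None]
|         wx = (xs - x0)[None, :]
|         a = img[np.ix_(y0, x0)]
|         b = img[np.ix_(y0, x1)]
|         c = img[np.ix_(y1, x0)]
|         d = img[np.ix_(y1, x1)]
|         return (a * (1 - wy) * (1 - wx) + b * (1 - wy) * wx
|                 + c * wy * (1 - wx) + d * wy * wx)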
| tandymodel100 wrote:
| No, not really
| kahrl wrote:
| Yes, yes really. When real resolution is being substituted
| with the best guess of a completely closed source image
| processor, the court should be made aware of it.
| tandymodel100 wrote:
| This sounds like a weird rationalization for an absurd case
| of technical ignorance. Like when people defended nuking
| hurricanes or using UV lights as a Covid therapy.
| CamperBob2 wrote:
| Fact is, you have no idea what kind of unsolicited
| postprocessing the camera in the Rittenhouse trial might
| or might not have performed, and neither did the court.
|
| It's a huge potential problem, and getting worse by the
| day.
| [deleted]
| iszomer wrote:
| Agreed as we've been living in the era of deep fakes for
| a while. I shudder to think how computational photography
| can advance to such a degree as to blur the context for any
| unsuspecting user, whether accidental or _intentional_.
| dagmx wrote:
| Except, specifically to the Rittenhouse trial, it was
| about playback processing NOT capture time processing.
|
| Capture time processing is also verifiable with regards
| to what stack a particular device uses with the use of
| metadata, and as such has little in the way of extra
| problems over other potential doctored evidence which
| have been possible for years without smart phone devices.
|
| Do we question what color film would portray an image for
| example? Is a particular lensing affecting the truth of
| an image? A specific crop? There's no such thing as a
| perfectly true photo or video.
| ALittleLight wrote:
| It's perfectly reasonable to ask what image processing is
| done to get an enhanced image.
| tagoregrtst wrote:
| Both sides have a right to question evidence.
|
| Defending the suitability of evidence is the duty of the
| counsel who introduced it.
|
| The prosecution couldn't explain how the iPhone zooms and
| it was on them to do so.
|
| Btw, the prosecution later on were caught tampering with
| evidence.
| Dah00n wrote:
| If a camera can replace a head with a leaf nothing taken
| with that camera can be trusted, especially in court,
| ever. Any changes to photos or videos should be avoided.
| This is the norm in court cases. You should read a proper
| article about it instead of the click baity ones.
| tandymodel100 wrote:
| This aged poorly
| https://news.ycombinator.com/item?id=29750660
| kahrl wrote:
| No it didn't. The point still stands that AI enhanced
| images have a credibility and admissibility problem. This
| one example turning out to not have been altered in the
| way we thought it was by the enhancer doesn't invalidate
| the broader questions brought forth by the discussion.
| webkike wrote:
| Kinda looks like a water droplet or something
| alkonaut wrote:
| Those of us who have been shooting large digital cameras for the
| past decade, and are sometimes sad that our photos often come
| out unsharp in poor light compared to smartphones, can at least
| take some joy in this "no free lunch" demonstration.
|
| If this is due to stabilization and not some background blur face
| detection then it's probably _not_ something you can (or would
| want to) disable. Taking a telephoto shot with a tiny sensor in
| something other than great light (even a heavy overcast is often
| not enough) will require a _lot_ of software processing. I'm not
| sure exactly what happened here, but I'm pretty sure everyone
| asking for "unmodified raw photos" to be produced doesn't
| understand what they are asking for. Those "unmodified" photos
| would be unusable in most cases outside very bright conditions.
| johnisgood wrote:
| I am not sure what you are trying to say here. They definitely
| did not ask for their face to be replaced by leaves either. Is
| it either unmodified, or leaves? Isn't there a middle ground?
| KarlKemp wrote:
| This is the "smart HDR" option on an iPhone, described as "In
| difficult lighting, the best parts of multiple exposures are
| combined into a single image".
|
| So there is the middle ground of disabling this. Or,
| alternatively, just not caring about such an error once in a
| million shots.
|
| (as an aside, I'm pretty sure the structure of the leaves is
| responsible for this error, as it's an area with usually many
| strong edges in a somewhat repetitive pattern. That invites
| misalignments.)
| johnisgood wrote:
| Yeah. If I tap "Selfie" on my stock Android 11, I can see
| my face being smoothed out. I can tell that it is modified.
| If I have an issue with it, I can disable it. :)
| alkonaut wrote:
| They asked for their phone to make a best effort to stack
| many pictures into one, cleverly aligning objects as well as
| possible.
|
| And the camera failed. The alternatives would have been an
| extremely dark picture of a person without a leaf-face, or an
| extremely blurry picture of a person without a leaf-face.
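|
| A minimal sketch of that stack-and-align step (assuming
| scikit-image for the shift estimate; real pipelines align
| locally per tile, which is exactly where a leaf can get
| merged over a head):
|
|     import numpy as np
|     from skimage.registration import phase_cross_correlation
|
|     def align_and_merge(frames):
|         # Estimate each frame's global shift against the
|         # first, undo it, and average. A single global shift
|         # can't model a leaf swaying independently of a head;
|         # that needs local alignment, with its failure modes.
|         ref = frames[0].astype(np.float64)
|         acc = ref.copy()
|         for frame in frames[1:]:
|             f = frame.astype(np.float64)
|             shift, _, _ = phase_cross_correlation(ref, f)
|             acc += np.roll(f, shift.astype(int), axis=(0, 1))
|         return acc / len(frames)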
| johnisgood wrote:
| > The alternatives would have been an extremely dark
| picture of a person without a leaf-face, or an extremely
| blurry picture of a person without a leaf-face.
|
| If that truly is the case, then.. woah. It definitely
| failed.
| andrewflnr wrote:
| But it would be nice to be able to re-run that software
| processing later if you can see it has done something silly,
| right? That at least seems like a valid use for "unmodified"
| image data.
| Bilal_io wrote:
| I believe the Google Camera does this. Not sure if it
| preserves the original photo before any color corrections,
| but it allows me to go back and added, remove or change any
| blur.
| Gigachad wrote:
| No free lunch, but an incredibly good value lunch. The quality
| that comes out of phone cameras is remarkable. And the times it
| messes up are so rare that it becomes a talking point worthy of
| hundreds of comments.
| jhgb wrote:
| > No free lunch, but an incredibly good value lunch
|
| As in, "Incredibly-cheap-but-sometimes-there's-rat-poop-in-
| it" lunch?
| evanextreme wrote:
| no as in, "computational photography has been a staple of
| every modern smartphone sold globally for the past 4 years,
| and this is one of the only examples of problems ever
| happening for anyone" lunch
| Schroedingersat wrote:
| Problems of changing skin texture or text or body shape
| or background details abound, but are normally of a form
| where it's easier to dismiss or gaslight anyone pointing
| them out.
| jhgb wrote:
| > one of the only examples of problems ever happening for
| anyone
|
| Assuming you don't have a sampling error here, of course.
| GenerocUsername wrote:
| You guys are conflating 'adjusting light levels and sharpness'
| with 'replacing whole objects in one area of the scene with
| other objects from the scene'.
|
| I think this particular bug demonstrates a massive gap between
| what people think photo processing is, and what it has very
| recently become.
| judge2020 wrote:
| Simply adjusting light levels and sharpness wouldn't
| produce this good/clear of a photo in the conditions
| presented - AI/ML image post-processing is a hard
| requirement for sensors like these.
| spoonjim wrote:
| But it's also just weird. That's not my daughter's skin
| in the photo, it's 1000 other people's skin blended
| together and textured in.
| vlunkr wrote:
| Ceci n'est pas une pipe
| foota wrote:
| That's not necessarily true. I don't know the specifics
| of how it's implemented, but it could just be used to
| select pixels from different frames in the shot?
| Gigachad wrote:
| It never has been accurate. Even on dumb traditional
| cameras, colors are more of an interpretation than a
| reality.
| johnmaguire wrote:
| I don't know about this - RAW files are recordings of
| sensor values. Those sensor values are accurate records of
| what light the sensor measured.
|
| Then the sensor values are converted to a JPEG. So it's
| still an accurate rendition of the light - even though
| yes, it is a _rendition_ of the light.
|
| But to completely replace some sensor values with some
| computer-generated values is a different ballgame, IMO.
| It's more akin to Photoshop editing, as opposed to
| Lightroom.
| AuryGlenz wrote:
| I agree with your point, but just to be clear for people:
|
| When you bring that RAW photo into something like
| Lightroom or Capture One they're automatically applying a
| base curve to the photo before you do anything.
|
| In Capture One you can set that to "flat" which I believe
| is fairly unprocessed, and it takes a lot of work to get
| it to a usable state from there. They also have other
| options, and they recently changed their default setting
| and it's pretty incredible how different it is from their
| old default.
| tsimionescu wrote:
| The least that modern phone cameras do is to blend
| multiple RAW files into a single picture, to improve
| various metrics. That brings the risk of producing
| results like the one seen here.
| georgyo wrote:
| This is really splitting hairs, but even our eyes are an
| interpretation of reality.
|
| The camera sensor does not record the wavelengths of
| light it is capturing, only RGB values. We can only
| reproduce the picture in a way that is convincing to our
| eyes, not the light that was originally captured.
| johnmaguire wrote:
| Yes, but still, as another poster put it, it's the
| difference of changing the lighting versus changing the
| texture pack...
| JKCalhoun wrote:
| There is definitely some magic sauce, though, when de-
| Bayering [1] the RAW data and then playing games with
| color spaces and color profiles to end up with that final
| JPEG.
|
| I agree with your point though. I dislike "computational
| photography".
|
| [1] guess it is called _demosaicing_
|
| https://en.wikipedia.org/wiki/Demosaicing
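|
| A minimal bilinear de-Bayer, as a sketch (RGGB layout
| assumed; real pipelines layer edge-aware logic, white
| balance and tone curves on top of this):
|
|     import numpy as np
|     from scipy.ndimage import convolve
|
|     def debayer_rggb(raw):
|         # Keep each channel's sampled pixels and fill the
|         # gaps with a neighbour average.
|         H, W = raw.shape
|         r, g, b = (np.zeros((H, W)) for _ in range(3))
|         r[0::2, 0::2] = raw[0::2, 0::2]
|         g[0::2, 1::2] = raw[0::2, 1::2]
|         g[1::2, 0::2] = raw[1::2, 0::2]
|         b[1::2, 1::2] = raw[1::2, 1::2]
|         k_rb = np.array([[.25, .5, .25],
|                          [.50, 1., .50],
|                          [.25, .5, .25]])
|         k_g = np.array([[0., .25, 0.],
|                         [.25, 1., .25],
|                         [0., .25, 0.]])
|         return np.dstack([convolve(r, k_rb),
|                           convolve(g, k_g),
|                           convolve(b, k_rb)])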
| ClumsyPilot wrote:
| There is a difference between getting the colors
| subjectively wrong and replacing the entire texture of
| the material
| stinos wrote:
| They surely are. But not all interpretations are alike. I
| was recenty looking at (scans of) analog pictures I took
| years ago using an entry level analog camera and apart
| from white balance being off, the general skin tone +
| texture and shadows at least looks very realistic and not
| like some cardboard version of skin.
| calciphus wrote:
| I agree - it is getting harder and harder to fully turn
| off skin smoothing on cellphones and webcam software. You
| can turn it down, but rarely _off_.
| [deleted]
| laurent92 wrote:
| It's _what you desire your skin to be_.
| spoonjim wrote:
| That might be good for myself, or if I'm honest even my
| wife, but my daughter looks perfect to me and I only want
| to see her in reality.
| kevin_thibedeau wrote:
| No imaging system reproduces reality. You're always
| getting a compromised result.
| spoonjim wrote:
| Yes but the knowledge of how it was created affects how I
| see it. If it's just denoising etc it feels different
| than if I know it's painted in some other data.
| johnmaguire wrote:
| Sure, but altering the hue/saturation/lightness of colors
| is a little different than replacing a bunch of pixels to
| achieve a "prettier" result.
| wholinator2 wrote:
| Yeah, even your eyes don't reproduce reality perfectly
| but at that point it's just semantics. He means he wants
| to see his daughter the same way through the camera that
| he sees her through his eyes, in real life, otherwise
| known to him as "reality".
|
| I don't think it's unreasonable to allow context of the
| statement to allow us to disregard "reality" as it
| pertains to quantum wave functions, in favor of something
| more human. There's a large difference between something
| whose goal is to capture what the eye sees and something
| whose goal isn't. It feels like Apple thinks it knows
| what's better for us than we do, which I admit it is
| perfectly capable of in certain scenarios. But when
| Apple's thoughts do not align with, or go directly against,
| our wishes, it's uncomfortable; it feels like your
| "reality" is being ripped from your hands in favor of
| what some giant corporation thinks your reality should
| be, for any large number of opaque or intentionally
| obscured reasons.
| kingcharles wrote:
| The amount of post-processing your brain does to make you
| believe you see far more than you actually do in far
| higher resolution than you do, whilst combining two 2D
| views into a pseudo-3D view, is incredible.
| kadoban wrote:
| There's straight-up blank spots in our raw vision, we
| don't see ~any color at the edges, etc., and that's just
| the _start_ of what our eyes/brains elide out. Really
| crazy stuff.
| Dylan16807 wrote:
| That doesn't mean I want a camera to do it. With a bigger
| sensor I can get a nice crisp image, and that should be
| the target for what a phone makes.
| shawnz wrote:
| As far as I understand the goal of using ML augmentation
| in camera phones is to capture what the eye sees. It's to
| compensate for the limitations of the hardware which on
| its own is not able to produce a true-to-eye result. You
| seem to be implying that the goal is to improve the photo
| to be better than reality but I don't think that's the
| case.
| kelnos wrote:
| Right, but it can only _guess_ at what the eye sees when
| hardware limitations don't allow it to capture enough
| information. Maybe most of the time it guesses right, but
| it's still a guess, and it appears sometimes it guesses
| wrong. _Really_ wrong.
| shawnz wrote:
| If it's combining information from multiple shots or
| multiple camera sensors, isn't that more like sensor
| fusion than "guessing"? I think calling it guessing is an
| uncharitable interpretation of what's happening.
| ClumsyPilot wrote:
| > You seem to be implying that the goal is to improve the
| photo to be better than reality but I don't think that's
| the case.
|
| I think it is. After all, that's what all the Instagram
| filters are for, and that's where most photos end up.
| mynameisvlad wrote:
| That's not even remotely close to AI/ML processing, and
| also is something you have been able to accomplish--
| manually or with presets-- in Lightroom for ages.
| ClumsyPilot wrote:
| I mean to illustrate the attitude of a typical user, not
| technology
| fao_ wrote:
| Yeesh. I can't be the only person for whom this sentence
| makes my skin crawl.
| numpad0 wrote:
| And I can't be the only person who feels modern life is
| like perpetually flying through Miyazaki Hayao tentacles
| and making out the best of it.
| fao_ wrote:
| Do you mean Junji Ito?
| be_nice wrote:
| Good thing that won't show up in photos!
| creato wrote:
| The "hard requirement" is just multi-frame noise
| reduction, which has been around for a lot longer than
| the AI/ML hype wave, and has always had a risk of
| producing similar artifacts to this one.
| Retric wrote:
| Multi-frame noise reduction only gets you so far.
| Rotating objects for example present huge issues and
| result in their own artifacts. In the end there isn't a
| free lunch, any system that improves resulting images is
| making trade offs.
| jjeaff wrote:
| Ya, I don't think anyone has been using ml/ai in camera
| phones up until the last 5 years maybe.
|
| But it does seem like they have quite a few high tech
| algorithms at play. More than just multi-frame noise
| reduction.
| aaron695 wrote:
| [deleted]
| dheera wrote:
| It probably all goes into a black box ML model nowadays, so
| the model will conflate it if we don't.
| max48 wrote:
| >The quality that comes out of phone cameras is remarkable
|
| It's more than that, it's completely mind blowing when you
| compare it to a DSLR.
|
| In "good" lightning condition, your iphone will give you a
| picture 75% as good as what you could get with a 4k$
| fullframe DSLR kit that's 10" long and weight a few pounds.
|
| the problem is when light is not perfect, that's when the
| bigger lens/sensor are worth it even for a beginner that
| doesn't know much about photography. And if you need to edit
| your pictures, you don't get those extra stops of exposure up
| or down because the iphone already needed all the dynamic
| range of the sensor to create the picture you got.
|
| It would be very interesting to see a collab between Apple and
| a DSLR company to get the best of both worlds. A large FF
| sensor and lens ecosystem like Canon's combined with whatever
| dark magic Apple is doing on their tiny sensor would have
| massive potential.
| tsimionescu wrote:
| > the problem is when light is not perfect, that's when the
| bigger lens/sensor are worth it even for a beginner that
| doesn't know much about photography.
|
| I think the opposite is true, from my (amateur) experience.
| It's much harder to get a good photo in poor light with a
| dedicated camera if you don't know what you're doing than
| it is to get a good photo with a smartphone.
|
| In good light, the better sensors shine through, but in
| poor light (e.g. an overcast day, not talking about some
| limit-of-darkness condition) the superior processing and
| auto-adjustment of a top-end phone will make for much
| better photos. Again, talking exclusively about amateur
| photography, not what a master can do.
| saiya-jin wrote:
| Yes and no - even latest iphone 13 pro / nexus 6 cameras
| will produce shots that are blurry in shadows due to
| aggressive noise reduction or let some fugly color noise
| pass through the algorithms. You just need to open any
| night photo on bigger computer screen instead of just
| phone.
|
| You have much much better starting point with a full
| frame.
|
| Of course if you compare a clueless FF user with a clueless
| phone user, the phone can win, but that's an unfair comparison.
| You don't often invest 5k into photo equipment and then
| stay oblivious to what options it gives you. And even if
| you don't actively try to improve yourself, just using
| the camera will get you there (somewhere) eventually. It's
| not rocket science, just keep doing it. That's not a
| "master" level, more like experienced beginner.
|
| And even in the case of ignorant users, all cameras these
| days have auto setting which is actually pretty good and
| you can take final jpegs from it, ignoring the power of
| raw edit completely. Counter-intuitively, holding a
| bigger camera system steady for a good shot is easier
| compared to a lightweight, awkwardly-shaped phone.
|
| That all being said, I will invest in some top-end
| phone next year, mainly due to its camera power these days
| and the convenience of always having it with you, and of
| sharing kids' photos with family instantly. No more always
| lugging around 2.5 kg of Nikon D750 with a 24-120 lens and
| a good bag. I will _not_ make technically better pictures
| with it, but in my case other aspects outweigh this.
| eezurr wrote:
| Snapping good low light photos really comes down to how
| fast your lens is (smaller f-stop number). A lot of kit
| lenses (the ones sold with cameras, even nice cameras) are
| surprisingly low quality, and don't really open up much.
| My Sony A7-3 came with a kit lens that opens up to
| F3.5.
|
| This summer I purchased the 24mm GM F1.4 and WOW does it
| take fantastic night photos. No tripod needed, it lets in
| enough light that I can use 1/30+ shutter speed.
| https://i.imgur.com/CcNEUsM.jpg (photo I took recently.
| 1/40s, 1.4F, 1250 ISO. No photoshop magic, just basic
| lightroom adjustments)
|
| Also, the iphone opens up to 1.4f, so you're probably not
| making a fair comparison to your camera (assuming your
| lens does open up as much)
|
| EDIT: Sony has a 1.2F 50mm lens I really want to get my
| hands on, but now I am an entrepreneur so my spending
| days are over for a while.
| emkoemko wrote:
| f1.4 on a tiny sensor does not equal f1.4 on a FF
| sensor... that's why they have to use "AI" to fake the
| bokeh. Yeah, an f1.2 might be nice, but you would have to
| be really skilled to get stuff in focus with that razor-
| thin DoF.
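|
| The equivalence math, as a rough sketch (the phone sensor
| size here is an assumed ballpark, not a measured iPhone
| spec):
|
|     import math
|
|     ff_diag_mm = 43.3     # "full frame" sensor diagonal
|     phone_diag_mm = 9.5   # ballpark for a big phone sensor
|     crop = ff_diag_mm / phone_diag_mm    # ~4.6x
|
|     f_number = 1.4
|     equiv_f = f_number * crop            # ~f/6.4 FF-equivalent
|     stops_less = 2 * math.log2(crop)     # ~4.4 stops less light
|     print(round(equiv_f, 1), round(stops_less, 1))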
| elif wrote:
| That depends entirely on what you consider "messing up"
|
| if it had put that leaf over some other leaves in the
| background that weren't captured well, would we have noticed?
| would it be a better photo?
|
| What if it was meant to be a beautiful leaf photo? If you
| want an accurate representation of reality, the moment, etc,
| the majority of cellphone photos would be considered "messed
| up" imo
| KarlKemp wrote:
| Indeed. I wish I had a list of everyone parroting the wisdom
| that "it's all about the size of the lens, a smartphone can
| never be as good as a DSLR" so as to ignore their future
| opinions. Camera phones now beat cameras worth thousands of
| dollars with better software and larger R&D budgets, at least
| in low light performance and color gamut.
|
| Closely related: "It's impossible to just 'enhance' an image
| to make a license plate readable. Lost information is lost".
| This was big due to some US TV show at the time, I guess.
| Here, the error was in the assumption that the information
| was lost: as it turns out, there's a wide space of
| possibilities for image quality to be too low for us to
| easily read some letters, but good enough to contain the
| clues allowing a smart algorithm to reconstruct them.
| toss1 wrote:
| Yes, this is getting better, but it reminds me of a story
| from a friend who was doing work on some classified
| satellite imaging systems (he was in hardware, iirc) and
| went to an internal seminar/briefing by some experts on
| how, from the front to the end of the system, to get the
| best images for the customers.
|
| He said they went through hours of details on how the
| various enhancement algorithms and systems worked, and at
| the end, the bottom line was basically 'take a better
| picture in the first place' - as in get the lens, lighting,
| and parameters right in the first place, and that'll be the
| biggest factor in how far the processing HW/SW/people can
| take the enhancements.
|
| Maybe the new software with huge R&D budgets is now better,
| but I'd suspect this advice is still not obsolete...
| sundvor wrote:
| I saw your post got a down vote, however for eg any given
| webcam, the best way to improve your video for your
| meetings is likely not to upgrade the camera but to
| invest in better lighting (eg a couple of key lights) and
| optimise your visual setup (framing).
|
| It is good advice indeed.
|
| Still, it's amazing how processing has evolved. I took a
| picture of my 3yo daughter having finally fallen asleep
| after a bout of night terrors, in the dark light (using a
| Hue Ambiance in her room; best things ever) - in the S21
| Ultra viewfinder I could see absolutely nothing, however
| after 3 seconds of a steady hand I was rewarded with a
| fantastic picture better lit than what my eyes had
| adjusted to.
|
| (If I had any complaints, it would be that I'd like it to
| be darker - however I can understand why on average most
| would like the processed result.)
| toss1 wrote:
| I definitely agree with the lighting bit, and I think
| that was indeed included in the category of "first, take
| a better picture". So yes, focus, exposure, no motion
| blur, etc...
| laurent92 wrote:
| > It's impossible to just 'enhance' an image to make a
| license plate readable.
|
| Well, a phone can reconstruct this license plate, and in
| fact _any_ license plate in lieu of this one! And invent a
| new one!
|
| Which is worrisome for justice. Presenting proof that was
| uploaded to servers, timestamped and geostamped, won't be
| enough in a court of law. Anyone can answer: "What if the
| iPhone recreated and inferred the presence of a knife using
| the neural engine? This face, is it real? The distance, is
| it repositioned?"
| ClumsyPilot wrote:
| > This face, is it real? The distance, is it
| repositionned?
|
| Deepfakes, robocalls with Caller ID and voice
| impersonation, automatic ML in photos: if this train
| keeps going, we won't know what's real at all. I am
| struggling to see how we will function.
| pishpash wrote:
| You will need a transcript of processing and intermediate
| results.
| ClumsyPilot wrote:
| But how do you know a particular transcript is not fake?
| nimbleal wrote:
| > we won't know what's real at all. I am struggling to
| see how we will function
|
| That's where we started, in a way. Perhaps our short time
| of feeling we had a grip on reality will just have been a
| passing phase, before returning to normality.
| ruined wrote:
| the difference is, before media, everyone understood that
| if they didn't experience something directly, they had to
| trust someone who spoke, and that speech was always
| understood to be an interpretation and possibly fiction.
|
| but now we have all these artificial experiences that
| might clock real to the senses and have a reputation of
| fidelity, yet have always been editable and are
| increasingly made of black box constructive
| interpretations that value who-knows-what over fidelity.
| and no part of the process can be interrogated for
| motivations or detail.
| hughrr wrote:
| That has always been the case for photographs from every
| device that has ever been capable of recording images.
| sodality2 wrote:
| Film camera?
| hughrr wrote:
| Yes especially film cameras.
| jeffreygoesto wrote:
| Nope. The image data _is_ actually lost. What happens is
| that it gets pimped with what on average was lost when a
| training process artificially degraded a training set. For
| the results to look(sic) natural, you need both the
| training set and the degradation to be close to what you're
| shooting. If that is not the case, hallucination happens.
| IgorPartola wrote:
| In theory what prevents me from using a huge lens and
| sensor on a DSLR or a mirrorless camera and then using the
| same advanced software but now running on a desktop
| computer or a server that is many times more powerful than
| a smartphone to do advanced post processing?
| 6gvONxR4sf7o wrote:
| The iphone's depth sensor is one factor. I don't know of
| any lidar integrated cameras that aren't phones, for
| example.
|
| Also, because phone cameras have gotten so good, the
| skill floor for what you're talking about is higher.
| There's more skill involved in using software like
| lightroom than you might think. I'm a hobbyist
| photographer, and sometimes my iphone just seems more
| skillful than I am using my nicer camera and Lightroom.
| alkonaut wrote:
| Nothing at all. It's honestly surprising that it's not a
| bigger thing.
|
| Guessing what stops you is that the Apple team that makes
| camera software has a budget that makes the Nikon pc app
| team look quite small.
| mynameisvlad wrote:
| Cameras don't have DoF sensors which I believe the phone
| uses in its processing to know if there's a subject in
| frame.
| emkoemko wrote:
| they have that dof sensor so they can emulate dof, we get
| real dof.
| KarlKemp wrote:
| It's important to note that recent iPhones contain the
| same M1 chip as last year's MacBooks. They are not
| limited by processing power.
|
| DSLR and mirrorless cameras have better optical
| properties. But they are held back by starting too late
| and being much worse when it comes to software. These
| cameras also _are_ limited by their CPUs, which are a
| decade behind, and by their batteries, which are much
| smaller but expected to last for thousands of exposures.
| perardi wrote:
| I had to check that, as these big chunky camera batteries
| "feel" like they should have a lot of mAH compared to a
| phone.
|
| And I am wrong. Depending on the phone.
|
| A Nikon battery: 2280 mAh.
|
| An iPhone 13: anywhere from 2406-4352 mAh.
|
| Yeah, that's a way smaller power budget than a Pro
| iPhone, especially as you have autofocus and image
| stabilization systems that have to move around a lot more
| mass.
| nucleardog wrote:
| Keep in mind that my Nikon really only consumes power
| when it's _doing_ something. I can leave my camera "on"
| for months at a time and come back to a fully charged
| battery.
|
| The screen is off except when I'm previewing a photo. The
| viewfinder is physical. The metering is only on for 10-20
| seconds after I wake the camera by touching the shutter
| release and is only displaying on a low power
| monochromatic LCD. Autofocus only happens while I'm
| holding the shutter release until it finds focus then
| locks.
|
| Meanwhile the iPhone battery is powering what is,
| effectively, an entire laptop with a 5+" screen and
| cellular, wifi and bluetooth radios.
|
| Articles I can find are quoting an iPhone taking a few
| hundred shots on a charge. Nikon's testing puts my camera
| able to take a few _thousand_ per charge. There's
| literally an order of magnitude difference from my
| Nikon's 1400mAh battery to an iPhone's 2500-4500mAh
| battery. Two to three times the capacity for 1/10 the
| pictures.
| AuryGlenz wrote:
| That comparison doesn't work for newer mirrorless
| cameras. I have a DSLR and mirrorless strapped to my body
| at each wedding and I go through more than twice the
| amount of batteries in the mirrorless.
|
| They also take a second or so to wake from standby which
| is super annoying for my job, so I have it set to 5
| minutes. That essentially means it never goes in to
| standby on a wedding day.
| Zak wrote:
| It is not accurate to compare the capacity in Amp-hours
| because the Nikon EN-EL15C battery has twice the voltage
| (two Li-ion cells in series) of a single-cell smartphone
| battery.
| numpad0 wrote:
| Napkin math: 7 V x 2280 mAh = 16 Wh, equivalent to
| 3.7 V x 4300 mAh (USB power banks are always measured and
| marked in 3.7 V and mAh as units, to produce the largest
| comparable numbers)
| Zak wrote:
| I've always thought it was odd powerbanks didn't
| advertise in mWh. That's a bigger number and comparable
| regardless of voltage.
| Dylan16807 wrote:
| If you want napkin math then just multiply 2280 by two.
| It'll be faster and more accurate than converting to watt
| hours.
| wholinator2 wrote:
| As well I'd like to add the thought that the smart phone
| could be running all kinds of battery draining things in
| the background. So actual battery life depends not only on
| mAh but on how efficiently the device uses its capacity.
| SAI_Peregrinus wrote:
| And the camera batteries are replaceable. Quickly. In the
| field. There are also "battery grips" that have extra
| battery slots in them and automatically switch between
| the batteries. The camera never has to sit on a charger,
| only a spare battery does.
| johnmaguire wrote:
| Alternatively: I'd point out that the camera app tends to
| be a highly battery-draining activity on a phone. Most
| phone users do not leave the camera app up 24/7.
|
| (And as a digital camera user - at least I can hot swap
| batteries. I miss my old cell phones that allowed for
| this.)
| emkoemko wrote:
| held back? Why on earth would anyone want the camera to
| do more than just give you a raw image file back? If you
| don't want to be creative then yea, I guess, go with a
| phone and let it make all the choices for you.
| judge2020 wrote:
| > recent iPhones contain the same M1 chip as last year's
| MacBooks
|
| I mean, sure, but the A15 Bionic is much more power
| constrained and effectively has fewer 'GPUs' and cores
| than the M1, likely because the SoC is also constrained
| by its size within the phone housing.
| user_7832 wrote:
| > It's important to note that recent iPhones contain the
| same M1 chip as last year's MacBooks. They are not
| limited by processing power.
|
| Correct me if I'm wrong but there's still a huge issue
| with silicon processing capabilities/power when it comes
| to image sensors (which is why camera sensors buffer
| images between shots). I unfortunately don't remember the
| context, but it said that it was (very far) from
| possible to actually get all the sensor data into the
| processor, and as a result you only capture a small fraction
| of the actual incident light/photons in practice.
| sciencesama wrote:
| iPhones come with the A15, not the M1
| hug wrote:
| This isn't true, really: there are massive amounts of
| data being pushed about in a modern camera, through
| custom image pipelines that will do many gigabytes per
| second of throughput, but none of those are really
| bottlenecks for the sensor data per se.
|
| Most of it ends up in a RAM-like write buffer, because SD
| cards, or even faster formats like CF-express, can't keep
| up with the write speed of tens of 50 megapixel shots
| coming through every second.
|
| There _are_ sensor readout speed limits, which is why you
| don't see cameras exceed 30 frames per second of 8k
| recording, but there's no reason why you couldn't read
| out the entire full-well capacity of the sensor each of
| those frames.
| floatboth wrote:
| But that CPU shouldn't really do anything other than
| shoveling raw sensor data to flash as fast as possible.
| All the processing can happen on your workstation (or an
| iPad or whatever).
| numpad0 wrote:
| You don't want to shovel 12MP 14-bit-effective multi-
| frame raw pixel data along with its metadata into the
| flash...
| Dylan16807 wrote:
| Sure I do. Or very minimally compressed.
|
| Hell, one of the major selling points of the most recent
| iphones is prores recording! Dump half a second of that
| into a file. If that means a half-full phone can only fit
| a thousand photos before it's emptied, then sure I'll
| take it.
| hughrr wrote:
| I've got nearly the same CPU in my iPad, MacBook and
| iPhone. I can do the processing on any device...
| wtallis wrote:
| You don't have the same heatsink on all three, and not
| necessarily the same number of cores.
| sundvor wrote:
| This.
|
| My Corsair H150i Pro AIO water cooler and decent airflow
| case (Define 7 W/Corsair ML140s) allows me to run my
| Ryzen 3900XT at 4.4ghz constantly, all cores, for any
| kind of real life workload.
|
| Good luck trying that with a laptop.
| Tepix wrote:
| The M1-Max at 30W seems to have the same Geekbench 5
| multicore benchmark score as the Ryzen 3900XT at 105W.
| Think about that. Also a better single core score. You
| won't be using all cores all the time.
| hughrr wrote:
| All the image processing is short bursts so this is
| mostly irrelevant.
| jimnotgym wrote:
| Isn't that what Lightroom is for?
| TedShiller wrote:
| I use my iPhone when I want convenience. I use my DSLR when
| I want high quality photos.
| TedShiller wrote:
| No free lunch is right. iPhone photos look incredible on
| iPhone screens. They look terrible on my high resolution
| desktop screen. Instagram kids don't know this, but that's
| ok, because we don't need them to, and neither do they.
| hansel_der wrote:
| > iPhone photos look incredible on iPhone screens. They
| look terrible on my high resolution desktop screen
|
| yea, with our first child we still used a dedicated camera
| (compact, no DSLR) and there is a very noticeable drop in
| image quality after that, because we got lazy and use our
| phones.
|
| then again "the best camera is the one you have with you"
| trompetenaccoun wrote:
| Increasingly these photo"graphs" have nothing to do with
| reality either. I may be part of a weird minority but when
| I take pictures I do it to document things, I don't want
| them to be all fake and wrong. Some Chinese manufacturers
| have taken this to a ridiculous extreme, they make normal
| friendly faces look outright scary.
|
| Pictures shouldn't be edited by default, the user should be
| given the option if they want to. And let's not even get
| started on the fact that we have all these face recognition
| algos and such in a device constantly connected to the
| internet, with people taking pictures of themselves and
| everyone around them. What could go wrong...
| 6gvONxR4sf7o wrote:
| > when I take pictures I do it to document things, I
| don't want them to be all fake and wrong
|
| > Pictures shouldn't be edited by default
|
| I don't think you can have both of these things. In
| general, if you want to closely reproduce what your eye
| sees, you're going to have to do some editing.
| aembleton wrote:
| With a lot of phones you can have it save the raw image
| file so that you can convert it to jpeg yourself.
| Zak wrote:
| > _Pictures shouldn 't be edited by default_
|
| This statement requires more precision. A camera sensor
| usually has more dynamic range than the display can
| represent. Lenses often introduce distortions. Sensors
| capture noise. The tint and color temperature of light
| sources vary greatly. Here's a set of seven images taken
| at steps along the path starting from as close as a JPEG
| can represent to the raw sensor data to a finished image
| that reasonably represents how my eyes saw the scene:
|
| https://imgur.com/a/4paCKaL
|
| When using a dedicated camera and generating a JPEG in
| the camera, a similar set of steps is applied
| automatically. There's no such thing as "no filter" in
| digital photography; even the "unprocessed RAW" is one
| program's opinion of how 12 bits per channel should be
| rendered at 8 bits per channel to display on your screen
| (as well as downsampled and compressed, in this case).
| There are often user-selectable profiles that each have a
| bit of a different look, much as different film stocks
| produce different looks (Fuji cameras actually call their
| profiles "film simulations" and name them after the
| company's film stocks).
|
| So I think what you really mean is that you want the
| camera to produce an image that appears on a screen as
| much like what you saw with your eyes as it can.
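|
| A minimal sketch of what one such "opinion" involves
| (plain gamma encoding assumed; real pipelines use full
| tone curves):
|
|     import numpy as np
|
|     def render_12bit_to_8bit(raw12, gamma=2.2):
|         # One opinionated rendering of 12-bit linear sensor
|         # values to 8-bit display values: normalize,
|         # gamma-encode, quantize. Swap the gamma for a tone
|         # curve (or a "film simulation") and you get a
|         # different, equally "valid" picture.
|         x = np.clip(raw12 / 4095.0, 0.0, 1.0)
|         return np.round(255 * x ** (1 / gamma)).astype(
|             np.uint8)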
| caslon wrote:
| You can set Apple's devices to shoot RAW, and it's usually
| pretty great, without the painting-like qualities of its
| default image processing.
| mr_toad wrote:
| RAW just means uncompressed, unlike JPEG or HEIF. You can
| still manipulate a RAW image.
| ryeights wrote:
| >RAW just means uncompressed
|
| This is not accurate.
|
| https://en.wikipedia.org/wiki/Raw_image_format
| kortex wrote:
| A) I'm not sure what nit you are picking, RAW formats are
| typically not compressed, and if they are (I haven't
| encountered any personally) they would be losslessly
| compressed. They certainly aren't quantized.
|
| Edit: ok here's some lossless and lossy compression of
| RAW. IMHO, "lossy compressed raw" is an oxymoron.
| https://photographylife.com/compressed-vs-uncompressed-
| vs-lo...
|
| B) the context is whether a raw photo can be
| "manipulated" in any way. The answer is a resounding yes.
| See: Darpa MediFor project. Search for work by Siwei Lyu
| (I'm on mobile and don't feel like digging through the
| links, there's tons of published work out there)
| sudosysgen wrote:
| You actually can't. The RAW an iPhone will give you is
| highly, highly processed.
| alkonaut wrote:
| How raw is "raw"? If you take a long exposure in a dark room
| with a recent iPhone you can see the long exposure shake
| being removed and the picture coming out a lot sharper than
| it should. Would a "raw" version of that photo be a
| traditional long exposure or would it be the clever stacked
| image but with less post sharpening etc? Or is the raw even a
| short video sequence? (that would actually make the most
| sense)
|
| I have an iPhone 11, but I'm not sure it has the raw option.
| wrboyce wrote:
| It wouldn't make any difference in this instance, all the
| computational stuff still happens.
|
| I just took a long exposure on my 12 Pro and purposely
| moved the camera around a bit and my phone produced a sharp
| RAW image.
| nullifidian wrote:
| That's the definition of a non-RAW image.
| Zak wrote:
| Recent iPhones (and many Android phones) have optical image
| stabilization; an element in the lens moves to compensate
| if the phone shakes during the exposure. Very new iPhones
| (and some Android devices) also have sensor-shift image
| stabilization, which moves the image sensor.
|
| These features are also available on many dedicated cameras
| and interchangeable camera lenses.
| liversage wrote:
| I don't have an iPhone but I've always been shooting RAW on
| my phones and then processing the photos in Lightroom. As
| soon as I use one of the "lenses" on my phone (like "night
| shot" or "panorama" or even "wide angle"/"tele") I only get
| a JPG. The RAW file is only created when I shoot using the
| basic camera. This is the case for my current OnePlus but
| also previous phones (Google, Nokia).
| kalleboo wrote:
| If you use a third-party app like Camera+ to shoot RAW,
| then it is the raw, completely unprocessed sensor data as a
| DNG file, like any DSLR. Shooting RAW on the ultra-wide
| camera in a dark environment results in a completely
| unusable image.
|
| If you use the iOS camera on an iPhone 12 or newer and
| enable the "ProRAW" option (which is shown as just "RAW" in
| the camera), you get a processed image, but with the data
| used as a base for the processing intact for re-processing
| when you edit it.
| mc32 wrote:
| Apple possibly caught a break here.
|
| It could have been worse and people would probably have accused
| Apple of being bigoted or something else.
| KarlKemp wrote:
| Or, maybe, knowing that releasing software that turns out to
| deliver markedly lower quality results for people already
| weary from centuries of marginalization would open them up to
| such accusations, they didn't get lucky so much as they are
| enjoying the benefit of having listened to such criticism,
| testing their software across a set of data as diverse as
| their customers, instead of just that set of photos from
| D'Brickshaw Ferguson's and Abigail Cumberbatch's wedding.
| mc32 wrote:
| They still caught a break.
|
| Here people are like "lol Apple" and understand it's the
| fault of the software, with no malice intended. Whereas on
| the other hand people are apt to make it into something it
| isn't.
| d3ad1ysp0rk wrote:
| Yes I'm more likely to let Apple off for not training
| enough "people with a leafy backdrop" than "people with
| different colored skin" into their ML.
| mc32 wrote:
| Zoom showed lots of glitches for all kinds of people when
| using "backgrounds", but "Twitter people" latched on to
| certain outcomes, and Zoom is heavily developed overseas.
| avrionov wrote:
| There is a big difference between color correction or exposure
| compensation and adding non existing objects by ML algorithm.
| alkonaut wrote:
| Modern phones take long exposures and try to make sense of
| them by analyzing them.
|
| Basically, a one second really dark video can be summed up
| into one very blurry but bright enough photo. But if you
| carefully move each frame to compensate for camera movement
| when summing it up - you might also get a sharp picture. But
| then the subject moves, or a leaf moves, and the algorithm
| has to decide _which_ parts of the photo should be a priority
| for not being blurry. And that would be faces, usually. So
| the camera now needs to decide what is a face and what isn't.
| Just to take one single telephoto picture.
|
| Only doing "in-camera processing" like a 2015 digital camera,
| with some sharpening, color curves, etc., doesn't cut it
| anymore. If your phone does that, it'll be laughed at. Phones
| these days need to do clever long exposure stacking with
| image recognition and all.
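|
| A toy version of that stacking idea in Python, assuming
| OpenCV and numpy. Real pipelines track motion per region and
| per subject; this sketch only estimates one global shift per
| frame, which is exactly the simplification that breaks on a
| moving face or leaf:
|
|     import cv2
|     import numpy as np
|
|     def stack_burst(frames):
|         """Align a burst of dark frames to the first, then
|         average them to trade motion blur for less noise."""
|         def gray(f):
|             return cv2.cvtColor(
|                 f, cv2.COLOR_BGR2GRAY).astype(np.float32)
|         ref = gray(frames[0])
|         acc = [frames[0].astype(np.float32)]
|         for f in frames[1:]:
|             # One global translation (handshake only).
|             (dx, dy), _ = cv2.phaseCorrelate(ref, gray(f))
|             m = np.float32([[1, 0, -dx], [0, 1, -dy]])
|             acc.append(cv2.warpAffine(
|                 f.astype(np.float32), m,
|                 (f.shape[1], f.shape[0])))
|         # Averaging N aligned frames cuts noise by ~sqrt(N).
|         return np.clip(np.mean(acc, axis=0),
|                        0, 255).astype(np.uint8)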
| flemhans wrote:
| Does anything like that exist as a computer-attachable cam,
| e.g. for Linux? Or are we stuck with <2015 cameras unless
| they come as part of a smartphone?
| sudosysgen wrote:
| There is software to do that, yes. Many cameras are able
| to do smart multiframe stacking, even fairly old ones.
|
| It's just that photographers would rather not have to
| rely on this, because it has a lot of issues.
| usrusr wrote:
| Wouldn't they even move different parts of the scene
| differently for summing up? And the summing is for
| denoising, not for raising levels out of darkness, so it
| would be perfectly fine to sum only the parts where there
| is plenty of confidence about the movement, leaving others
| noisy (or, if more noisy than one would like to admit, de-
| noise spatially instead of temporally in those places,
| which brings us back to blurry. Add an "unsharp mask" step
| to compensate and you get that watercolors look we all
| know).
|
| Back to moving different parts differently: this seems very
| similar to the motion prediction parts of video encoding. I
| wonder if and how much the stacking algorithms make direct
| use of elements of the video encoding implementations those
| cameras also have?
| 1123581321 wrote:
| Short-lived joy, sadly. :( It turns out the leaf was actually
| in the foreground and correctly captured.
| https://news.ycombinator.com/item?id=29749950
| alkonaut wrote:
| I liked the original story arc better!
|
| "It's amazing how well these work if just one screwup makes
| headlines"
|
| Also true for zero screwups I suppose
| formerly_proven wrote:
| > Those of us who have been shooting large digital cameras for
| the past decade and are sometimes sad that our photos often
| come out unsharp in poor light compared to smartphones can at
| least take some joy in this "no free lunch" demonstration.
|
| Can't replace sensor area with anything other than more sensor
| area. A week ago I got the perfect demo of that when I filmed a
| happening at around 10pm with only some dim garden lights off
| the scene for light. Some other people had their iPhones, I had
| an f2 lens on my Z6 - while the videos from the iPhones looked
| like crap even on the phone screens, basically just blobs of
| dark noise, the Nikon produced a remarkably clean 4K video - at
| ISO 16000 or so - that made it look like the scene was lit by
| floodlights.
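|
| The gap is mostly geometry. A back-of-envelope comparison in
| Python, with approximate (assumed) sensor dimensions:
|
|     import math
|
|     full_frame = 36 * 24    # Z6 sensor area, mm^2 (864)
|     phone_main = 5.7 * 4.3  # ~1/2.5" phone sensor, mm^2 (~25)
|
|     ratio = full_frame / phone_main  # ~35x the light per shot
|     stops = math.log2(ratio)         # ~5 stops of advantage
|     print(f"{ratio:.0f}x area = {stops:.1f} stops")
|
| Five-ish stops is roughly the difference between ISO 16000
| looking clean and looking like blobs of dark noise; frame
| stacking claws some of that back for stills, but not for
| video.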
| alkonaut wrote:
| With video you can't "cheat". You can't gather up a second of
| frames and stack them into one photo, which is what phones do
| quite well for stills. You only have 1/24s (say) to get the
| shot before you need to get the next one.
|
| I bet an iPhone 12 would have made better stills than my
| aging Canon 60D, which I try to keep at ISO 1600 or lower.
| And that's pretty remarkable, to be honest.
| Zak wrote:
| > _You can't gather up a second of frames and stack to one
| photo._
|
| Oh, but you can. A full second would make for a very choppy
| video of course so you don't have as long in which to do it
| as you might for a still, but there are benefits to be
| gained and the Pixel 6 series does exactly that.
|
| https://9to5google.com/2021/08/02/google-pixel-6-video-
| hdr-t...
| usui wrote:
| I don't understand, why is this ruled out? Assuming that
| phones reach a point where this is computationally cheap,
| what would be so hard about having a sliding window for
| each frame in the video?
| alkonaut wrote:
| Nothing prevents that, but if there is a lot of movement
| (which would be the point of using video) then the great
| results aren't as easy to get. You can't take a still of
| someone dancing in a dark room with an iPhone; you can
| take a still of someone standing reasonably still in a
| dark room.
| usui wrote:
| Isn't this a similar problem to the blurry-face vs. leaf-
| face issue discussed? If yes, then one would hand it off
| to computation to figure out which parts matter and need
| doctoring, except scaled up to video.
| withinboredom wrote:
| The camera moving?
| usui wrote:
| Then increase the rate at which frames get captured, since
| the video is destined to be ~24-30 fps anyway?
| floatboth wrote:
| But the higher the framerate, the less time you have for
| exposure on each frame.
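|
| Concretely, the frame interval caps per-frame exposure time
| (a trivial sketch):
|
|     for fps in (24, 30, 60, 120):
|         print(f"{fps:>3} fps -> at most {1000 / fps:.1f} ms "
|               "of light per frame")
|
| 24 fps allows ~41.7 ms per frame; 120 fps only ~8.3 ms, about
| 2.3 stops less light before any stacking tricks.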
| ekianjo wrote:
| The iPhone 12 has no depth of field, so a larger sensor is
| much better anyway. There is no way around it. You can't
| hack your way to photography.
| creato wrote:
| A shallow depth of field isn't always what you want,
| though; sometimes it's good, sometimes it isn't.
| sudosysgen wrote:
| Oh yeah you definitely can. Multiframe mode on my old A7ii
| gets me all the way to 1/2 of a second handheld.
|
| You have a 60D, which is very old, and it would still do
| much better than an iPhone 12 with a fast lens on top.
| boogies wrote:
| > everyone asking for "unmodified raw photos" to be produced
| don't understand what they are asking for. Those "unmodified"
| photos would be unusable in most cases outside very bright
| conditions.
|
| I'm asking for unmodified raw photos _and_ modified ones. Or I
| would be if Megapixels didn't already give me both on my GNU
| /Linux smartphone. The processed versions of the photos I've
| taken look great IMO, but I don't see the harm in keeping the
| raws around; they'd probably be quite usable for producing
| even better edited versions if I were to import them into a
| full desktop image manipulation program and let it process them
| with desktop power and more than a couple seconds of time to do
| it. And I'd think Apple would be happy to sell bigger
| overpriced eMMCs and iCloud subscriptions to SD-slotless iPhone
| users with raw-image-filled phones, and/or pop up a
| notification to delete them all with a tap when space was low.
| jinto36 wrote:
| Here's a link to the image in question directly:
| https://pbs.twimg.com/media/FH0N9HNWQAE0n9R?format=jpg&name=...
| nutanc wrote:
| This is now popular enough that the owner should probably create
| an NFT out of this :)
| beervirus wrote:
| lloydjones wrote:
| I wonder whether there's a point where photographic evidence
| can't reasonably be used in court etc because of this?
|
| I'm aware that Photoshop has been around for an age.
| dmitshur wrote:
| I had a feeling I'd recently seen something relevant to this
| topic. Turns out it was a video by Marques Brownlee,
| "Smartphone Cameras vs Reality!". This instance would've been a
| great example there. The section at t=364 [1] is quite relevant.
|
| [1] https://www.youtube.com/watch?v=MZ8giCWDcyE&t=364
| dcdc123 wrote:
| MKBHD made a relevant video.
|
| https://youtube.com/watch?v=VTrCPywCG5A
| 725686 wrote:
| Reminded me of the infamous Xerox copier bug that changed numbers
| in copies!
|
| https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres...
| arnaudsm wrote:
| The fact that a bug this severe and overengineered took so long
| to be noticed is both hilarious and extremely scary for what it
| says about how much we trust modern software.
| rrauenza wrote:
| There was an update -- sounds like a leaf was really in the shot:
|
| > Big news! I sent @sdw the original image. He theorized a leaf
| from a foreground tree obscured the face. I didn't think anything
| was in view, and the closest tree is a Japanese Maple (smaller
| leaves). But he's right! Here's a video I just shot, showing the
| parallax. Wow!
| agar wrote:
| I wish this could be pinned to the top. Perhaps a follow-up
| post to HN would be worthwhile.
|
| So many people will leave this thread incorrectly believing
| this was an iPhone problem. It could easily become the data
| point people use to cast doubt on all pictures they don't like.
| darkwizard42 wrote:
| LOL amazing that the explanation was so simple!
| roryokane wrote:
| Link to that tweet:
| https://twitter.com/mitchcohen/status/1476951534160257026.
| Nitter proxy:
| https://nitter.net/mitchcohen/status/1476951534160257026
| tqi wrote:
| This thread has been an amazing example of confirmation bias.
| tomduncalf wrote:
| I had something like this happen the other week - someone took a
| photo of a group of us against a wooden wall in low light with
| the front camera of the iPhone 13 Pro Max, and while the photo
| looked great zoomed out, if you looked closely it had applied the
| vertical line and texture of the wooden slats to my face, so I
| had a sort of wooden face!
|
| My assumption is that this is overenthusiastic ML processing
| trying to enhance photos in challenging conditions... makes me
| think of e.g. https://petapixel.com/2020/08/17/gigapixel-ai-
| accidentally-a...
|
| Similarly, I've noticed that the iPhone 13 Pro tends to use the
| wide angle lens for 3x zoom photos in low light and then upscale.
| This makes some sense as it can capture more light and the photos
| look good zoomed out, but if you zoom in, you'll see things like
| text and small faces have been replaced with random blurry "ML-
| looking" smudges.
|
| If you want to avoid this, right now I think the only way to do
| so is to use a third party camera app. I use Halide and it's
| excellent, it will only use the lens you have selected even if it
| is in challenging lighting conditions, and it applies much less
| processing (and you can further disable more processing if you
| like, I've not had the need).
|
| The default camera is generally great for snapshots that you
| aren't going to zoom into etc. though. Would be nice if they
| could tweak it to be a bit less aggressive in its processing,
| perhaps adding a "pro" mode or similar.
| albert_e wrote:
| I worry that smartphone photos will become inadmissible in
| court as evidence ... and will also give all shady characters
| like politicians a cover to change narratives
| retube wrote:
| I think we're already past that point....
| rowanG077 wrote:
| I hope that smartphone photos become inadmissible, because
| photos taken this way don't reflect what really happened.
|
| I also hope smartphones start storing an unprocessed version
| of every photo that will be admissible in court.
| BeFlatXIII wrote:
| I hope for the advancement of deepfake technology for exactly
| the opposite reason. All that surveillance footage and vox
| populi photos? Now they're all suspect as tampered, even when
| they're genuine.
| syshum wrote:
| Good defense attorneys already will.
|
| I am more shocked that you are not concerned that image
| manipulation technology could be (and likely already has
| been) used to send to jail an innocent person who cannot
| afford an expert to challenge the image.
|
| Evidence presented in court should reflect exactly what
| happened: a true and accurate representation, not what some
| ML algorithm believes happened. This is especially true in a
| criminal case where a person's freedom is at stake, and often
| the captured criminality is not the focus of the image but
| rather something in the background, making the ML processing
| even less reliable and allowing for more interpretation by
| the viewer.
|
| I am a firm believer that it is better for 100 guilty to go
| free than for 1 innocent to be falsely convicted; sad that
| many do not share that worldview any more.
| vardump wrote:
| I wonder if this has any implications for photos from iPhones
| (or cellphones in general) in court.
|
| This might be brought up to overturn any photo evidence from
| phones.
| hughrr wrote:
| All photos regardless of the source are a corruption of reality
| both semantically and technically speaking. I think they should
| always be used on a case-by-case basis with corroborating
| evidence.
| iamacyborg wrote:
| I recall reading a lot of fantastic essays on the "reality"
| of photos when I studied photography at A Level. I'll have to
| see if I can dig stuff up, but I recall there being a lot of
| controversy surrounding posing of bodies in early war
| photography.
| roywiggins wrote:
| Probably the most famous one:
|
| https://en.wikipedia.org/wiki/The_Falling_Soldier
| iamacyborg wrote:
| That one definitely, but I was thinking of Alexander
| Gardner's "Home of a Rebel Sharpshooter".
| post_break wrote:
| This came up big time in the Rittenhouse trial, when the judge
| didn't want to allow the pinch-to-zoom method on video of the
| event. A lot of people laughed because he talked about AI, but
| he was on the right path: scaling does inject interpolated
| pixels into the image.
| dagmx wrote:
| No. He wasn't on the right path because he was talking about
| the wrong thing.
|
| It astounds me how people are so quickly conflating multiple
| different scenarios.
|
| Playback and capture are very different things. Would you
| accept people complaining about log4j vulnerabilities in a
| C++ code base, for example? No, because you'd know the
| nuanced differences inherent in a code base. Image processing
| is the same. It's not all one catch-all boogeyman.
| xdennis wrote:
| It's not clear what you're disputing.
|
| Criminal trials are generally biased towards the defense.
| If it's unclear if the evidence is doctored or not it
| shouldn't be introduced to potentially mislead the jury.
| dagmx wrote:
| I'm disputing people using this case of the iPhone
| CAPTURE process screwing up as vindication of the judge's
| distrust of the PLAYBACK process.
|
| A) video scaling is a well documented and well understood
| field. The judge's mistrust was wrong to begin with, and
| if he distrusted the scaling algorithm, then there's no
| reason to trust any capture process either. It's
| reductionist if they're not going to try and understand
| what's going on.
|
| B) if the judge mistrusted it, there should be a standard
| platform for playback that can be trusted by them;
| otherwise it's an easy out for dismissing any evidence.
|
| C) again, people are conflating capture and playback.
| They're different, and issues in one don't translate
| directly to the other.
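|
| For what it's worth, the playback side is easy to sanity
| check. A sketch (assuming OpenCV and a hypothetical frame
| grab) showing that pinch-to-zoom-style upscaling is
| deterministic interpolation, not an ML guess:
|
|     import cv2
|
|     img = cv2.imread("frame.png")  # hypothetical video still
|
|     # Bicubic upscaling computes each new pixel as a fixed,
|     # documented weighted average of its neighbors.
|     zoom_a = cv2.resize(img, None, fx=4, fy=4,
|                         interpolation=cv2.INTER_CUBIC)
|     zoom_b = cv2.resize(img, None, fx=4, fy=4,
|                         interpolation=cv2.INTER_CUBIC)
|
|     # Run it twice, get bit-identical output: no "AI" here.
|     assert (zoom_a == zoom_b).all()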
| fknorangesite wrote:
| ...um: https://appleinsider.com/articles/21/11/11/defense-in-
| kyle-r...
|
| Regardless of the validity of the objection in this specific
| case, yes it's already happening.
| Zak wrote:
| This seems like it might be a case of the phone combining images
| taken with two or more of its cameras.
| ronenlh wrote:
| Can this be an argument for plausible deniability in court?
| bargle0 wrote:
| How long until your phone won't let you take pictures of certain
| things?
| bendbro wrote:
| If anyone remembers the infamous "pickles" drama in the
| Rittenhouse case, this is another example of why it's so
| important to know exactly what technology is doing to a
| photograph.
|
| As an aside, the judge, despite being slimed by most media,
| actually demonstrated a quite good understanding of pixels.
| During the trial he showed he understood how an algorithm can
| take the original pixels as input and calculate estimated (or
| interpolated, whatever you want to call it generically)
| pixels from that input.
| hulitu wrote:
| londons_explore wrote:
| The cause of this is image-stacking.
|
| The phone takes ~20 frames, over 0.2 seconds. In that time, lots
| of people and things in the frame move.
|
| Optical flow is used to track all moving parts of the image, and
| then 'undo' any movement, aligning all parts of the image.
|
| Then the frames are combined: for each pixel, taking
| something like the median, or throwing out outliers and
| averaging the rest.
|
| When the optical flow fails to track an object in more than half
| the frames, the 'outliers' that are thrown out can in fact be the
| image content you wanted.
|
| It happens with leaves a lot because they can flutter fast from
| one frame to the next, so tracking each individual leaf is hard.
| A few bad tracking results on more than half the frames, and all
| you end up seeing is leaves where there should be a face.
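|
| A toy numpy version of that merge step makes the failure mode
| concrete (stack is frames x height x width, already aligned
| by optical flow; the rejection threshold is an arbitrary
| choice):
|
|     import numpy as np
|
|     def robust_merge(stack):
|         med = np.median(stack, axis=0)
|         # Reject samples far from the median, e.g. a leaf the
|         # flow tracker failed to stabilize in a few frames.
|         dev = np.abs(stack - med)
|         keep = dev <= 2 * dev.std(axis=0, keepdims=True) + 1e-6
|         # But if tracking failed in MORE than half the frames,
|         # the median is already the wrong content (leaf, not
|         # face) and the true pixels are the rejected outliers.
|         n = np.maximum(keep.sum(axis=0), 1)
|         return (stack * keep).sum(axis=0) / n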
| dylan604 wrote:
| Oflow "glitches" are some of the most fun things in image
| processing, and definitely my favorite render glitches. When it
| works, it's amazing. When it doesn't, it's also just as
| amazing, but in a different manner.
| [deleted]
| beervirus wrote:
| kgen wrote:
| This is so interesting. I wonder how you debug something like
| this (assuming there is no copy of the original pre-processed
| image as well)?
| busia wrote:
| A.I. is watching you
| pryce wrote:
| How do we exclude the possibility that we are just seeing a
| leaf falling (or blowing) between the subject and the
| photographer?
|
| Logically, an event like that would be followed by the iPhone
| not detecting a face, and therefore not applying its usual
| face-related black-box features.
|
| Supposing this is the case and is 'bad', what exactly do we
| expect 'better behaviour' to mean in this situation?
| function_seven wrote:
| The leaf that takes the place of the face is still attached to
| its branch. It's not falling down, it's hanging from the tree.
| And the person's head is missing in the areas you'd expect to
| still see it if it was a falling leaf.
| dathinab wrote:
| This was taken zoomed in on a smartphone known to apply all
| kinds of "make things look nice" AI magic.
|
| All of the following scenarios are possible:
|
| - Due to a combination of various post-processing steps,
| including image stabilization, the leaf replaced the head.
|
| - The leaf is on a branch which is above the head and was
| pushed down by the wind, hiding the face; due to the unusual
| angle of the branch, that is non-obvious. The artifacts
| around the "face" come from the image
| sharpening/stabilization magic not knowing what to do with
| those pixels.
|
| - The leaf might also be falling, and its connection to the
| branch may be an optical illusion; leaves like this
| sometimes have a bit of stem attached to them when they
| fall.
|
| I would say all are plausible, and given that it was
| supposedly shot at high zoom, I wouldn't be sure the human
| taking the photo can/does see the scene correctly, because
| our brain also does "magic" to make the things we see look
| "better" (it actually fills in details from memory/experience
| in some situations).
| netcan wrote:
| When I was 12 I learned to make autocorrect replace my 8 yr old
| brother's name with "poophead." It was a genius prank.
|
| If I worked at Apple today, I'd do the same thing with his
| 30-year-old head.
| nla wrote:
| Welcome to computational photography!
| anigbrowl wrote:
| Probably because she is Canadian
| micheljansen wrote:
| This gives me the same feeling as those ML-powered "enhanced
| zoom" features: where does the photograph end and the machine
| made-up fantasy start?
| nitrogen wrote:
| AI enhancement of imagery injecting made-up data was a minor
| plot point in the novel Congo (the one about diamonds and
| apes), if I recall correctly, so this has been a known risk for
| a long time, and ML educators and practitioners aren't doing a
| good enough job of managing and talking about that risk.
| gsliepen wrote:
| Obligatory Red Dwarf reference:
| https://www.youtube.com/watch?v=2aINa6tg3fo
| 14 wrote:
| Take my upvote. This has been posted to HN before but I love
| seeing it again. Red Dwarf was truly an amazing show way
| ahead of its time imo. For those who have never heard of it,
| it is a space based comedy I would recommend to all.
| chris_wot wrote:
| Definitely the best spoof of Blade Runner I've seen so far.
| sergiotapia wrote:
| https://www.youtube.com/watch?v=T5tIDad3xxs
|
| I love these guys so much
| dathinab wrote:
| "ML-powered" enhanced zoom can't be trusted.
|
| The moment you need ML magic to magnify, you are basically
| telling the ML "guess what is there based on what we can see
| and the training database you were trained on". Due to the
| nature of such algorithms, they have no form of common sense
| or "realistic general-purpose" world view, and they tend to
| be black boxes, so it's hard to check whether they learned
| something "stupid"/"ridiculous".
|
| So it's entirely possible that one somehow learned to place
| knives into hyper-magnified hands if they have a specific
| skin color and a turquoise sleeve, on a picture with a color
| scheme as if it's a cloudy day. Or similarly arbitrary
| things. Sure, it's super unlikely, but not impossible, and
| there is no good way to find such "bugs". Worse, multiple ML
| systems trained on the same data might share some mistakes,
| and there are only so many huge comprehensive datasets.
| andbberger wrote:
| OK sure but the distribution of natural images is highly
| redundant ie pixels are not independent and there is
| structure to exploit. It's not magic obviously but done
| properly it's a pretty reasonable thing
| MauranKilom wrote:
| > OK sure but the distribution of natural images is highly
| redundant ie pixels are not independent and there is
| structure to exploit.
|
| Sure, but that depends _very_ strongly on the number of
| natural images you trained on and the specificity of the
| situation you present to the algorithm. Maybe one such
| structure in the training set just happens to be people
| holding knives given "a specific skin color and a
| turquoise sleeve on a picture with a color scheme as if
| it's a cloudy day".
| andbberger wrote:
| I said if done properly. My point is that in principle
| using a natural image prior to 'enhance' an image is not
| a totally crazy thing.
| dathinab wrote:
| It isn't (EDIT: isn't crazy).
|
| But there is no good way to reliably know if it's done
| well for this specific image.
|
| So it can't be trusted.
|
| Which doesn't mean it's worthless.
|
| It's kinda like someone saying "I think I maybe have seen
| <something> when passing by but I didn't pay attention."
|
| It's not as trustworthy, since the human mind is fickle, but
| it can be a good starting point.
| Dah00n wrote:
| Sure, if it is an option applied after the fact by the user
| editing the photo. Any camera that does this automatically
| is utterly broken. Otherwise, where is the line? Some
| sharpening? Adding bigger boobs by default? Making people
| less black?
| peterkelly wrote:
| Autotune for photography.
| lattice_epochs wrote:
| This just reminds me of the early iPhone panorama stitching.
| aloer wrote:
| I upgraded from an iPhone X to a 13 Pro when it was released and
| since then it's been a very mixed experience. The range of
| quality is incredibly frustrating.
|
| I enjoy the speed with which I can take pictures that are not
| blurry. But in most cases I wish I could reduce the "AI
| aggressiveness" since they all look so artificially sharp. But
| that's only part of it.
|
| I have pictures of my family this Christmas where we posed in the
| same location, same light and with just a few seconds apart and
| the pictures look completely different when it comes to the
| colors.
|
| "Different AI interpretation" I tell those around me that haven't
| come in contact with computational photography yet.
|
| Then I apologize, saying that I can't know how the picture will
| turn out before it's taken. And that there is no way to
| reanalyze. No way to feed RAW back to "the AI".
|
| But such is the future...
|
| There's a picture of me and my partner against a perfectly blue
| sky. I am a few cm behind her. She looks normal while my face
| looks half transparent / whitened like a ghost in the sky.
|
| There are pictures of text and logos on billboards and such
| that are so smooth it's as if they were photoshopped on top.
|
| I often tell those people that not only do we need to accept that
| photos now are not the same concept as photos in the past, we
| also have to accept that the next iOS update could change the
| algorithm and photos from this Christmas will be different to
| photos from next Christmas even when shot with the same camera.
| Frustrating.
| dehrmann wrote:
| Remember that bit in the Kyle Rittenhouse trial about image
| resizing?
___________________________________________________________________
(page generated 2021-12-31 23:02 UTC)