[HN Gopher] Super Resolution
       ___________________________________________________________________
        
       Super Resolution
        
       Author : giuliomagnifico
       Score  : 269 points
        Date   : 2021-03-17 06:53 UTC (1 day ago)
        
 (HTM) web link (blog.adobe.com)
 (TXT) w3m dump (blog.adobe.com)
        
       | bryanlarsen wrote:
       | This sounds similar to what Nvidia's Deep Learning SuperSampling
       | (DLSS) is doing _in real time_. It boggles the mind.
        
       | noahatwi wrote:
        | There are many open source projects on GitHub which achieve
        | comparable results (with the added flexibility of being able to
        | train your own model).
        
       | sturza wrote:
       | CSI technology
        
       | zokier wrote:
        | I would have liked it if they had more comparisons to ground
        | truth images instead of resampled ones. The foliage and bear
        | comparisons also look like the "super resolution" images had the
        | contrast boosted, which is either an awkward artifact of the
        | scaling or misleading pre/post-processing.
        
       | [deleted]
        
       | ISL wrote:
       | Previous discussion on the subject, five days ago:
       | https://news.ycombinator.com/item?id=26448986
        
       | gitpusher wrote:
       | ENHANCE!!
        
       | bobsoap wrote:
       | Finally, reality catches up with spy movies and police-procedural
       | TV shows.
        
       | turrini wrote:
        | How is this different from Gigapixel AI?
       | 
       | https://topazlabs.com/gigapixel-ai/
        
         | sosuke wrote:
         | I enjoyed this review
         | https://stephenbayphotography.com/blog/adobe-super-resolutio...
         | 
          | Both options won some of the comparisons.
        
           | zokier wrote:
            | That is a really good comparison, although there might be a
            | bit of confirmation bias on my part. For the most part I felt
            | like the improvements are not really worth it compared to the
            | artifacts when it fails. The rusty car texture is maybe the
            | one place where it seemed to really make a distinct
            | improvement.
        
         | kthartic wrote:
         | Does it have to be? I think the point is it's built into Adobe
         | products now, which will be fantastic for Photoshop/Lightroom
         | etc users.
         | 
          | Unless ofc you're genuinely curious as to how it's different
          | from Gigapixel and not just knocking Adobe :)
        
         | groby_b wrote:
          | Looking at the results so far, it's strictly better. (Less
          | noisy, fewer subjective errors on fine detail.)
         | 
         | If I want to up-rez strictly for printing purposes, the PS one
         | looks like the winner. But, obviously, it's subjective.
        
       | psychomugs wrote:
        | A great misinterpretation of photography is that it's an
        | objective medium. Any combination of lenses and film stocks (or
        | equivalent) is going to produce but a flat, skewed
        | representation of the three-dimensional world; it's been
        | interpreted before anyone performs any processing, computational
        | or analog.
       | 
       | Susan Sontag's "On Photography" is a great read on this topic for
       | anyone marginally interested in not just photography, but art in
       | general.
        
         | LegitShady wrote:
         | >Any combination of lenses and film stocks (or equivalent) is
         | going to represent but a flat, skewed representation of the
         | three-dimensional world
         | 
          | This is no different from any other sensor. It doesn't mean
          | replacing data with guesses is better than the sensor's
          | representation of the world, which is the issue at hand today.
        
           | strogonoff wrote:
            | An interesting thing is happening here.
           | 
            | Previously there was a clear line between scene-referred
            | image data, which was treated as an objective record of a 2D
            | slice of a 3D world by way of measuring light, and output-
            | referred image data--one of the countless lossy adaptations
            | of that data to fit the limitations of some particular medium
            | (display, paper, etc.) in order to be actually viewed.
           | 
           | The scene- to output-referred data conversion is where
           | objectivity inevitably went out the window, but not earlier--
           | the original scene-referred data was mostly treated as
           | immutable.
           | 
           | What these guys are doing actually happens at the demosaicing
           | stage, and from what I understand the resulting "super
           | resolution" image is still scene-referred--but it isn't
           | representing the actual captured light anymore! In other
           | words, we'll have raw images that are partially "guesses" and
           | no longer an objective record.
           | 
           | This isn't necessarily good or bad, but is somewhat of a
           | paradigm shift I'd say.
           | 
           | As a side note, I wish Adobe released the mechanism so that
           | it could be made one of the demosaicing methods available in
           | open-source raw processors, but I take it this won't be
           | likely.
        
       | nailer wrote:
       | One of those happy moments where science fiction becomes real:
       | https://blog.adobe.com/hlx_ea7b90bf2b9492a9fdfdcbe74b3197ca1...
        
       | choppaface wrote:
       | I've used this a few times already and it works perfectly about
       | 80% of the time. If your picture has a lot of grain or low-level
       | noise it's just going to make the noise worse, then you throw in
       | a median to de-noise and you've lost the benefit. But it's
       | otherwise a nice tool to have, especially for older low-res
       | photos (like 800x600 stuff you want to print).
       | 
        | I think that longer term, techniques like neural rendering will
        | make super resolution less relevant. If you can re-create a 3D
        | scene from a single photo or otherwise reconstruct the photo in
        | a less resolution-dependent way, then playing the super
        | resolution game is less interesting (for users and researchers
        | alike).
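The grain trade-off described above is easy to see in a toy experiment: a median filter suppresses noise, but it also wipes out exactly the kind of single-pixel detail an upscaler is supposed to recover. A pure-NumPy sketch (the 3x3 filter is a generic stand-in, not Photoshop's actual denoiser):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter built from shifted copies (reflect-padded edges)."""
    p = np.pad(img, 1, mode="reflect")
    h, w = img.shape
    shifts = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(shifts), axis=0)

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16] = 1.0                                  # one-pixel-wide line: "fine detail"
noisy = clean + rng.normal(0.0, 0.2, clean.shape)   # film-grain-like noise

denoised = median_filter3(noisy)
```

Here the grain is suppressed, but the one-pixel line is erased along with it, which is the point above: denoise after upscaling and you can lose the benefit.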
        
       | natch wrote:
       | Pixelmator Pro (happy customer here) has great superresolution
       | without all the cloud subscription baggage. I think it's fair to
       | make comparisons, which I will leave to those who have CC
       | subscriptions, but anyone doing so should realize that Adobe is
       | being compared to a moving target as outside options and even DIY
       | options are only getting better.
       | 
        | Yes, we're not talking about accuracy here, just perceived
        | resolution; no need to hammer on that.
        
       | TheMagicHorsey wrote:
       | When I see software enhanced photography like this, and look at
       | the relatively primitive processing that is happening on my DSLR
       | and my high priced mirrorless cameras, I realize that despite
       | their huge sensors and amazing glass (which I paid a small
       | fortune for), they will soon be outclassed by the simple
       | smartphone in my pocket.
       | 
        | My wife routinely shoots photos on her Pixel 3 that get a better
        | response in our family WhatsApp group than the painstakingly
        | post-processed DSLR shots I create and post.
       | 
       | This could be an indictment of my failures as a photographer. Or
       | perhaps my family has no taste in photos. But it's also entirely
       | possible that a Pixel 3 is all the camera you really need for
       | family documentary work ... and I've wasted so much money on
       | unnecessary hobby gear.
        
         | zokier wrote:
         | Well, is it a hobby, or is it family documentary? Feels like
         | similar sentiment if you said that money spent on woodworking
         | gear is wasted because family prefers IKEA furniture.
        
       | nnmg wrote:
       | For anyone interested in 'real' super resolution, we use these
       | techniques to overcome the diffraction limit in microscopy (my
       | field is neuroscience):
       | 
       | - https://en.wikipedia.org/wiki/Super-resolution_microscopy
       | 
       | - Stimulated emission depletion microscopy (STED):
       | https://en.wikipedia.org/wiki/STED_microscopy
       | 
        | - Stochastic optical reconstruction microscopy (PALM/STORM)
       | 
        | - Structured illumination microscopy (SIM)
       | 
       | Here is one of my favorite STED imaging papers, looking at the
       | skeleton of neurons:
       | https://www.sciencedirect.com/science/article/pii/S221112471...
        
         | kicat wrote:
         | Yes, I'm disappointed by the name collision. Is there a tool
         | that makes "real" superresolution photos easy? Is that built
         | into photoshop as well under a different name?
        
           | nnmg wrote:
            | Unfortunately not. For real superresolution (i.e. resolving
            | below the diffraction limit of light), all the current
            | methods require expensive (and very dangerous) lasers and
            | microscopes with all sorts of optics widgets, mirrors and
            | computers. Lots of 'high resolution' imaging tools are
            | available for cameras, as well as some AI systems that will
            | make up data for you so it looks better!
        
         | fluidcruft wrote:
          | Wow! The 3D-SIM of the nuclear envelope in the Wikipedia
          | article was amazing, but seeing the structure in the neuron is
          | astounding. Do you know how long imaging takes for this? I
         | assume the post-processing is slow but will video be possible
         | someday?
        
           | nnmg wrote:
           | I am least familiar with SIM, but you can do live-cell SIM
           | imaging for sure. The processing is a bit computationally
           | intensive but not so bad (and can always be done post-hoc).
           | 
           | The big thing you have to look out for is the light intensity
           | killing the cells or bleaching your signals. One of our
           | collaborators is actively working on on-the-fly SIM
           | processing for live cell imaging.
           | 
            | A quick glance at PubMed suggests 11 Hz was doable
            | several years ago
            | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2895555/ and
            | this (sorry, paywalled; I've heard Sci-Hub has the paper...)
            | https://pubmed.ncbi.nlm.nih.gov/30478322/
           | 
           | STED is promising for live imaging too. Lots of beautiful
           | pictures out there!
        
         | knuthsat wrote:
          | Another "real" technique I've seen is having sensors where the
          | individual phototransistors are laid out not in a nice
          | periodic grid pattern but in an aperiodic pattern like Penrose
          | tiling (phototransistors at the vertices of the kites/darts).
          | 
          | The interpolation techniques (making a photo out of the
          | hardware input) manage to get 10-15x more resolution out of
          | that sensor layout compared to a normal grid.
          | 
          | You also avoid moire patterns that way.
        
       | beyondcompute wrote:
        | I wish they would now add a subscription plan or pricing model
        | for people who use the software only a few hours per month.
        
         | rhacker wrote:
         | Back in the day when they first had their sub model it was
         | soooo much cheaper than it is now :(. Here's hoping some guy
         | that writes super-resolution ML models on the side is a big fan
         | of Gimp.
        
           | dwrodri wrote:
           | On a semi-serious note: would GIMP accept a merge request
           | that depends on something like LibTorch[1] or the huge binary
            | blobs like the ones produced by TFLite? I say this as an
            | enthusiastic hobbyist in the ML space who is always looking
            | for productive ways to procrastinate on my grad research...
           | 
           | [1] = https://pytorch.org/tutorials/advanced/cpp_export.html
        
             | rhacker wrote:
             | I randomly just found this but I don't know if it's
             | available or where it's hosted:
             | 
             | https://deepai.org/publication/gimp-ml-python-plugins-for-
             | us...
             | 
             | edit:
             | 
             | https://github.com/kritiksoman/GIMP-ML
        
             | gmiller123456 wrote:
             | GIMP allows plugins for features that don't belong in the
              | main application. I've found that a lot of researchers
              | have written GIMP plugins for their work; I'm under the
              | impression that that's how many of them do their testing.
        
           | lorenzhs wrote:
           | There are non-Adobe alternatives with ML super-resolution
           | features that are much cheaper. For example, Pixelmator Photo
           | can do it for $8 (iPad, no subscription). They have a Mac
           | version as well but I don't use macOS so I can't judge that.
        
       | heroprotagonist wrote:
       | This sounds like the same approach as Gigapixel AI from Topaz
       | Labs.
       | 
       | I haven't tried Gigapixel but I have used Topaz' Video Enhance
       | AI, which is phenomenal. I've been using it to upscale old TV
       | shows which never got an HD remaster, to UHD.
       | 
        | Right now it's running through the first episode of Firefly,
        | converting from 540p to 2160p. (The Blu-ray rip was basically
        | upscaled to 1080p from its original production, so I first
        | converted it to 540p in Handbrake with zero noticeable loss in
        | quality, using a near-lossless compression factor; this gives
        | better upscaling.)
       | 
       | https://i.imgur.com/hcRYM5n.jpg
       | 
       | When it's done I'll run it through Flowframes for framerate
       | interpolation. Then maybe another pass in Handbrake to figure out
       | an optimal size for the end file.
       | 
       | Then I'll run through the rest of the season using the same
       | settings I tested with this first episode.
        
         | zokier wrote:
          | I might be old and grumpy, but I prefer the left image to the
          | right one. The right one is watercolory and overly smoothed,
          | like a cheap beautify photo filter.
        
         | parhamn wrote:
         | That's cool, thanks for sharing. How long does a conversion
         | like that usually take?
        
           | PaulBGD_ wrote:
           | In my personal experience with Topaz and a 3060 ti, usually
           | 4-5 frames per second. Although it depends on the input and
           | output resolutions.
        
         | tokamak-teapot wrote:
         | It looks like the character changed into a silk version of his
         | shirt with no chest pocket.
        
           | heroprotagonist wrote:
           | Yeah, I'm not sure I have the settings quite right on this
           | one yet. There are several AI models to choose from to get an
           | optimal result, some of which have configurable parameters to
           | control for this.
           | 
           | I noticed with this model that really fine lines will have a
           | tendency to get smoothed out a little. There's a similar
           | model which should pull a little more detail but typically
           | this one seems to work best. It's less noticeable once the
           | video is in motion, compared to a still image.
           | 
           | I also probably removed too much grain from this, hence the
           | more 'silky' look. It's nice for skin but less so for
           | textures.
        
         | liuliu wrote:
         | Videos can use inter-frame information to help infer sub-pixel
         | details.
         | 
          | This post about "Super Resolution" is interesting because it
          | starts with the RAW format (which contains information about
          | the camera's sensor arrangement). Hence, the machine-learned
          | model should not only memorize artificial details (what hair
          | should look like, what a tree leaf should look like, etc., and
          | use that to "hallucinate" higher-resolution details--I like to
          | call this "hallucination" for that reason), but also the
          | relationships between the complex interference of the
          | different sensors in their corresponding arrangements.
          | 
          | You can read more about the RAW format and why exposing RAW
          | for photography is exciting (on everyday cameras, i.e. your
          | phone) in this post: https://blog.halide.cam/understanding-
          | proraw-4eed556d4c54
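Since the comment above leans on RAW preserving the sensor's color-filter arrangement, it may help to see what plain demosaicing does with that arrangement. A minimal bilinear demosaic for a hypothetical RGGB Bayer layout (a toy sketch in pure NumPy, nothing like Adobe's ML demosaic):

```python
import numpy as np

def box_sum3(a):
    """Sum of each 3x3 neighbourhood, with reflected edges."""
    p = np.pad(a, 1, mode="reflect")
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(mosaic):
    """Toy bilinear demosaic of an RGGB Bayer mosaic (even-sized input).

    Each sensor site measures one colour only; the two missing channels
    at every pixel are filled in by averaging the nearest sites that did
    sample that channel."""
    h, w = mosaic.shape
    r = np.zeros((h, w), bool); r[0::2, 0::2] = True   # red sites
    b = np.zeros((h, w), bool); b[1::2, 1::2] = True   # blue sites
    g = ~(r | b)                                       # green sites (2 per 2x2)
    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate((r, g, b)):
        total = box_sum3(np.where(mask, mosaic, 0.0))
        count = box_sum3(mask.astype(float))
        rgb[..., c] = total / count
    return rgb

# A flat grey scene survives the round trip exactly; edges and texture are
# where the interpolation has to guess.
flat = demosaic_bilinear(np.full((8, 8), 0.5))
```

Even this simplest method is already "making up" two of the three channel values at every pixel, which is why the demosaicing stage is a natural place to insert a learned model.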
        
       | sbarre wrote:
       | We talked about this a few days ago?
       | 
       | https://news.ycombinator.com/item?id=26448986
        
       | BiteCode_dev wrote:
        | Super resolution is adding data that is not there; it's asking
        | an algo to produce part of the art.
        | 
        | At what point do we go from picture to painting?
        
         | alextheparrot wrote:
         | Anyone who edits photos in Lightroom understands that modern
         | photography captures both reality and artistry.
         | 
         | My goal when editing photos is to make them more clearly
         | express how I felt or how I saw. This is often quite divorced
         | from what shows up on the back screen of my camera.
        
           | l-lousy wrote:
           | It's also very useful to be able to take a tighter crop of an
           | existing image without worrying about pixelation
        
           | perardi wrote:
           | I think photography has always been about artistry. You are
           | taking a 2-dimensional slice of a 4-dimensional world.
           | Interpretation is inherent.
           | 
           | Ansel Adams was surely no stranger to post-processing.
           | 
           | https://photofocus.com/photography/a-look-inside-ansel-
           | adams...
        
             | yccs27 wrote:
             | There is a difference between normal post-production
             | (cropping, defining colors, adjusting brightness, removing
             | noise, ...) and adding details that just aren't there in
             | the raw image.
        
               | fastball wrote:
               | How so?
        
               | mrob wrote:
               | One involves geometric meaning (e.g. edges, Bayer
               | artifacts, clipped highlights), and the other involves
               | semantic meaning (e.g. sky, faces, eyes). Dodging and
               | burning is in the latter category and is equally
               | deceptive.
        
               | fastball wrote:
               | Noise removal isn't semantic meaning?
        
           | wlesieutre wrote:
           | That was true of film photography and development too
        
         | pfortuny wrote:
          | A good experiment (which I cannot do): take a true photo of
          | somebody, reduce it in size and then "super-resolve" it. How
          | much must you reduce it in order to produce a different
          | "person" (or a non-person)?
        
           | Kelteseth wrote:
            | You can also repeat this indefinitely: 1. Scale down. 2.
            | "Super-resolve". 3. Goto 1. In Germany, we have this game
            | "Stille Post" where you whisper a phrase to someone and then
            | multiple children try to whisper the exact phrase to the
            | next. Most of the time a completely different phrase comes
            | out in the end.
        
             | PEJOE wrote:
             | In the US we also play the same game, but call it
             | "Telephone."
        
               | GeneralTspoon wrote:
               | And we call it "Chinese Whispers" in the UK
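The Stille Post / Telephone loop above can be simulated with any lossy downscale/upscale pair. In this sketch a 2x average-pool plays "scale down" and nearest-neighbour doubling stands in for "super-resolve"; with this deterministic pair the damage all happens on the first round trip and then stabilises, whereas a learned model that invents detail could keep drifting each round:

```python
import numpy as np

def downscale2(img):
    """Step 1: lossy 2x downscale by 2x2 average pooling."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale2(img):
    """Step 2: stand-in "super-resolve" -- nearest-neighbour 2x upscale."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

rng = np.random.default_rng(1)
original = rng.random((16, 16))
img = original.copy()
errors = []
for _ in range(5):                     # step 3: goto 1
    img = upscale2(downscale2(img))
    errors.append(np.abs(img - original).mean())
```

The information lost in the first pass never comes back, no matter how plausible later "resolves" look.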
        
         | 6gvONxR4sf7o wrote:
         | Presumably you go from picture to painting the moment you open
         | photoshop.
         | 
         | Photography for record keeping/science and photography for
         | aesthetics/artistry diverge long before you get to techniques
         | like super resolution. Which is still a fuzzy boundary because
         | anyone who has taken a picture including a sunny sky can tell
         | you raw photos generally don't capture how it looks to your
         | eyeballs.
        
         | dbrueck wrote:
         | Yeah, although technically the goal behind things like 'super
         | resolution' is to add data that is not there _anymore_ but
         | _probably was there before_.
         | 
         | i.e. you had the original scene that was captured by a digital
         | camera (a lossy operation) and then saved as an image file
         | (often also a lossy operation), and then a tool like this makes
         | an educated guess as to what information was lost in the 1st
         | and 2nd steps.
        
           | roywiggins wrote:
            | You have to demosaic things _anyway_ (as mentioned in the
            | article), so it's not like you can escape algorithmic
            | fudging of details.
        
         | advisedwang wrote:
         | My worry is when this gets used for something forensic.
         | 
          | I imagine this will tend to reproduce things in the dataset:
          | e.g., upscaling blurry text may produce fonts that it has
          | memorized rather than the original, upscaling a feather will
          | fill in details like the feathers of more common birds, and
          | upscaling blurred-out numbers will pick some numbers at random
          | [1].
         | 
          | We need to make sure people don't rely on these details: in
          | courts, in HR reviews, when Reddit sleuths try to investigate
          | an incident, when someone looks for a cheating partner, etc.
         | 
         | [1]
         | https://www.theregister.com/2013/08/06/xerox_copier_flaw_mea...
        
           | BiteCode_dev wrote:
            | Yeah, the "uncrop" joke from Red Dwarf doesn't seem like a
            | joke anymore (https://www.dailymotion.com/video/x2qlmuy).
            | 
            | And if it's used in security cameras, it's going to be a
            | disaster. Those things have super low resolution, software
            | will be cheaper than upgrading, and the models have huge
            | biases.
           | 
           | Tons of fun.
        
           | andi999 wrote:
           | Gosling will probably go to jail for a lot of crimes...
        
           | natch wrote:
           | You're worrying about the wrong thing. In formal settings,
           | the problem will be taken care of.
           | 
           | The bigger problem is informal settings. Propaganda, for one.
        
             | pfortuny wrote:
              | Although you have already replied to the DNA comment, one
              | must always be careful with bureaucracies. They will drag
              | on with some preconception, and the citizen always loses.
        
             | BiteCode_dev wrote:
             | The way DNA testing is misused, I wouldn't be so sure about
             | formal settings.
        
               | natch wrote:
               | Good point! You have me reconsidering what I said.
        
           | michaelpb wrote:
            | Yeah, so much of forensic "science" is notoriously flawed
            | that this seems like a likely addition to the usual junk-
            | science pantheon of "gunshot analysis, footprint analysis,
            | hair comparison and bite mark comparison" -
            | https://innocenceproject.org/forensic-science-problems-
            | and-s...
            | 
            | Personally, I think one reason we still do this has a lot to
            | do with detective shows being so ridiculously popular that
            | people think it's some sort of scientific process, when it's
            | not. As a thought experiment: we'd probably have flat-
            | earther-ism as the dominant belief if 15 out of 20 broadcast
            | TV shows were dedicated to glorifying flat-earther-ism. This
            | means the most dangerous thing about "Super Resolution" is
            | that the public has already been "primed" by cop shows with
            | the "enhance" feature.
        
           | xadhominemx wrote:
            | I am not worried about this at all. It's not likely to
            | randomly incriminate a real person.
        
             | speeder wrote:
             | I am from Brazil.
             | 
             | On the first day of trials of deep-learning based facial
             | recognition here, a random person was arrested because the
             | algorithm confused that person with another one.
             | 
              | Even more stupid is that the person with the "outstanding
              | warrant" was actually ALREADY in prison.
             | 
             | So yes, AI managed to arrest the same person, twice, one
             | time the real person, one time a random look-alike.
        
               | xadhominemx wrote:
               | Seems like a different situation - the facial recognition
               | algo was wrong. Not the same as an AI resolving a face
               | that resembles someone and then prosecuting that person
               | on the basis of the image.
        
               | koluna wrote:
               | You realize that the idea is the same, right? AI made an
               | incorrect determination and people ran with it.
        
             | sennight wrote:
             | I remember that time a guy's life got turned upside down
             | because his fingerprints "matched" those found on a bomb.
             | Despite the fact that he had no motive, disposition, or
             | access - they were determined to convict, because to do
             | otherwise would be to admit that fingerprint analysis
             | doesn't enjoy the scientific foundation that DNA evidence
             | does.
             | 
             | So yeah, real people have been harmed by bad matching
             | algorithms.
        
             | fsflover wrote:
             | Yes, it is: https://news.ycombinator.com/item?id=26504144
        
               | xadhominemx wrote:
               | Ok that's pretty far removed from a prosecutor relying on
               | that image though
        
             | vbezhenar wrote:
              | Imagine a low-resolution face photo. The police find 3 men
              | whose faces are similar to that photo. Now they super-
              | resolve that photo and suddenly the first person looks
              | very similar to it, while the second and third persons do
              | not. The first person is in danger because of an algorithm
              | that was trained on some specific photos.
        
               | xadhominemx wrote:
                | One can imagine a lot of future scenarios. In fact, that
                | describes an entire genre of story writing known as
                | "science fiction".
        
             | michaelpb wrote:
             | Out of curiosity, do you think forensic science methods in
             | general are likely to randomly incriminate people?
        
         | kevincox wrote:
         | It is _sort of_. Their argument is that by taking into account
         | the subpixel pattern of the sensor they can actually extract
         | more detail than is readily visible in the picture.
         | 
          | Basically you can imagine that the blue subpixel is always at
          | the top-left of the pixel. If you shifted the blue down and
          | right by half a pixel you would have a more "accurate"
          | reproduction. In this way you can add a new pixel with the
          | blue value closer to the right spot, then interpolate the blue
          | of the original.
         | 
         | Of course you can also do logic such as detecting lines of
         | different lightness and applying those on top.
         | 
          | So yes, especially with their machine learning, they are
          | adding new detail, but it is also likely detail that was
          | already there but could not be conveyed at the lower
          | resolution. I wonder how different this would be from the
          | simple approach of realigning the subpixels on a higher-
          | resolution image and interpolating the "missing" subpixels.
          | This approach may look better but wouldn't add any data.
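The half-pixel realignment described above can be sketched in one dimension. Assume, hypothetically, that the "blue" sample of pixel i was actually measured at position i + 0.25 rather than at the pixel centre; honouring that offset when resampling onto a 2x grid beats naively duplicating samples:

```python
import numpy as np

n = 8
measured_at = np.arange(n) + 0.25     # hypothetical off-centre sample sites
samples = measured_at.copy()          # the scene is a linear ramp f(x) = x

fine = np.arange(2 * n) / 2.0         # positions on the 2x output grid

naive = np.repeat(samples, 2)         # ignore the offset: duplicate each value
realigned = np.interp(fine, measured_at, samples)  # honour the half-pixel shift

err_naive = np.abs(naive - fine).mean()
err_realigned = np.abs(realigned - fine).mean()
```

No detail is invented here: the gain comes purely from knowing where each sample was taken, which is exactly the information a raw file preserves.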
        
         | TehCorwiz wrote:
          | Photography has always been more artistry than reality. From
          | film selection, lens choice, framing and cropping, color
          | correction, and "dodging" and "burning" details in or out, it
          | has always been a compromise between what the camera sees
          | (which itself may not be reality if you're using physical in-
          | front-of-the-lens filters or other distortive techniques) and
          | what the photographer wants to convey. That's one of the
          | reasons photojournalism often holds itself to ethical
          | limitations with regard to how a photo can be manipulated
          | after it's been taken. I'm almost certain that this kind of
          | thing would at least walk that fine line, if not fall right
          | over it.
        
           | falcrist wrote:
           | On one hand as Ansel Adams said "You don't take a photograph,
           | you make it.", and that still holds true today. You're
           | influencing the picture when you choose your camera, your
           | lens, your sensitivity (ISO), your color mode, your white
           | balance, your raw software, your editor, etc etc all the way
           | from the camera to the print.
           | 
            | On the other hand, there certainly _is_ a difference between
            | working with the information (pixels) you've captured, and
            | inventing information by either drawing on the image or
            | creating new data.
           | 
           | This methodology falls squarely into the gray area between
           | those two.
        
           | diarrhea wrote:
           | Exactly. That is also why claiming to only do "SOOC JPG" is
           | ill-informed. People claiming it don't understand the
           | process: to even produce a JPG, there _has_ to be a process
           | of interpretation already taking place.
           | 
           | When I volunteered for a small newspaper I did my usual,
           | sometimes significant, editing (Lightroom-level, not PS) and
           | didn't see anything wrong with it (neither did they; of
           | course edits look better than plain JPGs). But I can
           | appreciate how this becomes much more important with
           | increasing range. Imagine if Pete Souza spoiled 8 years'
           | worth of Obama presidency imagery just because he edited them
           | in some obscure way.
        
           | mikepurvis wrote:
           | Never mind the manipulations which can occur even before the
            | shutter is ever pressed-- I'm having trouble finding links
           | now, but I know there was controversy a few years ago about a
           | picture of bombing rubble with an object (a dress form or
           | something, maybe?) standing up in the middle of it,
           | apparently by chance which created a very artistic contrast,
           | but there were later questions about whether the photographer
           | had in fact arranged the object in that way rather than
           | discovering and capturing a preexisting scene.
           | 
           | EDIT: Still can't find it, but here's a list which includes a
           | number of other wartime photos which are proven or suspected
           | to have been staged in various ways:
           | https://militaryhistorynow.com/2015/09/25/famous-
           | fakes-10-ce...
        
         | ramraj07 wrote:
         | Not true, there's always far more data in an image than a
         | quick glance can decipher; the astronomy field obviously
         | pioneered numerous methodologies over the decades (many of
         | which biologists do more shoddily, speaking as a biologist
         | who did so). In the end I'd argue a lot of these "super
         | resolution" methods are just glorified deconvolution. Not
         | that it's a knock on these methods, but apparently it's not
         | cool to call deconvolution deconvolution anymore.
        
           | nnmg wrote:
           | Most are, but in biology/physics STED[1] and STORM are
           | physics based methods for overcoming the diffraction
           | limit[2]. STED is pure physics, no math/deconvolution/AI
           | tricks.
           | 
           | [1] https://en.wikipedia.org/wiki/STED_microscopy
           | 
           | [2] https://en.wikipedia.org/wiki/Super-resolution_microscopy
        
             | ramraj07 wrote:
             | They use extra tricks at the image capture level to
             | supercharge how much information you can load into the
             | captured images (and then decipher them), but the methods
             | are still related, at least in STORM: you're effectively
             | deconvolving lots of sparse images and then merging them!
             | Gaussian fitting of point sources is literally
             | deconvolution, right? You're just estimating the PSF as a
             | 2D Gaussian!
        
               | nnmg wrote:
               | I am not qualified to get too in the weeds on the
               | physics, but 'Resolution' is... complicated. Usually,
               | when we talk about resolution we are talking about the
               | ability to distinguish two points.
               | 
               | The 'resolution limit' (Abbe diffraction limit [1]) is
               | related to a few things, but practically by the
               | wavelength of the excitation light and the numerical
               | aperture (NA) of the lens (d = wavelength/2NA). When we
               | (physicists/biologists) say 'super resolution', we mean
               | resolving things smaller than what was previously
               | possible based on the Abbe diffraction limit. So rather
               | than only being able to resolve two points separated by a
               | minimum of 174nm with a 488nm laser and a 1.4NA
               | objective, we can resolve particles separated by as
               | little as 40-70nm with STED (but it varies in practice).
               | 
               | STED _does not_ accomplish this by estimating PSFs and
               | fitting Gaussians. It uses a doughnut-shaped depletion
               | laser to force surrounding fluorescence sources into a
               | 'depleted' state, and an excitation laser to excite a
               | much smaller point in the middle of the depletion (see
               | the doughnut in the STED Wikipedia page). Stefan Hell
               | and Thomas Klar first demonstrated the technique in
               | 1999, and Hell shared the Nobel Prize in Chemistry for
               | it in 2014 [2].
               | 
               | I know PALM/STORM uses statistics, blinking fluorescence
               | point sources, and long imaging times to build up a super
               | resolution image based on the point sources and
               | computational reconstruction.
               | 
               | Not as familiar with that one or SIM, but I know the
               | "Pure physics/optics" folks I work with regard STED as
               | the most pure physics based one that doesn't rely on
               | fitting, deconvolution, or tricks (not that any of that
               | is bad or wrong!).
               | 
               | [1] https://en.wikipedia.org/wiki/Diffraction-
               | limited_system#The... [2]
               | https://en.wikipedia.org/wiki/STED_microscopy
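
The Abbe limit arithmetic above can be checked directly. A quick sketch of the d = wavelength/2NA formula (my own, not from the thread):

```python
# Abbe diffraction limit: d = wavelength / (2 * NA)
def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable separation for a diffraction-limited system."""
    return wavelength_nm / (2 * numerical_aperture)

# The example above: 488 nm excitation with a 1.4 NA objective
d = abbe_limit_nm(488, 1.4)
print(f"{d:.0f} nm")  # ~174 nm, matching the comment's figure
```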
        
           | xadhominemx wrote:
           | No these are not deconvolution at all. They are using AI to
           | add information to the image.
        
             | ramraj07 wrote:
             | Superresolution in consumer cameras is definitely adding
             | detail to the image using prior information about the
             | universe (hair has a pattern, what looks like an edge has
             | to be sharper than what the image says, etc). This is
             | definitely the questionable artsy aspect of this superres
             | boom. But the comment I answered made a more specific
             | claim, that you can't extract information that isn't in
             | the image, and that's definitely not true either. Modern
             | superres tech (especially the non-deep-learning kind) can
             | extract more info from the image if you systematically
             | account for the PSF of the camera, distortions, etc.
        
         | sweetheart wrote:
         | It's always been more like painting than anything resembling a
         | truthful representation of reality. In fact, the first ever
         | photograph [1] much more closely resembles a painting than
         | anything else, which is ironic.
         | 
         | All advancements have simply given us more control in how
         | painterly we render our photographs, but have never _really_
         | brought us closer to the truth.
         | 
         | [1]:
         | https://en.wikipedia.org/wiki/History_of_photography#/media/...
        
         | sosuke wrote:
         | Ansel Adams answering a similar question
         | https://youtu.be/Ml__B0l9GIs?t=1514
        
       | Aeolos wrote:
       | > "I have a early-2009 "octo" Mac Pro [...]" > > OS: macOS
       | 
       | Does this make anyone else a bit uncomfortable?
       | 
       | I don't think macOS is still receiving security updates on
       | that hardware. I'm all for using old hardware for as long as
       | it keeps working, but I would never browse the internet with
       | a vulnerable OS on a vulnerable processor (Spectre etc.).
       | 
       | Or am I missing something?
        
       | jiveturkey wrote:
       | > Using Super Resolution is easy -- right-click on a photo (or
       | hold the Control key while clicking normally) and choose
       | "Enhance..." from the context menu say "Enhance"
        
       | arnaudsm wrote:
       | Every time someone brings up super-resolution, I like to pull
       | up this hilarious example:
       | https://petapixel.com/2020/08/17/gigapixel-ai-accidentally-a...
       | 
       | Super-resolution is only guessing. It's ok for art, not for
       | critical tasks.
        
         | jonas21 wrote:
         | Lots of things are "only guessing." Auto color correction is
         | only guessing. Unsharp mask is only guessing. Smart selection
         | is only guessing. Content-aware fill is only guessing.
         | 
         | They're still useful tools to have in your toolbox as a
         | photographer or designer, even for critical tasks, and I don't
         | really see how this is different. There may be certain failure
         | cases, but everything has failure cases.
        
         | ryanwhitney wrote:
         | >Photographer Jomppe Vaarakallio has been a professional
         | retoucher for 30 years...
         | 
         | >To be clear, this isn't a knock on the Gigapixel software.
         | Vaarakallio tells PetaPixel that the software is "amazing" and
         | he uses it all the time.
        
           | danShumway wrote:
           | I don't think that it's a knock on the software, I think it's
           | a knock on the common interpretation of what that software is
           | doing.
           | 
           | Professional photo retouching is art. It's okay to use
           | Gigapixel for an artistic task, it's not OK to use it to
           | enhance a photo that you're going to show to a jury. That's
           | what GP means by 'critical': use cases where it matters
           | whether or not the pixels being added map to an objective
           | reality rather than an algorithmic guess about what would
           | look good.
        
         | natemo wrote:
         | Nitpicky, but I think "It's okay for x, not for y" when
         | describing nascent technology is a bit shortsighted.
         | 
         | Who knows how this evolves and what new applications people may
         | devise? For today, I agree: it's just art.
        
         | porphyra wrote:
         | A personal favourite is white Obama:
         | https://www.theverge.com/21298762/face-depixelizer-ai-machin...
        
         | timthorn wrote:
         | There are forms of super-resolution that certainly aren't
         | guessing. For example, you can take a video of a subject and
         | integrate over time, so that the motion of the subject over the
         | sensor allows you to infer sub-pixel detail.
         | 
         | https://www.cs.huji.ac.il/~peleg/papers/icpr90-SuperResoluti...
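
A toy illustration of that idea (my own sketch, not the linked paper's algorithm): if you know each capture's sub-pixel shift, samples from multiple low-resolution frames can be interleaved onto a finer grid.

```python
import numpy as np

# Fine-grid "ground truth" the sensor cannot resolve in one shot
scene = np.array([1., 5., 2., 8., 3., 7., 4., 6.])

def capture(scene: np.ndarray, shift: int) -> np.ndarray:
    """Simulate a sensor pixel averaging 2 adjacent fine samples,
    with the sensor offset by `shift` fine samples (sub-pixel motion)."""
    s = scene[shift:shift + 6]
    return s.reshape(-1, 2).mean(axis=1)  # 2x downsample with averaging

low0 = capture(scene, 0)  # one low-res frame
low1 = capture(scene, 1)  # same scene, shifted by half a sensor pixel

# Interleave the two frames onto the fine grid: double the sample density
merged = np.empty(6)
merged[0::2] = low0
merged[1::2] = low1
print(merged)  # recovers variation neither frame shows on its own
```

This is the principled, non-guessing version: every merged sample is a real measurement, just taken at a different physical offset.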
        
           | liuliu wrote:
           | They started their model with RAW format, so the model
           | should encode some interactions between red / blue / green
           | light sensors, and that can help generate genuine sub-pixel
           | details. OTOH, this is machine learning; unless you
           | specifically have some discriminators (just an idea) to
           | counteract it, you don't really know how much of this is
           | genuine sub-pixel detail and how much is hallucination.
        
           | Judgmentality wrote:
           | Turning a regular video into a super photo is different from
           | turning a regular photo into a super photo.
        
         | sorenjan wrote:
         | > Super-resolution is only guessing.
         | 
         | Machine learning is educated guessing based on previously
         | seen data. As mentioned by others, there are ways to do super
         | resolution that only use the data available. I can't think of
         | any that can upscale a single image, although I have vague
         | memories of having seen something about using moiré patterns
         | to infer the higher resolution texture of some features.
        
         | aktuel wrote:
         | There are serious use cases for super resolution in medical
         | imaging for example.
        
         | ska wrote:
         | One of those annoying things is that the name "super
         | resolution" stuck, here.
         | 
         | Originally super-resolution was a hardware technique, and not
         | "guessing". If you can [edit: this was poorly worded "control
         | an imager positioning"] control imaging with finer resolution
         | than the sensor has, you can take multiple images and
         | reconstruct a higher resolution image in a principled way for
         | say 2x resolution gain (cf super-resolution microscopy), also
         | some telescope systems. Some modern photographic systems
         | actually do this directly (piezo motors?) on the sensor.
         | 
         | Of course this only works if what you are imaging is reasonably
         | static over the time needed to take all the images.
         | 
         | You can do an approximate version of this with video, with
         | caveats because you don't control the motion. The key thing is,
         | though, you actually have more data to work with.
         | 
         | This idea ran in parallel with image processing people
         | attempting to estimate higher resolution from a single image
         | for a while, and unfortunately the terminology stuck in image
         | processing also. Something like resolution extrapolation is
         | probably better but that ship sailed ages ago.
        
           | anon_tor_12345 wrote:
           | Citation? I've read over 50 papers in MISR and SISR (I wrote
           | a lit review) and I have never seen mention of actual
           | hardware that would shift the imaging system; indeed such a
           | system would have been my dissertation topic if I hadn't
           | switched areas.
        
             | bobtail3 wrote:
             | Probably not exactly what he is talking about, but this
             | also sounds similar to dithering, where with repeated
             | measurements and random noise you can statistically
             | estimate the value of a signal below the quantization
             | level.
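
That effect is easy to demonstrate numerically. A toy sketch (mine, assuming uniform dither noise of about one quantization step):

```python
import random

random.seed(42)
TRUE_VALUE = 3.7  # lies between quantization levels 3 and 4

def measure(x: float, noise_amp: float = 1.0) -> int:
    """One quantized (integer) measurement with additive uniform noise."""
    return round(x + random.uniform(-noise_amp, noise_amp))

# Any single reading is just an integer near 4 -- but the noise makes 4
# come up more often than 3 in proportion to the 0.7 fraction, so the
# mean of many readings converges on the sub-quantization value.
n = 100_000
estimate = sum(measure(TRUE_VALUE) for _ in range(n)) / n
print(estimate)  # close to 3.7
```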
        
               | Thrymr wrote:
               | Indeed, it includes things like the Drizzle algorithm
               | that has been used by Hubble space telescope astronomers
               | for a while: https://www.stsci.edu/ftp/science/hdf/combin
               | ation/drizzle.ht...
        
             | [deleted]
        
             | vlovich123 wrote:
             | You can also do it in software by just recording IMU data
             | with high-precision timestamps (a lot of camera sensors
             | have a GPIO they can interrupt on, even every scan line)
             | and post-processing. There are cool techniques where they
             | can remove various rolling-shutter issues this way to get
             | global-shutter-like quality and remove camera-movement
             | induced motion blur. I haven't heard of it applied to
             | super res but I don't see why not. I think Google uses
             | similar techniques to implement their software HDR
             | solution, which takes 3 back-to-back snapshots at
             | different exposure levels and merges them.
        
               | bayindirh wrote:
               | Newer Sony mirrorless cameras embed gyroscope data into
               | videos for further stabilization of the image when IBIS
               | is not enough.
               | 
               | Theoretically, with 30FPS cameras like the Sony A1 and
               | said gyroscope data, you can create super resolution
               | images.
               | 
               | IIRC some of Olympus' handheld super resolution modes
               | use both shake and sensor shift to increase resolution.
        
             | wongarsu wrote:
             | Synthetic aperture radar [1][2] uses this principle. In
             | that case the imaging system is fixed on a moving aircraft
             | or satellite.
             | 
             | I think in general satellite imaging is a good place to
             | look for such implementations, since they have a naturally
             | and predictably moving imaging system.
             | 
             | 1: https://en.wikipedia.org/wiki/Synthetic-aperture_radar
             | 
             | 2: https://www.youtube.com/watch?v=u2bUKEi9It4
        
               | mNovak wrote:
               | A major difference here is that SAR uses phase
               | information, whereas to my knowledge optical techniques
               | are not doing that.
        
               | anon_tor_12345 wrote:
               | SAR is similar but not the same, since there you're
               | super-resolving in time rather than space. Also, in that
               | instance it's just conventional MISR since you're not
               | driving the imaging system (more information is being
               | passively collected as the target passes).
        
               | ska wrote:
               | SAR is similar to the video technique mentioned. Agree
               | it's not quite the same, but if the underlying
               | assumptions hold it's still more estimate than "guess".
        
               | oivey wrote:
               | I think it's more useful to think in terms of how well
               | sampled the observations are relative to the size of the
               | output space. SISR is very undersampled, and MISR is
               | oversampled. SAR reconstruction techniques can fall in
               | either bucket.
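
One way to make the under/oversampling point concrete (my sketch, not the commenter's): model each low-res capture as a linear downsampling operator on an unknown high-res signal, and compare how well-determined the stacked system is.

```python
import numpy as np

def downsample_matrix(shift: int, n_hi: int = 8, factor: int = 2) -> np.ndarray:
    """Linear operator that averages `factor` adjacent high-res samples,
    with the sensor offset by `shift` high-res samples."""
    n_lo = (n_hi - shift) // factor
    A = np.zeros((n_lo, n_hi))
    for i in range(n_lo):
        A[i, shift + factor * i : shift + factor * (i + 1)] = 1.0 / factor
    return A

A_single = downsample_matrix(0)                              # one capture
A_multi = np.vstack([downsample_matrix(s) for s in (0, 1)])  # two shifted captures

# SISR regime: 4 equations for 8 unknowns -- heavily underdetermined,
# so any reconstruction must lean on priors ("guessing").
print(np.linalg.matrix_rank(A_single))  # 4
# MISR regime: each distinct shift contributes independent equations,
# moving the system toward full rank.
print(np.linalg.matrix_rank(A_multi))   # 7
```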
        
             | CarVac wrote:
             | Certain Hasselblad cameras have had this for a long time,
             | and Pentax, Olympus, Panasonic, and Sony have it on various
             | models, using the sensor shift image stabilization to
             | implement it.
        
             | galago wrote:
             | I think what he's talking about is what Sony calls Pixel
             | Shift Multi Shooting. Other camera manufacturers do it too.
             | https://support.d-imaging.sony.co.jp/support/ilc/psms/ilce7
             | r...
        
               | anon_tor_12345 wrote:
               | Wow that is in fact exactly it. I'm actually quite
               | impressed they're able to shift the sensor array by
               | exactly one pixel (or even close to) since that's on the
               | order of microns.
        
               | RobLach wrote:
               | Some projector manufacturers shift a 1080p DMD between
               | 4 positions, running it at 4x the refresh rate, to
               | display a 4K image.
        
               | [deleted]
        
               | azornathogron wrote:
               | Piezo actuators can make very fine movements. You can get
               | positioning systems that can adjust position by
               | nanometers, even sub-nm [1].
               | 
               | Also, cameras have had stabilization systems for a while
               | now; I would assume they need similar pixel-scale
               | precision. Some cameras shift the lens, some shift the
               | sensor, but either way they need to shift the image-on-
               | sensor by a very small amount, and also do it very
               | rapidly.
               | 
               | [1] For example: https://www.pro-
               | lite.co.uk/File/psj_piezoelectric_nanopositi...
        
               | Stratoscope wrote:
               | In fact the Olympus implementation of this feature moves
               | the sensor by _half_ a photosite diagonally. They do it
               | by re-purposing the existing in-body stabilization
               | mechanism to move the sensor around.
               | 
               | User 'twic' posted a link to a very interesting article
               | that describes this and also explains the difference
               | between photosites and pixels:
               | 
               | https://chriseyrewalker.com/the-hi-res-mode-of-the-
               | olympus-o...
        
               | brudgers wrote:
               | Pentax did it with the K70 back in 2016.
        
               | yfkar wrote:
               | They also did it earlier in 2015 with Pentax K-3 II. Also
               | Olympus OM-D E-M5 Mark II had a similar feature in 2015.
        
               | [deleted]
        
             | twic wrote:
             | I just bought a camera that does it, so it definitely
             | exists.
             | 
             | https://chriseyrewalker.com/the-hi-res-mode-of-the-
             | olympus-o...
        
               | Stratoscope wrote:
               | That's a really informative article; thanks for posting
               | it.
               | 
               | I also have an Olympus E-M1 MkII, but I haven't tried the
               | high resolution mode yet. You just gave me a TODO item!
        
             | sorenjan wrote:
             | Google uses it in Google camera. They're not shifting the
             | sensor themselves, but they take advantage of the camera
             | shake users introduce by taking handheld photos.
             | 
             | https://ai.googleblog.com/2018/10/see-better-and-further-
             | wit...
        
               | anon_tor_12345 wrote:
               | Yes I'm very familiar with this but this is just MISR
               | i.e. purely a software solution (Peyman Milanfar is one
               | of the original researchers associated with MISR).
               | Fortunately elsewhere in this thread hardware
               | implementations have been demonstrated.
        
               | sorenjan wrote:
               | There are hardware implementations referenced in the blog
               | post, and in the linked published paper.
               | 
               | > In the early 2000s, Farsiu et al. [2006] and Gotoh
               | and Okutomi [2004] formulated super-resolution from
               | arbitrary motion as an optimization problem that would
               | be infeasible for interactive rates. Ben-Ezra et al.
               | [2005] created a jitter camera prototype to do
               | super-resolution using controlled subpixel detector
               | shifts. This and other works inspired some commercial
               | cameras (e.g., Sony A6000, Pentax FF K1, Olympus OM-D
               | E-M1 or Panasonic Lumix DC-G9) to adopt multi-frame
               | techniques, using controlled pixel shifting of the
               | physical sensor. However, these approaches require the
               | use of a tripod or a static scene.
               | 
               | https://arxiv.org/pdf/1905.03277.pdf
        
             | gnopgnip wrote:
             | Pentax DSLRs have this feature too. The same motors that
             | are used to control the sensor and prevent camera shake
             | are used to offset the sensor slightly while multiple
             | images are taken.
        
             | shock-value wrote:
             | Most modern mirrorless cameras have such hardware (usually
             | denoted "in-body image stabilization"). The "repurposing"
             | of it to acquire multiple captures at various slight
             | offsets and combine them intelligently is maybe a bit more
             | recent, but still pretty common at this point.
             | 
             | https://en.wikipedia.org/wiki/Image_stabilization#Sensor-
             | shi...
             | 
             | https://support.d-imaging.sony.co.jp/support/ilc/psms/ilce7
             | r...
             | 
             | https://www.nikonimgsupport.com/na/NSG_article?articleNo=00
             | 0...
             | 
             | https://www.canon.co.uk/pro/stories/8-stops-image-
             | stabilizat...
        
               | avereveard wrote:
               | I think the technique was used a lot earlier in
               | astronomy, where I encountered it the first time.
               | 
               | It's called stacking, and the subpixel offset comes
               | naturally from path distortion in the atmosphere itself.
               | 
               | I remember using RegiStax when the first batch of
               | consumer telezoom point-and-click cameras came out and
               | got a 30x long exposure of the moon, with, well,
               | mediocre results.
        
             | [deleted]
        
             | dev_tty01 wrote:
             | The iPhone 12 Pro physically shifts the sensor for image
             | stabilization, so precise positioning is possible.
        
           | Lio wrote:
            | I seem to remember this being done with the piezoelectric
            | vibrator ( _stop that sniggering at the back_ ) used to
            | shake the dust off sensors.[1]
            | 
            | I think Hasselblad had that as well as Olympus but I can't
            | quite recall what the systems were called.
            | 
            | https://www.engadget.com/2018-01-18-hasselblad-h6d-400c-medi.
            | ..
        
         | Datenstrom wrote:
         | Unfortunately someone is going to wrap up super-resolution
         | for critical tasks and sell it, likely causing many people
         | harm or at least inconvenience. I have already tried to talk
         | some companies out of using it for police/surveillance-type
         | work. People who do not understand the technology are
         | determined to use it, and someone is going to.
        
         | savant_penguin wrote:
         | What I find interesting about that is that after seeing the
         | face in the super-resolution image you can kinda see it in the
         | original
        
         | tshaddox wrote:
         | Is there any proof that Ryan Gosling's face (or perhaps a
         | photograph of Ryan Gosling's face) was in fact _not_ there when
         | the original photo was taken? :)
        
         | nojokes wrote:
         | For me, super resolution means combining multiple lower
         | resolution images to gain additional information and, from
         | that, higher resolution.
         | 
         | It is especially something that some cameras can do by
         | deliberately doing sensor shifting.
         | 
         | They should not call this super resolution, or at best call
         | it "emulated super resolution" or "artificial super
         | resolution".
        
         | jonplackett wrote:
         | Every time someone brings up enhancing images I like to pull up
         | this Red Dwarf clip:
         | 
         | https://www.youtube.com/watch?v=2aINa6tg3fo
        
           | 14 wrote:
           | Such a good show and highly underrated and unknown by so
           | many. What a great clip.
        
             | jonplackett wrote:
             | I love the gag at the end too.
             | 
             | "Wouldn't it have been easier to just look them up in the
             | phone book?"
             | 
             | Pure genius
        
           | opo wrote:
           | That is great. I like to bring up this clip from Castle:
           | 
           | https://www.youtube.com/watch?v=PaMdXjTn9rc
        
             | drewzero1 wrote:
             | I really loved the way Castle played around with typical
             | police drama tropes, though I got a little confused about
             | what the show was trying to be/do in later seasons.
        
               | jonplackett wrote:
               | Haha. I'd never even heard of this show. Will be checking
               | it out.
        
         | rom1v wrote:
         | Or https://twitter.com/Chicken3gg/status/1274314622447820801
        
         | ravi-delia wrote:
         | I feel like it's also ok for critical tasks if you're willing
         | to accept that it isn't perfect. If all you have is a grainy
         | photo, you'll only be able to make guesses yourself; why not
         | have a superhuman guess too? (Because the people putting it to
         | use would be morons about it, I know, let me dream)
        
         | danShumway wrote:
         | I wish Adobe would use a different name for this that made it
         | more obvious what was happening, something like "detail fill"
         | or "detail interpolation".
         | 
         | I worry this is going to be a case where the marketing is at
         | direct odds with public education efforts.
        
         | cycrutchfield wrote:
         | Nobody has billed it as anything more than just guessing. In
         | the literature, it is frequently mentioned as "perceptually
         | plausible" upscaling.
        
         | oivey wrote:
         | I think "guessing" is too strong a way to describe what
         | super resolution does. The broader concept of, for example,
         | regularized solving of inverse problems is used widely in
         | things like CT and MRI where the reconstructed imagery is used
         | for analysis. The regularization is effectively the part you're
         | saying is guessing, but I would phrase it as enforcing
         | assumptions about the data. Neural network-based approaches are
         | similarly learning the distribution of the output data.
        
         | greggturkington wrote:
         | Is this single example of someone using different software (by
         | Topaz Labs) relevant to this article specifically? Or just
         | every article about enhancement?
        
         | kthartic wrote:
         | From the linked article:
         | 
         | >you may want to uncheck detect faces... unless you want Ryan
         | Gosling popping up all over the place.
         | 
         | Sooo not really a case against super-resolution, just a funny
         | result of having used the wrong settings
        
       | sctgrhm wrote:
       | It seems like "Enhance!" is now an actual thing
       | https://www.youtube.com/watch?v=Vxq9yj2pVWk
        
       | fastball wrote:
       | I just wish Adobe CC wasn't the buggiest piece of software I've
       | _ever_ used.
       | 
       | I've had a number of issues over the years, but my current issue
       | is that when I try to open CC the interface elements all freeze
       | and are unclickable (even though the window is still scrollable -
       | very strange behavior). So I went to uninstall it, but I can't
       | because Photoshop is installed. So I went to uninstall Photoshop,
       | but you guessed it, I can only uninstall PS through CC, which is
       | unresponsive.
       | 
       | Smh.
        
         | judge2020 wrote:
         | They have a tool specifically crafted to remove their buggy
         | software:
         | 
         | https://helpx.adobe.com/creative-cloud/kb/cc-cleaner-tool-in...
         | 
         | Note that this isn't the same as uninstalling everything via
         | the official process since this leaves behind stuff like
         | Adobe's Genuine client which verifies you're not using pirated
         | software.
        
           | fastball wrote:
           | Nice, thanks for the tip, will give it a go!
        
         | bogwog wrote:
         | People are doing this to themselves. Stop buying their shitty
         | software, and see how quickly they start to fix it.
         | 
         | I don't understand why people are so obsessed with Adobe, since
         | their software nowadays isn't that good. There are tons of
         | alternatives out there that work better and do the same thing,
         | if not more.
         | 
         | Is it just laziness/reluctance to learn something new?
        
           | Tagbert wrote:
            | A large part of it is that commercial use involves sharing
            | files with other people. It's very hard to get consistent
            | results if you are not using the same software. The file
            | formats are mostly proprietary and complex. The effects
            | applied are dynamic and probably also proprietary. It's
            | just a mess, so everyone just puts up with it and continues
            | to use "the standard" software.
            | 
            | If you are just doing graphic work yourself, there are
            | other applications like Affinity that can work, as long as
            | you are not collaborating.
        
           | Miraste wrote:
           | Some of Adobe's other products have superior competition, but
           | Photoshop is the flagship and despite its stable of bugs I
           | don't think there's anything better. If you have alternatives
           | to suggest I'm interested (I really hate Adobe). Ones I'm
           | aware of:
           | 
           | Affinity Photo. Much more stable than Photoshop and I like it
           | a lot, but there are things Photoshop does that Affinity
           | doesn't and I can't think of anything that goes the other
           | way.
           | 
           | Krita. Fairly sleek, especially for OSS. Becoming very
           | competitive for digital illustration, but not (and not
           | intended to be) a great photo editor.
           | 
           | paint.net. Good for fast edits but simplified, not a true
           | competitor.
           | 
           | GIMP. Ancient, slow, ugly, clunky, severely lacking in
           | features, and somehow even less stable than Photoshop.
        
         | medicineman wrote:
         | You have to use the Adobe CC removal tool. Done that about 4
         | times this last year. Real PITA.
        
         | virtualritz wrote:
         | It's not only Creative Cloud itself but all the newer versions
         | of their apps I recently used.
         | 
         | I needed to update my CV recently and expected to spend 1h in
         | InDesign. I spent 6h in the end.
         | 
         | - InDesign crashes while saving and destroys my document. 1h
         | lost.
         | 
         | - InDesign crashes while exporting a PDF of my document (9
         | pages). I hadn't saved. 1h lost.
         | 
         | - InDesign crashes (reproducible) when adding/inserting a page
         | (mind you, that's page 10). First time this happened I hadn't
         | saved for half an hour. I was really considering changing the
         | text because I couldn't solve this. Then I found [1]. Quote:
         | 
         | > [...] after speaking to Adobe chat help, they asked me to
         | send my file to them. They sent it back to me and everything
         | went back to normal. [...] "File was corrupted , we recovered
         | it by using scripts and then saved as IDML."
         | 
           | - Because of the above I had the idea of exporting to IDML.
           | Re-importing then allowed me to add the page, but I had
           | subtle formatting errors: on lines whose font had been
           | changed via a character style, the last character before a
           | tab or a newline had the wrong style. Fixing this: 1h.
         | 
           | - When I re-arranged parts of the CV via copy & paste,
           | entire sections I copied lost the small caps/italic styles
           | they had assigned (acronyms/names). Going through the entire
           | document to fix this: 1.5h.
         | 
         | I should have known better. Less than two years ago I helped a
         | friend do a snail mail mass mailing where we used a CSV file
         | with addresses to create hundreds of (two page) letters. All in
         | InDesign. Everything worked until we tried to export as PDF,
         | for printing. The solution was to export as 'interactive' PDF
         | and only export about ~100 pages at a time.
         | 
         | I bought Affinity Publisher already when the thing with the
         | letters happened. But I naively believed updating my CV would
         | be quick in InDesign.
         | 
         | In retrospect typesetting the CV from scratch in Publisher
         | would have been the better choice.
         | 
         | Last week I helped a friend with a commercial that was mostly
         | 3D and some motion graphics done in After Effects (Ae). We
           | couldn't get it to render in After Effects 2019. It would
           | run out of memory and then just not render the frame, or
           | crash. In the
         | end we exported the project for an older version and went back
         | to an Ae CC version from six years before. That worked without
         | any issues.
         | 
           | All this is just shocking. I've used InDesign since 1.0 and
           | it wasn't this bad a decade ago. Ae ... the same. See above.
         | 
         | As of a recent update, Acrobat Reader (free version) refuses to
         | let me open any document w/o signing into CC first. Another
         | wtf.
         | 
         | What a friend of mine replied when he heard about my InDesign
         | adventure:
         | 
         | > I'm on CS6 for anything Adobe. Just junk now.
         | 
         | [1] https://community.adobe.com/t5/indesign/indesign-crashes-
         | whe...
        
         | taisalie wrote:
         | Right? Now prove you're not GPT-3 keying on the domain name.
        
       | ACAVJW4H wrote:
       | Unmesh from Piximperfect did a nice review and comparison
       | https://www.youtube.com/watch?v=cfTbrJP5TXs
        
       | Faaak wrote:
       | Fast forward 10 years, and there's a jury for a supposed crime.
       | The only evidence is an old, grainy picture of the suspect taken
       | from far away.
       | 
       | And then the jury decides to use "Super Resolution" to "enhance"
       | the picture. The ML model decides that what it sees is a gun
       | instead of a rose.
        
       ___________________________________________________________________
       (page generated 2021-03-18 23:00 UTC)