[HN Gopher] Many AI researchers think fakes will become undetect...
       ___________________________________________________________________
        
       Many AI researchers think fakes will become undetectable
        
       Author : marban
       Score  : 71 points
       Date   : 2024-01-20 18:54 UTC (4 hours ago)
        
 (HTM) web link (www.economist.com)
 (TXT) w3m dump (www.economist.com)
        
       | neonate wrote:
       | http://web.archive.org/web/20240120190542/https://www.econom...
        
       | ang_cire wrote:
       | I would imagine that models could be trained on specific image
       | generators to detect them. This is honestly the obvious solution;
       | AI is actually _good_ at finding very subtle patterns.
        
         | hiddencost wrote:
         | Adversarial setting. People will train models on adversarial
         | detectors to hide from them.
         | 
         | The comments in this article have deeply studied the method
         | you're suggesting.
        
           | ang_cire wrote:
            | Which is pointless against targeted models like the ones
            | I'm talking about. If I train a model that is *only*
            | trying to detect e.g. stable-diffusion-2-1, training an
            | adversarial model is pointless, because then you're just
            | using a different model anyway. You would be spending a
            | lot of training time when you could just use another
            | already-extant model.
           | 
           | State actors, for example, don't need to use public models
           | for their generators anyways, so it's not like non-state
           | actors would have true-positive images to train models on to
           | counter said actors' models.
        
         | agumonkey wrote:
          | Wouldn't this feed a race toward generators that are even
          | closer to indistinguishable?
        
           | ang_cire wrote:
            | Potentially, but I don't think so in practice, because
            | most image generation is not _trying_ to be
            | photorealistic. Most models are trying to stylize their
            | outputs, either by default (e.g. a Miyazaki-anime-like
            | model) or in response to the prompt.
           | 
            | Using AI image generation for misinformation is a small
            | minority of its use.
           | 
           | And if the output is stylized, it's going to be much more
           | obvious if it has come from an AI model, because a human is
           | going to have a much harder time reproducing a specific ML
           | model's style on-demand (e.g. if their art teacher asks them
           | to sketch a face in the same style, to prove their homework
           | wasn't faked).
        
         | jvanderbot wrote:
         | The problem is this isn't like cryptography where you can prove
         | something. You can train a detector on a generator, or at least
         | what you think the generator is, but you can never know for
         | sure. And any "likely fake" assessment can just be refuted by a
         | sock puppet that vouches for the content.
        
         | willsmith72 wrote:
         | This is the main discussion point of the article
         | 
         | > Just as machines can be trained to reliably identify cats, or
         | cancerous tumours on medical scans, they can also be trained to
         | differentiate between real images and ai-generated ones.
         | 
          | But the conclusion the researchers reached was that
          | detection at some point won't be reliable and fakes will
          | win.
        
           | ang_cire wrote:
           | The tools the article discusses are *general* AI-detection,
           | meant to be fed any image and determine whether it was AI-
           | generated.
           | 
           | I'm talking about a detection model that only detects images
           | produced by one generator/source model, so rather than a one-
           | to-many process of having one tool check an image for many
           | potential source generators' artifacts, you'd have many
           | individually-trained detection models, each being run against
           | a single image/text block.
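            | 
            | A rough sketch of the kind of per-generator detector I
            | mean (toy PyTorch example; the data loading is faked with
            | random tensors, so it's purely illustrative):
            | 
            |     import torch
            |     import torch.nn as nn
            |     
            |     # Stand-ins for a batch of real photos and a batch
            |     # of images from ONE known generator (e.g. SD 2.1).
            |     # In practice these come from labeled datasets.
            |     real = torch.rand(16, 3, 64, 64)
            |     fake = torch.rand(16, 3, 64, 64)
            |     x = torch.cat([real, fake])
            |     y = torch.cat([torch.zeros(16), torch.ones(16)])
            |     
            |     # Tiny CNN: one detector per source model.
            |     net = nn.Sequential(
            |         nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            |         nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            |         nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            |         nn.Linear(32, 1))
            |     opt = torch.optim.Adam(net.parameters(), 1e-3)
            |     loss_fn = nn.BCEWithLogitsLoss()
            |     
            |     for step in range(100):   # toy training loop
            |         opt.zero_grad()
            |         loss = loss_fn(net(x).squeeze(1), y)
            |         loss.backward()
            |         opt.step()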
        
       | agumonkey wrote:
        | Some people raise questions about what this implies for
        | society at large. Is anything digital now untrustworthy?
        
         | figassis wrote:
          | I think it always has been; we just knew that faking many
          | things required a degree of skill available to few and was
          | often expensive. But perfect fakes have always existed.
          | That's why artists usually sign their work.
         | 
          | What we need now is to start signing things digitally in
          | the same way, which I think will require a very good,
          | decentralized PKI, unlike what we have with email.
          | 
          | Essentially, PKI should work like DNS: you look up the PKI
          | record for someone and cache it, so that your signature
          | works in China and on the moon. Updating it should work the
          | same way; the old key should always keep working, and new
          | records should have the old public keys appended to them so
          | any public key can be used to verify all past signatures.
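          | 
          | A toy sketch of the "append old keys" idea (Python with the
          | cryptography package; the record layout here is made up,
          | not any existing standard):
          | 
          |     from cryptography.exceptions import InvalidSignature
          |     from cryptography.hazmat.primitives.asymmetric import (
          |         ed25519)
          |     
          |     class KeyRecord:
          |         """Cached PKI record: the current signing key plus
          |         every previous public key, appended."""
          |         def __init__(self):
          |             self._sk = ed25519.Ed25519PrivateKey.generate()
          |             self.pub_keys = [self._sk.public_key()]
          |     
          |         def rotate(self):
          |             self._sk = ed25519.Ed25519PrivateKey.generate()
          |             self.pub_keys.append(self._sk.public_key())
          |     
          |         def sign(self, data: bytes) -> bytes:
          |             return self._sk.sign(data)
          |     
          |         def verify(self, sig: bytes, data: bytes) -> bool:
          |             # Any key ever published can verify old sigs.
          |             for pk in self.pub_keys:
          |                 try:
          |                     pk.verify(sig, data)
          |                     return True
          |                 except InvalidSignature:
          |                     pass
          |             return False
          |     
          |     rec = KeyRecord()
          |     old_sig = rec.sign(b"photo-hash-2024")
          |     rec.rotate()                    # new key issued
          |     assert rec.verify(old_sig, b"photo-hash-2024")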
        
         | hammyhavoc wrote:
         | Anything is inherently untrustworthy, hence "zero trust" as a
         | concept. You can't even trust physical things. Counterfeiting
         | has been a thing since at least 400 BC with fake Greek coins.
        
       | js8 wrote:
       | We already have undetectable fakes. Anybody can buy a blue check
       | mark on Twitter and pretend to be somebody else. Anybody can also
       | domain squat.
       | 
        | The way we detect them is that, if this becomes a problem,
        | the authentic entity will speak out against it and society
        | will rectify it. Just as these fakes are a non-problem in
        | practice, AI fakes will be a non-problem too.
        
         | gumballindie wrote:
          | > the authentic entity will speak out against it and
          | society will rectify it
         | 
         | Society is already speaking against stolen content and fakes
         | produced using it. How are we rectifying this?
        
         | big_whack wrote:
          | Much cruder fakes than you are describing are a huge
          | problem in practice, costing Americans alone billions per
          | year in fraud. Just because you can detect them doesn't
          | mean your grandma can.
         | 
         | The societal implications of undetectable fakes are off the
         | charts.
        
           | thimkerbell wrote:
            | What does it mean that next to nobody is saying this? Are
            | the ones who know voting with their feet? And where to?
        
             | hammyhavoc wrote:
              | People _are_ saying this. You're on HN though; the
              | demographics here tend to skew toward money-grubbing
              | techbros, with the loudest voices being the most
              | defensive of anything they've got money riding on.
              | Remember the crypto and NFT hype on HN? The bottom line
              | is all that matters to a good chunk of people here.
        
         | wmeredith wrote:
         | Calling the current fakes a "non-problem" is one of the hottest
         | takes I've heard in some time.
        
         | vasco wrote:
          | It's not exactly the same, though. Attacks from spam
          | networks out of extradition reach will become more
          | sophisticated, and that's going to take more money or more
          | annoyance to slow down. The same way phone-call scamming
          | from India is a billion-dollar industry, fake ads
          | impersonating famous people and the like will have
          | near-zero cost to produce, which means they're cheap to
          | mass-exploit.
        
         | figassis wrote:
          | The problem with this, for example, is when law enforcement
          | uses it to incriminate you and forces you to prove that
          | their evidence is fake. We can see from the Horizon scandal
          | that when they want to keep you from defending yourself,
          | they usually succeed.
        
       | CuriouslyC wrote:
        | I wouldn't be surprised if the real way we reach AGI is the
        | cat-and-mouse game between people trying to detect AI and
        | people trying to sneak AI output past detection. That is
        | essentially a hyper-analytic version of the Turing test: if
        | humanity as a whole cannot discern human output from AI
        | output given all the tools of science at its disposal, then
        | we have failed to prove that it isn't intelligent, which is
        | really more of a scientific formulation of the question
        | anyhow.
        | 
        | If you think about it, it's sort of like a real-life GAN.
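        | 
        | Schematically, something like a GAN training loop (toy
        | PyTorch sketch, purely illustrative - not how any real
        | detector or generator is built):
        | 
        |     import torch
        |     import torch.nn as nn
        |     
        |     # Toy adversarial pair: G forges samples, D learns to
        |     # flag them, G adapts to fool D, and so on.
        |     G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(),
        |                       nn.Linear(16, 2))
        |     D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(),
        |                       nn.Linear(16, 1))
        |     opt_g = torch.optim.Adam(G.parameters(), 1e-3)
        |     opt_d = torch.optim.Adam(D.parameters(), 1e-3)
        |     bce = nn.BCEWithLogitsLoss()
        |     
        |     for step in range(200):
        |         real = torch.randn(32, 2) + 3     # "real" data
        |         fake = G(torch.randn(32, 8))
        |         # Detector step: push real -> 1, fake -> 0
        |         d_real = bce(D(real), torch.ones(32, 1))
        |         d_fake = bce(D(fake.detach()),
        |                      torch.zeros(32, 1))
        |         d_loss = d_real + d_fake
        |         opt_d.zero_grad()
        |         d_loss.backward()
        |         opt_d.step()
        |         # Generator step: try to make D output 1 on fakes
        |         g_loss = bce(D(fake), torch.ones(32, 1))
        |         opt_g.zero_grad()
        |         g_loss.backward()
        |         opt_g.step()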
        
       | t_mann wrote:
        | Assuming it turns out to be true, I'm not sure it has to be
        | a purely bad thing. We got used to treating videos and photos
        | as the kind of objective evidence they never were. It's not
        | hard for an image to be factually correct and unaltered yet
        | still give a distorted view of reality; cropping out parts or
        | leaving out some bigger, non-visible context are distortions
        | that can be really easy to miss, too. How faithful an image
        | is depends a great deal on how it was put together and
        | presented. So what we should really be asking ourselves when
        | we see images is how much we trust the sources. A
        | proliferation of AI fakes could make the need for such
        | reflection more obvious.
        
         | treprinum wrote:
          | WW3 or some other high-intensity conflict could potentially
          | be launched by highly coordinated deepfakes. Imagine
          | creating videos that offend an aggressive religion with
          | many combatants, or generating footage of events that never
          | happened and managing the narrative as the uproar
          | progresses, etc.
        
           | jjj123 wrote:
           | None of that strikes me as meaningfully different than
           | regular old propaganda.
        
             | pixl97 wrote:
             | Quantity and speed are qualities.
             | 
             | Getting hit with a snowball could hurt you but is most apt
             | to annoy you. Getting hit with an avalanche is apt to kill
             | you. But hey, they are both snow, right?
        
               | redcobra762 wrote:
               | However, it's unclear what plays the role of mass in this
               | analogy.
        
               | Loughla wrote:
               | When it's one video of an angry mob killing bystanders,
               | it's easy to dismiss.
               | 
               | When it's hundreds, from different viewpoints, it's much
               | harder.
        
             | baq wrote:
             | Computers are just fast calculators, right?
             | 
             | Cars are just faster horses, right?
             | 
             | Ships are just bigger canoes, right?
             | 
             | Photographs are just quicker made paintings, right?
             | 
              | Or... perhaps there is something different which makes
              | practical what hasn't been practical before? AI
              | propaganda is to the propaganda of days past what the
              | car is to the horse.
        
               | marcellus23 wrote:
                | For all of human history, the only way to know how
                | an event happened was either to have witnessed it
                | yourself or to have heard about it from someone else.
                | It's only in the last 100 years or so that we've been
                | able to record exactly what happened as audio, video,
                | or photography. And it's still possible to doctor any
                | of these. That's not to mention that you can craft
                | excellent propaganda even using this "real" recorded
                | media, just by cherry-picking and presenting it in a
                | certain context. I don't think AI-generated media is
                | going to cause a massive sea change in misinformation.
               | 
               | edit: downvoters should reply instead
        
           | t_mann wrote:
            | I'm not denying that, and I am concerned about that
            | possibility. The thing is, I don't think you need deep (or
            | any) fakes at all for that. What it takes is people
            | willing to believe lies and consume hatred (fwiw, WWII was
            | based on huge lies that didn't need much forgery). The
            | more people question the hatred they're being fed, the
            | better chance we have.
        
             | baq wrote:
              | People believing lies or not isn't the problem. The
              | problem is that the lie-to-truth ratio, admittedly not
              | low enough even today, will rapidly approach 1. Finding
              | anything trustworthy will be impractical, so people
              | will stop trying.
              | 
              | This is very scary. Being able to tell what's true
              | underpins western democracies. With everyone having
              | plausible deniability for everything, we're going to
              | have to... change things.
        
               | RandomLensman wrote:
                | Person-to-person information from direct witnesses
                | will go up in value because faking physical
                | environments is expensive, while untrusted or
                | unverified sources will become pretty much useless.
                | In-person settings will matter more again - back to
                | older models, in a way.
        
               | t_mann wrote:
                | I don't think that needs to be true, though. Let's go
                | with the UK, since the article mentions the British
                | PM: there are trusted sources like the BBC and other
                | media whose journalists can verify events on the
                | scene, work with trusted photographers, and have
                | contacts with whom to cross-check claims...
                | 
                | Yeah, some hoaxes will make it through the process,
                | but some will be rectified later, and I don't see the
                | ratio approaching 1. And those media are not hard to
                | find. Taking the example from the article, there's no
                | sustained uncertainty about whether the PM endorses a
                | get-rich-quick scheme, and I don't see that coming,
                | no matter how good the fakes become.
        
               | ejb999 wrote:
                | >> trusted sources like the BBC and other media whose
                | journalists can verify events on the scene, work with
                | trusted photographers, and have contacts with whom to
                | cross-check claims
                | 
                | They _could_, but do they? Evidence says no - they
                | are perfectly happy to run with lies if it fits their
                | narrative - and then weeks later, tucked away
                | someplace, issue a retraction - after the damage is
                | done.
                | 
                | The BBC has an agenda, FOX News has an agenda, CNN
                | has an agenda, MSNBC has an agenda, PBS has an agenda
                | - they are all perfectly willing to lie to gather
                | clicks or eyeballs - not sure AI is going to
                | meaningfully change anything in that regard.
        
               | RandomLensman wrote:
               | Having an agenda isn't necessarily bad in and of itself.
               | Wanting to maximize eyeballs or clicks is an incentive
               | problem that can be addressed.
        
           | overvale wrote:
           | Ever seen the movie Wag the Dog?
        
           | wolverine876 wrote:
           | Lots of wars started that way before AI: Spanish-American
           | ('remember the Maine!'), Vietnam (Gulf of Tonkin), Iraq
           | (nuclear weapons program), etc.
           | 
           | AI would seem to make it worse. I think it will also get
           | worse because warmongers can automate propaganda and, with
           | social media, reach people directly.
        
           | onthecanposting wrote:
            | I think misinformation is already out there. I can only
            | guess the extent to which common knowledge of world events
            | is false, but it's not zero. There is an interesting,
            | though not academically rigorous, body of internet
            | analysis on things like Pallywood and the Charles Jaco
            | bluescreen video from the Gulf War that is a good starting
            | point.
        
         | pixl97 wrote:
         | >So what we should really be asking ourselves when we see
         | images is how much we trust the sources.
         | 
          | I mean, that's what you want as the best outcome;
          | unfortunately, I don't see that being the outcome any time
          | soon.
          | 
          | https://en.wikipedia.org/wiki/Hyperreality
          | 
          | Further embrace of the hyperreal by society at large,
          | coupled with authoritarian governments using manipulation
          | tactics like the firehose of falsehood, will continue to
          | erode trust in social institutions and drive disengagement
          | by the masses.
        
         | Certhas wrote:
         | Two issues with that:
         | 
          | I) There will be a "danger window" in which the fact that
          | audio-visual evidence is now meaningless has not yet
          | registered with a great many people.
          | 
          | II) It's perfectly possible that audio-visual lies are just
          | inherently more convincing to the human psyche than written
          | or spoken lies. We know instinctively that we cannot
          | blindly trust what others say, but we do instinctively
          | trust our own eyes and ears. Note the etymology there:
          | blind trust is trust without seeing. When you see, you
          | don't need trust anymore.
        
           | stouset wrote:
           | For a classic example of this, our justice system tends to--
           | in general--treat witness testimony as highly reliable. After
           | all, the person was _there_. They _saw_ it with their _own
           | eyes_.
           | 
           | Of course we all know rationally that witness testimony is
           | generally terrible. And yet we don't actually seem to care
           | about that when push comes to shove.
        
             | dredmorbius wrote:
             | The "science" behind much criminal forensics is often
             | laughably absurd. Even where there is valid _theoretical_
             | foundation, in many cases the _practice_ voids validity of
             | much of the results.
             | 
             | And that's before we consider the psychology of witnesses,
             | criminals, juries, judges, law enforcement, etc.
        
           | unholythree wrote:
            | I think you're right; the tricky part is always
            | navigating these technological changes.
            | 
            | If you imagine life in the colonies around 1776, there
            | were no photographs to doctor, no Zapruder film, no Nixon
            | tapes, and yet newspapers existed and people lived in a
            | relatively high-trust society. People generally had faith
            | in contracts, the law, and the government; but not a lot
            | of ways, as an individual, to verify much at all.
           | 
           | I think eventually we'll have to go back to that model of
           | simply picking individuals and institutions to trust.
           | Hopefully it'll make people more selective.
        
             | cudgy wrote:
             | > People generally had faith in contracts, the law, and the
             | government; but not a lot of ways, as an individual, to
             | verify much at all.
             | 
             | How do you know this was the case in 1776? Wasn't there a
             | revolution around that issue?
        
         | turadg wrote:
         | Agree. It will be like text, which has always been trivial to
         | conjure or alter.
         | 
          | It will be a blip in history that there was a period in
          | which producing a particular image required its contents to
          | have existed in reality.
         | 
         | What we need going forward is a way to know provenance, where
         | an image came from and what edits have been made. Then people
         | can trust particular sources, as they do with text.
         | 
         | The Content Authenticity Initiative is working on that.
         | https://contentauthenticity.org
         | 
          | This is a compelling use case for blockchains:
          | https://research.adobe.com/news/content-authenticity-and-ima...
        
           | mistrial9 wrote:
            | Yes, agreed - this was also discussed ten years ago in
            | some math circles. So, agreeing, but noting that these
            | are problems that different projects have taken on over a
            | long time as well.
        
         | powersnail wrote:
          | > It's not hard for an image to be factually correct and
          | unaltered yet still give a distorted view of reality;
          | cropping out parts or leaving out some bigger, non-visible
          | context are distortions that can be really easy to miss,
          | too.
         | 
          | In some contexts that's true, especially if the taker has
          | full autonomy over how the photo/video is taken and how the
          | story is told.
          | 
          | But there are other contexts (fixed cameras like CCTV,
          | videotaped performances for auditions, footage that
          | captures the complete course of a car crash, etc.) in which
          | this is not a thing. Faking those would require editing
          | visible content, which machine learning has made far easier
          | than before, and that ultimately reduces their value as
          | evidence.
          | 
          | I'm not sure I particularly want to go back to the time
          | when evidence like this didn't exist.
        
         | hammyhavoc wrote:
         | When teenagers are making sexually explicit deepfakes of
         | classmates, that's objectively a bad thing, because it is
         | extremely damaging. People are using it to blackmail and extort
         | too. Consent, ethics and law should be at the forefront of the
         | conversation regarding AI, not whether we trust the contents of
         | an image.
        
           | echoangle wrote:
            | But if everyone knows AI exists, the negative points kind
            | of go away, because that also gives everyone plausible
            | deniability. How are you going to blackmail me with
            | sexual images when everyone knows that AI can easily
            | create them? I think that actually makes AI objectively a
            | good thing, because it prevents blackmail with real
            | images, too.
        
             | hammyhavoc wrote:
             | No, the negative points don't go away whatsoever.
             | Normalizing the sexualization of minors is never not going
             | to be a problem.
             | 
             | If you take a photo of a minor and then use that photo to
             | generate CSAM, even if it is "fake", that's not in any way
             | OK on numerous levels. There's the legal, moral and ethical
             | aspects of it, and then there's the issue of consent.
             | 
              | These things are damaging by their very existence. Can
              | you imagine the bullying and rumours that are possible
              | now because technology like this is so readily
              | available? It doesn't matter if it's real or not.
              | People used to crappily MS Paint/Photoshop the heads of
              | female minors onto "bikini babes" and pornstars when I
              | was in high school; that didn't make it OK, even if it
              | didn't look real in any way whatsoever.
              | 
              | It doesn't _prevent_ anything; quite the contrary, it
              | enables _everything_. Look at the proliferation of fake
              | news in just the past few years: people know they
              | shouldn't believe everything they read on the internet,
              | but they parrot internet fake news constantly. People
              | act on it. People get killed over it. People kill
              | themselves over it.
        
       | seydor wrote:
       | As long as the detectors are other AIs, that's pretty self-
       | evident
        
       | scrps wrote:
        | Insane idea: generate a ton of fakes of yourself, some
        | subtle, some obvious... Which mirror in the forest of mirrors
        | is actually you?
        
         | hammyhavoc wrote:
         | The one caught on CCTV hundreds of times per day for years on
         | end with your gait analyzed.
        
       | spydum wrote:
        | Won't part of the answer be digital signatures matched to
        | the media, signed by the user's real identity?
        | 
        | I have wondered for a while whether this would happen for
        | CCTV / Ring / security cams, but I have yet to see it.
        
         | hammyhavoc wrote:
          | Signing something isn't proof that it's legitimate.
          | Signing something is just proof that you control the means
          | to sign something cryptographically, hence the problem of
          | compromised keys.
         | 
         | DMA attacks anyone?
        
       | solardev wrote:
        | Isn't this the whole point? When did the Turing test become
        | a negative...?
        
         | persnickety wrote:
         | Ever since humans started seeing competition to their
         | abilities.
        
         | password54321 wrote:
          | The Turing test was a thought experiment. I don't know why
          | people keep holding it up as some sort of holy grail for
          | testing intelligence.
        
           | solardev wrote:
           | It's not some holy grail, it's just one way to think about
           | AI. Regardless of the test used, it's just sad to me that
           | popular views of AI now tend to be more Skynet and HAL than
           | Transcendence or Her.
           | 
           | Guess a few decades of big tech enshittification will do that
           | to the psyche, lol.
        
       | ndjshe3838 wrote:
       | Would it be possible for cameras to have a hardware private key?
       | 
       | And then sign the images in hardware so that you can prove an
       | image came from a specific camera and hasn't been altered?
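        | 
        | Roughly the idea, as a sketch (Python with the cryptography
        | package; a real camera would keep the private key inside a
        | secure element and attest to it, which this glosses over):
        | 
        |     from cryptography.hazmat.primitives.asymmetric import (
        |         ed25519)
        |     
        |     # Stand-in for a per-device key provisioned into the
        |     # camera's secure hardware at manufacture time.
        |     device_key = ed25519.Ed25519PrivateKey.generate()
        |     device_pub = device_key.public_key()
        |     
        |     def capture(raw_pixels: bytes):
        |         """Camera signs the image bytes as it saves them."""
        |         return raw_pixels, device_key.sign(raw_pixels)
        |     
        |     def verify(raw_pixels: bytes, sig: bytes) -> bool:
        |         """Anyone with the device's public key (say, from a
        |         manufacturer registry) can check that the bytes are
        |         unaltered since capture."""
        |         try:
        |             device_pub.verify(sig, raw_pixels)
        |             return True
        |         except Exception:
        |             return False
        |     
        |     img, sig = capture(b"...raw sensor data...")
        |     assert verify(img, sig)
        |     assert not verify(img + b"edited", sig)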
        
         | pixl97 wrote:
         | Sounds risky if you're taking pictures of stuff the government
         | doesn't want you to.
        
           | joshuanapoli wrote:
           | A photo signed as an authentic iPhone image wouldn't
           | completely pinpoint the photographer, but would be good
           | evidence that the image is authentic.
        
             | pixl97 wrote:
              | Over any period of time? No, probably not. If the
              | scheme is as simple as "this was signed by an iPhone",
              | then you have a fleet of a billion-plus iPhones that
              | must all remain secure and unbroken for as long as the
              | fleet exists, which trends toward unlikely. Given time
              | and motivation, all systems are broken.
              | 
              | A much harder system to break is one that uses a secret
              | that only exists on each individual phone, plus the
              | generic algorithm that iPhones use, but this comes back
              | to individuals being traceable.
        
               | JanisErdmanis wrote:
                | > a secret that only exists on each individual phone
               | 
               | There is actually a way to prevent individuals from being
               | traced while having a strong membership proof. This can
               | be achieved by issuing signatures on a relative generator
               | created in an exponentiation mix together with a set of
               | output pseudonyms. One can picture it as strands braided
               | together where it is hard to trace the link between the
               | input and output strands, while it is very easy to tell
               | that such a link exists.
               | 
                | More on that can be found at:
               | 
               | https://github.com/PeaceFounder/ShuffleProofs.jl
        
           | didntcheck wrote:
            | I doubt they'd do it via this system. It should be easy
            | to strip this metadata; in fact, that is essentially a
            | requirement to make the photo readable in image viewers
            | unaware of it.
            | 
            | However, if it came out that manufacturers were secretly
            | storing a "fingerprint" of the noise profile of each
            | image sensor and sharing it with the government, I'm
            | afraid I wouldn't be entirely surprised.
           | 
           | Remember
           | https://en.wikipedia.org/wiki/Machine_Identification_Code
        
         | cwkoss wrote:
         | Not really. It is inevitable such a system would lead to people
         | spoofing the sensor to get fake images signed.
        
           | joshuanapoli wrote:
           | It's not so easy to attack secure hardware. It won't be an
           | option for the vast majority of deep fake producers.
        
             | ejb999 wrote:
              | It is very easy to do: point your camera at a fake
              | video and record it - bingo, the video of the video is
              | signed as being real.
        
           | painted-now wrote:
            | I think the solution to this could be the "web of trust",
            | as used with PGP. Everyone can sign their images with
            | their private keys. Whether you think those images are
            | trustworthy then depends on who else trusts them. For
            | example, newspapers could verify/trust their journalists'
            | cameras, and you could trust your friends' cameras, etc.
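            | 
            | A toy sketch of the trust lookup (Python; the graph and
            | the names in it are made up): trust a signing key if it's
            | reachable within a few hops from keys you already trust.
            | 
            |     from collections import deque
            |     
            |     # Hypothetical web of trust: who vouches for whom.
            |     trust_graph = {
            |         "me": ["friend_alice", "nyt_newsroom"],
            |         "nyt_newsroom": ["journalist_bob_cam"],
            |         "friend_alice": ["alice_phone_cam"],
            |     }
            |     
            |     def trusts(root, target, max_hops=3):
            |         """Breadth-first search: is target's key vouched
            |         for within max_hops of root?"""
            |         seen, queue = {root}, deque([(root, 0)])
            |         while queue:
            |             node, hops = queue.popleft()
            |             if node == target:
            |                 return True
            |             if hops == max_hops:
            |                 continue
            |             for nxt in trust_graph.get(node, []):
            |                 if nxt not in seen:
            |                     seen.add(nxt)
            |                     queue.append((nxt, hops + 1))
            |         return False
            |     
            |     print(trusts("me", "journalist_bob_cam"))  # True
            |     print(trusts("me", "random_uploader"))     # False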
        
         | JanisErdmanis wrote:
         | It will be interesting to see what solutions people will come
         | up with to support cropping images and videos. Perhaps one can
         | already do that with Merkle tree inclusion and consistency
         | proofs. Definitely good times ahead for doing cryptography.
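          | 
          | Roughly along these lines, perhaps (Python sketch; the
          | tiling scheme is made up): hash the image in tiles, sign
          | only the Merkle root, and later show that a crop consists
          | of tiles from the signed original without revealing the
          | rest.
          | 
          |     import hashlib
          |     
          |     def h(*parts: bytes) -> bytes:
          |         return hashlib.sha256(b"".join(parts)).digest()
          |     
          |     def merkle_root(leaves):
          |         level = list(leaves)
          |         while len(level) > 1:
          |             if len(level) % 2:        # duplicate last
          |                 level.append(level[-1])
          |             level = [h(level[i], level[i + 1])
          |                      for i in range(0, len(level), 2)]
          |         return level[0]
          |     
          |     def inclusion_proof(leaves, idx):
          |         """Sibling hashes needed to recompute the root
          |         from leaf number idx alone."""
          |         proof, level = [], list(leaves)
          |         while len(level) > 1:
          |             if len(level) % 2:
          |                 level.append(level[-1])
          |             sib = idx ^ 1
          |             proof.append((level[sib], sib < idx))
          |             level = [h(level[i], level[i + 1])
          |                      for i in range(0, len(level), 2)]
          |             idx //= 2
          |         return proof
          |     
          |     def verify(leaf, proof, root):
          |         acc = leaf
          |         for sib, left in proof:
          |             acc = h(sib, acc) if left else h(acc, sib)
          |         return acc == root
          |     
          |     # Camera hashes each tile and signs only the root.
          |     tiles = [b"tile%d" % i for i in range(8)]
          |     leaves = [h(t) for t in tiles]
          |     root = merkle_root(leaves)
          |     # Later: prove tile 5 was part of the original frame.
          |     p = inclusion_proof(leaves, 5)
          |     assert verify(h(tiles[5]), p, root)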
        
           | koliber wrote:
            | You could crop the picture and the result would no
            | longer be signed. However, if someone wanted proof that
            | the cropped picture is genuine, you could offer the
            | uncropped, signed version and do a pixel-by-pixel
            | comparison of the cropped area. Kind of like a chain of
            | provenance back to a signed source.
            | 
            | This could work for other manipulations too. Change the
            | colors in the image; then, if someone wants proof, offer
            | the original and describe the transformation.
        
             | JanisErdmanis wrote:
             | Sure. But what if the cropped part is sensitive content? Or
             | if the video is exceedingly long? Then, providing a full
             | source could become prohibitive.
        
         | nprateem wrote:
         | IIRC there was a post here a while ago about I think Canon or
         | Nikon working on that. But unless there's a chain of trust or
         | journalists upload RAW files, they'd basically be untrustable.
        
         | baobun wrote:
         | This has already been a thing for quite a while and all the
         | major camera manufacturers have or are working on some solution
         | for cryptographic provenance.
         | 
         | https://www.dpreview.com/news/9855773515/sony-associated-pre...
        
           | tenebrisalietum wrote:
            | Betting it'll be something like Cinavia, but for images,
            | and where the embedded data is a key instead of just a
            | simple 1,2,3,4 code. With high color depths, such as
            | greater than 8 bits per channel, there's probably enough
            | "dataspace" to embed data somehow (e.g. imagine embedding
            | a data stream in a band above 20KHz in audio: it can be
            | part of a 22KHz-bandwidth WAV file, for example, but
            | won't be heard at all). The encoding wouldn't survive
            | taking a picture of the picture at a lower color depth,
            | but that's probably OK.
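            | 
            | Roughly the idea (numpy sketch; the scheme is made up,
            | not how Cinavia or any real watermark works): hide a
            | payload in the lowest bit of a 16-bit-per-channel image,
            | where it is visually irrelevant.
            | 
            |     import numpy as np
            |     
            |     def embed(img16, payload: bytes):
            |         """Hide payload bits in the LSB of a 16-bit
            |         image. Survives lossless storage, but not
            |         recompression or re-photography."""
            |         raw = np.frombuffer(payload, dtype=np.uint8)
            |         bits = np.unpackbits(raw)
            |         flat = img16.reshape(-1).copy()
            |         mask = np.uint16(0xFFFE)      # clear the LSB
            |         sel = flat[:bits.size]
            |         flat[:bits.size] = (sel & mask) | bits
            |         return flat.reshape(img16.shape)
            |     
            |     def extract(img16, n_bytes: int):
            |         bits = img16.reshape(-1)[: n_bytes * 8] & 1
            |         bits = bits.astype(np.uint8)
            |         return np.packbits(bits).tobytes()
            |     
            |     img = np.random.randint(0, 2**16, (64, 64, 3),
            |                             dtype=np.uint16)
            |     marked = embed(img, b"signed-key-id:42")
            |     assert extract(marked, 16) == b"signed-key-id:42"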
        
         | whyenot wrote:
         | Yes, but it seems like clever people would not have trouble
         | bypassing this. Just take a picture of a picture.
        
           | Epa095 wrote:
            | That's not really bypassing it. The intention of the
            | digital signature is "I trust John at AP, and this
            | picture has a digital signature stating that this photo
            | was taken by him, so I trust that it's valid". You taking
            | a photo of a photo just means that we now have a photo
            | with your signature on it, and I never trusted you in the
            | first place. To bypass it you have to trick John into
            | faking a picture, or steal his camera.
        
             | sureglymop wrote:
              | That's true, but then why does it need to be a
              | hardware-backed key that's part of the camera?
              | 
              | It will be people who need to learn how to sign,
              | encrypt, and manage their keys.
        
               | ndjshe3838 wrote:
                | Because that proves you didn't photoshop it after
                | taking it or sign an AI-generated image.
        
             | didntcheck wrote:
              | There's no need for that to be done on the camera,
              | then. John's photo manager can just add a PGP signature
              | from his laptop. Except in practice this isn't needed:
              | the fact that it was posted by John on the AP website
              | is enough proof.
        
         | cedws wrote:
          | Won't work. With enough resources it's always possible to
          | get secrets out of hardware. Once compromised, every video
          | and photo ever taken by that camera model can no longer be
          | trusted to be authentic on the signature alone.
          | 
          | Photo/video signing in hardware might actually make things
          | worse by creating a false sense of security. It's better
          | for people to question everything they see rather than
          | trust everything they see.
        
         | geor9e wrote:
          | It's already been implemented in a small number of cameras
          | recently.
         | 
         | A hardware key proves which camera it came from, and a
         | blockchain hash proves at least when the camera first connected
         | to the internet after taking the photo. So, if anyone tries to
         | claim your photo as their own, you need only point to the
         | blockchain to prove it's actually yours. It also proves you
         | didn't use any beauty filters outside the camera hardware
         | security bubble.
         | 
         | https://news.adobe.com/news/news-details/2022/Adobe-Partners...
         | 
          | I predict it will go mainstream in a new iPhone model
          | eventually. Consumers will be sold on the idea of
          | authenticity. They are sick of fakes and filters, but until
          | now have had no way to prove something is real.
        
         | didntcheck wrote:
         | An initiative exists:
         | https://en.wikipedia.org/wiki/Content_Authenticity_Initiativ...
         | 
         | But this is essentially the same problem statement as DRM:
         | provide the user with some cryptographic material, yet limit
         | how they can use it and prevent them from just dumping it.
         | Logically, there will always be a hole. Practically, you can
         | make it very annoying to do, beyond the ability of the average
         | consumer [1], but someone will probably manage, and given the
         | scenarios we often talk about (e.g. state-backed
         | disinformation), there will be plenty of resources spent on
          | doing that. The payoff will cover the cost.
         | 
          | Paradoxically, one could argue that a "95% trustworthy"
          | system could actually do net harm. The higher people's
          | trust in it builds, the greater the fall damage if someone
          | manages to secretly subvert it and use it on just the right
          | bit of disinfo at the right time (contrasting my footnote
          | about DRM).
         | 
          | [1] Hence why claims that DRM is a complete failure miss
          | the point a bit - it's not needed to stop _everyone_.
          | Perhaps _we_ can crack some given DRM system, but the fact
          | that you even have to download a new program is enough to
          | stop a massive number of consumers from bothering.
        
       | joshuanapoli wrote:
        | Digital signatures are one solution: traceability back to an
        | authentic source would be good evidence that the item isn't
        | fake. This might be provided to some degree by the hardware
        | (the sensor in a camera, for example) or by the authoring
        | software.
        
         | 05 wrote:
          | Yeah, I'm sure state actors that are capable of creating
          | exploits using undocumented registers on Apple silicon [1]
          | won't be able to break camera sensor DRM...
         | 
         | [1] https://securelist.com/operation-triangulation-the-last-
         | hard...
        
           | joshuanapoli wrote:
            | It's a level of difficulty that will rule out most
            | producers of fake content.
            | 
            | The point is that tracking the provenance of digital
            | items is certainly going to be more important in the
            | future. A signature by Apple silicon is one piece of
            | evidence. It could also be signed with the photographer's
            | identity. The account that uploads it provides another
            | piece of evidence.
            | 
            | Unfortunately, we should probably not trust anonymous
            | content in the future.
        
       | ThomPete wrote:
        | Nothing different from virus / anti-virus. It will be fine.
        
         | hammyhavoc wrote:
         | https://www.independent.co.uk/news/deepfake-nude-westfield-h...
         | 
         | Is anti-virus really going to fix this?
        
       | robviren wrote:
        | I like to ponder whether it is possible to create something
        | that can actually be trusted. I think the complexity and
        | obscurity of a medium could actually make it suitable as a
        | trustable recording device. For instance, an audio device
        | recording and storing something outside the human hearing
        | range: no model would be trained to generate those
        | frequencies in that format, so it would be useful as a check
        | on whether a recording is real. Something similar could be
        | said about insanely high-resolution imagery. You just need
        | something outside the normal models. There's obvious
        | potential for people to adapt, but it's a short-term option,
        | maybe.
        
       | herbst wrote:
       | I didn't know there was any doubt about that
        
       | wolverine876 wrote:
        | Fundamental law regarding AI should, IMHO, make it illegal
        | to impersonate a human being. Every AI output should clearly
        | state it's from a computer (no using the anthropomorphic word
        | 'intelligence'): 'This text is generated by a computer.'
        | 
        | Give a valid reason to do otherwise. I can't think of one,
        | unless you want to mislead people. Corporations can still
        | have their chatbot customer service and add that tagline. The
        | only reason not to is so their customers think they are
        | talking to a human.
        
         | Calavar wrote:
         | The question is how you would enforce a regulation like that
         | when any guy with a few grand to burn can set up their own rig
         | and start churning out images/video/text using their own
         | finetune of an open model.
        
         | RandomLensman wrote:
         | Do you research who wrote a politician's speech, for example?
         | What is the outcome you want to achieve with such regulation?
        
       | thom wrote:
       | One phenomenon I've seen several times during recent conflicts is
       | people swearing blind that an image must be fake, and even
       | insisting some vague shadow is a sixth finger proving their
       | point, even when the photo later proves to have pretty reliable
       | provenance. It's a reasonable immune response and maybe we'll all
       | end up there but it seems just as worrying and tragic as fake
       | images themselves.
        
       | odyssey7 wrote:
        | They're already undetectable for most people --- media
        | consumers are currently making their own judgement calls to
        | distinguish deepfakes, based on their intuition and what
        | strangers and friends on the internet tell them. They may be
        | detectable in theory, but every security issue is a zero-day
        | for as long as mitigations are lacking due to accessibility
        | or other reasons.
        
       | cudgy wrote:
        | Personally, I prefer the idea that all people be convinced
        | that video can be faked without detection, rather than only a
        | few bad actors knowing this.
        
       ___________________________________________________________________
       (page generated 2024-01-20 23:01 UTC)