_______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
 (HTM) Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
 (HTM)   I tried to prove I'm not AI. My aunt wasn't convinced
       
       
        k_sze wrote 15 hours 34 min ago:
        Another shameless plug for my PeerAuth project, which can also tackle
        this problem.
        
 (HTM)  [1]: https://ksze.github.io/PeerAuth/
       
        SV_BubbleTime wrote 19 hours 29 min ago:
         I have a series of really hot takes loaded up if I ever need to prove
         I’m not AI.
        
        Because no frontier model is allowed to go against the popular
        narratives of the day.
       
        vagab0nd wrote 1 day ago:
         I thought we'd long passed the Turing test, until I tried to implement
         a chat bot.
        
        It's not even close.
        
        It's easy to "pass the Turing test" for 5 minutes. It's extremely hard
        if you try to hold a longer, continuous conversation. Anything longer
        than 10 minutes the user will immediately know it's not human. Some
        problems you'll encounter:
        
        - The bot needs to handle all situations, especially the nonsensical
        ones. This is when the user types "EEEEEEEEEEEEE...", or curse words,
        repeatedly.
        
        - Who would've thought that it's extremely hard to decide when to stop
        talking?
        
         - No matter how well you build the "persona" for the bot, it'll
         eventually converge to the same one, which is that of the LLM itself.
        
        - You'll notice that the bot is ignoring something obvious (e.g. it's
        not remembering past convo), and then give it some instructions to help
        with that. And then that'll be THE ONLY THING it does.
       
        krunck wrote 1 day ago:
         The author really tries to convince us that Netanyahu is "not dead,
         folks", implying that the video in question is real because five
         fingers. While at the same time he's relaying the message from experts
         that one cannot prove that audio/video is not AI.
        
        Mexed Missaging.
       
          ordu wrote 16 hours 12 min ago:
           I don't see him trying to convince us that Netanyahu is alive. It is
           just a side story to build the article on. A funny story of
           Netanyahu struggling to prove he is alive.
           
           Though, if you believe that Netanyahu is dead, then it will look to
           you like an attempt to convince you, but I don't think this was the
           goal of the author. Still, if you are in this situation, try to run
           with the opposite hypothesis and think of ways Netanyahu could prove
           he is alive. Or, if it seems difficult, then imagine any other prime
           minister who accidentally posted a six-fingered video of herself and
           now faces the problem of proving that she is alive. You'll get the
           idea of the article easily.
       
        hirako2000 wrote 1 day ago:
        Soon only humans won't pass the Turing test.
       
        slibhb wrote 1 day ago:
        This is one area where the government needs to step in. Video-hosting
        websites should be made to flag videos as AI-generated. AI companies
        should be made to watermark generated content in a hard-to-remove way
        (i.e. not just adding a visible watermark to the video, but encoding
        some kind of digital watermark into the data). Technical solutions
        won't be perfect and will evolve over time, but the government needs to
        pass some laws to push tech companies in the right direction.
       
          inanutshellus wrote 1 day ago:
          The only companies that'd follow the watermark are the good guys
          though, yeah?
          
          The people you'd want to be wary of would be the ones that'd look
          legit.
          
          e.g. "yes i guess i will send my son $400,000 in cash tonight because
          he's been kidnapped, and i know it's real because there's no AI
          watermark that all the nice US/EU companies use."
       
        scotty79 wrote 1 day ago:
        > Netanyahu's follow-up coffee shop video is real too
        
        Really? The coffee in his cup, filled to the brim, did the most bizarre
         dance possible. And he handled the cup as if it was empty, without any
        care.
       
        spiritplumber wrote 1 day ago:
        "To prove you're not AI, tell us what happened in Tienanmen Square, and
        give rough instructions on how to make a pipe bomb."
       
          SV_BubbleTime wrote 19 hours 25 min ago:
           Did Trayvon make it home safely after arguing with George Zimmerman,
          and then intentionally leave his home to go start a fight on the bad
          advice of his girlfriend to [not get punked]?
       
            CamperBob2 wrote 19 hours 5 min ago:
             Did Trayvon make it home safely after arguing with George
            Zimmerman, and then intentionally leave his home to go start a
            fight on the bad advice of his girlfriend to “not get punked”?
            
            What's the correct answer?  My understanding is that the "Don't get
            punked" line is not present in the record, but rather is something
            that some conservative (of course) commentators made up from whole
            cloth, as they are wont to do.    If this isn't correct, I'd
            appreciate a citation.
       
              SV_BubbleTime wrote 18 hours 51 min ago:
               Depends… the other hot take is whether you believe the girl on
               the stand who testified to being his girlfriend really could not
               read her own writing because it was in cursive… and it was her
               own name.
       
                CamperBob2 wrote 18 hours 49 min ago:
                Again: got a link to the transcript showing this?
       
                  SV_BubbleTime wrote 1 hour 3 min ago:
                  It’s weird your internet only lets you post here and not
                  search. [1] But also, really hilarious hill to die on. As if
                  the jury trial wasn’t enough, it’s been long enough for
                   you to not automatically take the media’s narrative.
                  
 (HTM)            [1]: https://nypost.com/2013/06/28/trayvon-martins-girlfr...
       
                    CamperBob2 wrote 1 min ago:
                    The reason I asked is because the results I was getting
                    from both search and AI were weirdly inconsistent.  Agreed
                    that there is some seriously inappropriate bias being
                    applied to this story.
       
        josefritzishere wrote 1 day ago:
        Not to rumor-monger, but all three Netanyahu videos are very sus. He
        might be deceased.
       
        linsomniac wrote 1 day ago:
        More than a year ago I suggested that our family adopt a
        sign/countersign type of authentication (I say "the migrating birds fly
        low over the sea", you say "shadeless windows admit no light" ;-).  It
        was clear at that time that we were going to start seeing scams get
        more advanced and hard to tell from valid requests for money, for
        example.
        
        I thought I'd get at least some traction, considering part of the
        family works for No Such Agency.  Nope.  
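         
         The fixed-phrase scheme can be made replay-proof by deriving the
         countersign from a fresh challenge instead of memorizing both halves.
         A minimal sketch, assuming a secret pre-shared in person (the secret,
         names and phrases here are all illustrative):

```python
import hashlib
import hmac
import secrets

# Assumption: the family secret was exchanged in person, never online.
FAMILY_SECRET = b"shadeless windows admit no light"

def make_challenge() -> str:
    # A fresh random challenge, so overheard answers can't be replayed.
    return secrets.token_hex(8)

def respond(challenge: str, secret: bytes = FAMILY_SECRET) -> str:
    # Prove knowledge of the secret without ever speaking it aloud.
    # Truncated to 8 hex chars so it can be read over the phone.
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(challenge: str, response: str, secret: bytes = FAMILY_SECRET) -> bool:
    return hmac.compare_digest(respond(challenge, secret), response)
```

         The caller reads out a random challenge, the callee answers with the
         8-character response; a scammer who recorded an earlier call has
         nothing reusable.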
        
        Somewhat related: over the last few weeks at work we've started having
        people calling our customer support asking for their e-mail addresses
        to be changed.    The first one went through, but the scammer somehow
         messed it up and the address bounced. They called back in and the
         support person they talked to recognized by voice that it wasn't the
         same person they'd talked to in the past. Now we've had this happen to
         3 different accounts: the first two times it was people with thick
         Indian accents; the most recent was suspected of being an AI-generated
         voice.
       
          card_zero wrote 1 day ago:
          The sign/countersign still works even if it's unilateral. You say
          "the migrating birds fly low over the sea", they say "I told you
          already, we're not doing this stupid thing", and now they are
          authenticated.
       
        Alen_P wrote 1 day ago:
        This is scary but also kind of hilarious. You should feel proud your
        aunt still judges first before believing anything online. I've heard so
        many stories from friends lately. These scams are getting crazy.
        Scammers are already using pictures of influential people and even
        jumping on video calls pretending to be them.
       
        pdyc wrote 1 day ago:
         I wonder what the captcha equivalent is for AI bots? Ask about taboo
         topics to rule out commercial models, and ask specific reasoning
         questions that trip AI, like walking vs driving to the car wash? Or
         your own set?
       
        elzbardico wrote 1 day ago:
         AI slop detection requires finely developed intuitions that come
         from decades-long exposure to both journalism/marketing slop and
         high-quality literature. Because AI was aligned to hell and back by
         newly graduated low-level journalists.
         
         That's why it always falls back to the same tired formulaic clichés,
         like "Not this, but that", rampant baiting and sensationalism, because
         that's what would get high marks from your typical low-rent liberal
         arts annotator.
       
          iamacyborg wrote 1 day ago:
          > liberal arts annotator
          
          Tell us more about this axe you appear to need to grind.
       
            elzbardico wrote 1 day ago:
            Man, I have nothing against liberal arts per se. On the contrary, I
            think that a tragedy of our time is that people disconnected from
            things like literature, history and art in the name of
            over-specialization and an excessively utilitarian approach towards
            education.
            
             But I am very critical of what passes for the modern liberal arts
             academic establishment. To avoid a very long text, let's say that
             my view is heavily influenced by Ortega y Gasset.
       
        tom-blk wrote 1 day ago:
        This is going to cause big trouble in the future
       
          SV_BubbleTime wrote 19 hours 23 min ago:
          Yes. And I’m here for it.
          
          Necessity is the mother of invention.
          
          It’s absolutely asinine that we’re still relying on paper birth
          certificates and social security numbers, and stupid tax systems.
          I’m interested in breaking everything we have to see what comes
          next.
       
        bluefirebrand wrote 1 day ago:
        The damage AI is causing to public and interpersonal trust is insanely
        high, and it's only going to get worse
        
        I truly believe that it is a crime against humanity
       
        mystraline wrote 1 day ago:
        Tl; dr. Garbage article whitewashing Neten-yahoo and israel.
        
        But about deepfakes, these exist to re-add 6 fingers. Once you do this,
        you can claim the video was generated.
        
 (HTM)  [1]: https://www.etsy.com/listing/1667241073/realistic-silicone-six...
       
        paganel wrote 1 day ago:
        The author should have mentioned that this was partly an article to
        whitewash Netanyahu, but this coming from the BBC (and from the
        mainstream British media as a whole) that was to be expected.
       
          Dylan16807 wrote 19 hours 54 min ago:
           How the hell is "he's real" whitewashing?
       
        kriro wrote 1 day ago:
        Am I too naive in thinking the answer is rather simple? Cryptographic
        proofs (digital signatures). For text this should be trivial and for
        streaming video/audio you can probably hash and sign packets or maybe
        at least keyframes or something?
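         
         For streaming, one way to sketch the packet/keyframe idea: hash-chain
         the chunks so nothing can be dropped or reordered, then authenticate
         the chain head. An HMAC stands in here for a real public-key
         signature (e.g. Ed25519), which would let anyone verify without
         holding the key; all names are illustrative:

```python
import hashlib
import hmac

# Stand-in for a device's Ed25519 private key (illustrative only).
SIGNING_KEY = b"camera-device-key"

def chain_digest(chunks) -> bytes:
    # Hash-chain the stream: dropping, reordering, or altering any chunk
    # changes the final digest.
    h = b"\x00" * 32
    for chunk in chunks:
        h = hashlib.sha256(h + hashlib.sha256(chunk).digest()).digest()
    return h

def sign_stream(chunks, key: bytes = SIGNING_KEY) -> str:
    return hmac.new(key, chain_digest(chunks), hashlib.sha256).hexdigest()

def verify_stream(chunks, tag: str, key: bytes = SIGNING_KEY) -> bool:
    return hmac.compare_digest(sign_stream(chunks, key), tag)
```

         In practice the tag would be emitted periodically (say, per keyframe
         group) so a live stream can be verified as it arrives.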
       
          bitmasher9 wrote 1 day ago:
           I think this is naive; it just kicks the can. How do you trust
          that the signer is human?
       
            kriro wrote 1 day ago:
            True, I can only know that the owner of the private key signed but
            not how the document was created. But I suppose there is some trust
            involved that a person I know who signs doesn't sign some AI
            generated stuff.
            To establish the initial link, I suppose we need something more
            mainstream/scalable than the old key signing parties I remember
            from CCC etc.
            
            But at least for friends and family it should be possible to create
            some flow where every member has a key-combo and you trust them to
            only sign stuff they wrote etc. and have local mini-keysign
            parties.
       
              pixl97 wrote 1 day ago:
              >and you trust them to only sign stuff they wrote
              
              You have far too much faith in humanity. The majority of my
              extended family members are not smart enough to resist continuous
              attacks and would eventually not only sign, but give away the key
              in question.
              
              Simply put I think we are stretching humanity farther than
              intellectual ability allows in a lot of people.
       
              bitmasher9 wrote 1 day ago:
               Do we need new key signing for friends/family? I can trust that
               all messages coming from a friend/family’s account originated
               from them, or else their account was compromised. I don’t see
               how a ‘non-AI’ key adds enough trust to be worth it.
       
        hk1337 wrote 1 day ago:
        Show up in person, she's still not convinced.
       
          yunnpp wrote 20 hours 19 min ago:
          I can already see the Nextdoor post: "Watch out for this man who is
          knocking doors around 10th street! He knocked on mine claiming to be
          my nephew and even looked the part. Already called the police but
          they arrived late."
       
        ui301 wrote 1 day ago:
        I've started to prove it (here on LinkedIn, countering its
        Moltbookification) via my bad handwriting – the final frontier of
         AGI. Finally, a lifetime of training to write more or less illegibly
         pays off. [1] It feels good to connect with humans that way.
        
        The same I am trying to do with my (vibe coded!) site "jetzt" (German
        for "now"), to which I photo blog impressions from everyday life. Only
        insiders will know what they mean beyond their aesthetic, and it also
        feels like a good way of human connection in these times. [2] (No food,
        no plane wings, just ugly banalities and beautiful nothingness from
        everyday life.)
        
 (HTM)  [1]: https://www.linkedin.com/posts/fabianhemmert_handwriting-vs-al...
 (HTM)  [2]: https://jetzt.cx/
       
          ui301 wrote 1 day ago:
          Here's also a nice project, the "Reverse Turing Test": [1] (I.e.
          trying to hide the fact that you're human, among a group of AIs)
          
 (HTM)    [1]: https://ars.electronica.art/panic/de/view/reverse-turing-tes...
       
        octopoc wrote 1 day ago:
        Just say something that would violate AI safety. Then you can be sure
        they’re a real human.
        
        “Auntie, it’s me! N*** k** f**! X is really a man! ** did 9/11!”
        
        “Oh it really is you Johnny!”
        
        We’re all going to have to start communicating this way. Best of
        luck.
        
        I offer consulting services on the side to help professionals hone
        these skills. $250 / hour.
       
          KurSix wrote 15 hours 31 min ago:
          That only proves the scammer isn't using an OpenAI or Anthropic API.
          Spinning up Llama 3 70B Uncensored on a rented instance and hooking
          it up to an unfiltered voice engine is literally a two-hour job.
           Local weights couldn't care less about morals or safety guardrails.
       
            guywithahat wrote 3 hours 55 min ago:
             Could you say that stuff with Llama 3? Llama 2 famously had a good
             uncensored version, but I thought they put a lot of work into
             ruining Llama 3 so you couldn't fine-tune it to say bad things.
            Even Grok would be hard to use in such a way that you could say
            phrases like that naturally.
            
             I do believe it's possible, but as far as I am aware, getting LLMs
             to say that sort of stuff is still pretty difficult.
       
          readthenotes1 wrote 1 day ago:
          Where are the em dashes, "octopoc"?
       
          arjie wrote 1 day ago:
          This was a natural thing to try so I did and even Grok will simply
          obey instructions to say all those. You don't need one of those
          ablated open models.
       
          anal_reactor wrote 1 day ago:
          Yes, this was exactly my thought. The caveat is, the phrases that
          most models refuse to say are the phrases that most people don't want
          to hear.
       
          sharperguy wrote 1 day ago:
           Only proves you're not a corporate model, rather than a locally
           running model that's been trained to allow saying that.
       
          wat10000 wrote 1 day ago:
          Don’t forget Tiananmen Square to catch the Chinese models.
       
            readthenotes1 wrote 1 day ago:
            Winnie the
       
            ui301 wrote 1 day ago:
            The car wash at Tiananmen Square is 150 meters away ...
       
              mikkupikku wrote 1 day ago:
              *Tank wash
       
          slekker wrote 1 day ago:
          That's a bargain Johnny boy! My company gives me $250 in AI tokens to
          use every day!
       
        hgo wrote 1 day ago:
        Remember hotornot.com? Soon we can muse at realornot.com
       
        amelius wrote 1 day ago:
        > "Six fingers is not an AI thing anymore," Carrasco says. The best AI
        tools stopped adding extra fingers years ago
        
        How was this solved, actually? More training data, or was there more to
        it?
       
          SV_BubbleTime wrote 18 hours 54 min ago:
          One was more parameters, sure.
          
          More training on fingers specifically.
          
           Image VAEs (variational autoencoders) are functions that compress
           the latent (working) image down. The earlier VAEs would mess up fine
           details. At a most basic level, just picture compression artifacts.
           
           Training against bad previous work with six fingers.
           
           Models working at 1024 instead of 512.
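           
           To put rough numbers on the resolution point (assuming an SD-style
           VAE with 8x spatial downsampling, which is typical but not
           universal):

```python
# Assumption: an SD-style VAE downsamples each spatial dimension by 8.
DOWNSAMPLE = 8

def latent_extent(feature_px: int) -> float:
    # How many latent "pixels" a feature of the given width occupies.
    return feature_px / DOWNSAMPLE

# A ~24 px wide finger in a 512px image spans only 3 latent pixels, so
# fine structure is easily lost; at 1024px the same finger is ~48 px,
# i.e. 6 latent pixels -- twice the detail for the VAE to reconstruct.
```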
       
        a2128 wrote 1 day ago:
        AI companies love to hype up how AI will provide a great benefit to the
        economy and transform intellectual labor, but I hardly see any
        discussion about how much damage it will cause to the economy when you
        can no longer trust that you're on a video call with an actual person.
        Maybe the person you're interviewing is actually an AI impersonating
        someone, or maybe they never existed in the first place. Information
        found online will also no longer be trustable, footage of some incident
        somewhere may have been entirely fabricated by AI, and we already
        experience misleading articles today.
        
        Money will have to be wasted on unnecessary flights to see stuff or
        meet people in-person instead of video, and the availability of actual
        information will become more and more limited as the sea of online
        information gets polluted with crap. It may never be possible to
        calculate the full extent of the damage in monetary value.
       
          whatever1 wrote 18 hours 32 min ago:
          We need some sort of end to end verification. Aka from the sender
          camera to the receiver display / speakers.
          
          Maybe Apple will be able to pull it off? Aka if you FaceTime me I
          know that you are a person
       
          kelvinjps10 wrote 19 hours 54 min ago:
           What do you do when people don't protect their signatures? There are
           already scams where people get tricked into forwarding messages from
           their own numbers to other people or email.
       
          47282847 wrote 1 day ago:
           Honestly? Maybe that’s part of the solution, not the problem. I
           already see people, myself included, going back to real-world, local
           interactions and connections.
       
          esafak wrote 1 day ago:
          It is already a problem. Try interviewing people from LinkedIn and
          you'll face an onslaught of imposters.
          
 (HTM)    [1]: https://www.darkreading.com/remote-workforce/north-korean-op...
       
            Bombthecat wrote 10 hours 20 min ago:
             If you stop hiring, or hire only unicorns from people you know or
             from your network, it's a solved problem!
       
          thisisit wrote 1 day ago:
          Laws will be passed to make it "safer". Just like it is happening
          with the id verification systems. Every image or video gen will
          require a watermark. Something visible which cannot be removed easily
          or hidden which can be detected and blocked. Access to models which
          do not comply will be made harder through id verification checks or
          something.
          
          There will be some regulatory capture in between.
          
           World will kick into gear only when something really bad happens.
           Maybe an influential person - rich or a politician - gets fooled
           into doing something catastrophic due to a deepfake video/image.
           Until then, normal people being affected isn't going to move the
           needle.
       
            red-iron-pine wrote 1 day ago:
            > Laws will be passed to make it "safer". Just like it is happening
            with the id verification systems. Every image or video gen will
            require a watermark. Something visible which cannot be removed
            easily or hidden which can be detected and blocked. Access to
            models which do not comply will be made harder through id
            verification checks or something.
            
             I've thought about this off and on, and how to implement it. Not
             easily, was my general takeaway.
             
             Or rather, it's easy to implement, but you're in an adversarial
             relationship with bad actors, and easy implementations may be
             easily broken.
            
            e.g. your certs gotta come from somewhere and stay protected, and
            how do you update and control them.  key management for every
            single camera on every phone, etc.
       
            Miraste wrote 1 day ago:
            Verification needs to work the other way around, some kind of
            verifiable chain of trust for photos and videos from real cameras.
            Watermarking all generated media is impossible.
       
              petesergeant wrote 1 day ago:
              You can bootstrap some of it. I wrote the following for solving
              this ~9 years ago. Kinda wish I'd done the PhD now:
              
 (HTM)        [1]: https://github.com/pjlsergeant/multimedia-trust-and-cert...
       
              SirMaster wrote 1 day ago:
               I don't really understand why this is so hard or why it wasn't
               just done from the get-go.
              
              Just have Apple and Google digitally sign videos and photos
              recorded from phones and then have Google and Meta, etc display
              that they are authentic when shown on their platforms.
       
                rcxdude wrote 1 day ago:
                It's pretty much impossible to do this in a useful way, _and_
                it would also cement even more control over the media landscape
                to those companies.
       
                Miraste wrote 1 day ago:
                It becomes a hard problem quickly when you introduce editing,
                and most photos and videos on social media are edited. I'm not
                sure how it would work. It seems more feasible than universal
                watermarks, though.
       
                alpha_squared wrote 1 day ago:
                You're talking about the metadata of the files, which can
                always be edited and someone will inevitably try to make
                software to do exactly that. Also, Adobe's proposal for
                handling generated content is exactly this and they're not able
                to get buy-in from other companies.
       
                  SirMaster wrote 1 day ago:
                  Edit the metadata in what way? It's a cryptographic hash.
                  
                  If the bits that make up the video as was recorded by the
                  camera don't match the hash anymore, then you know it was
                  modified. That doesn't mean it's fake, it just means use
                  skepticism when viewing. On the other hand the ones that have
                  not been modified and still match can be trusted.
       
                    SAI_Peregrinus wrote 1 day ago:
                    Essentially 0% of professional photography or videography
                    uses "straight out of the camera" (SOOC) JPEGs or video.
                    It's always raw photos or "log" video, then edited to look
                    like what the photographer actually saw. The signal would
                    be so noisy as to be useless.
       
                      SirMaster wrote 1 day ago:
                      But we are talking about consumer devices here.
                      
                       Are you saying Apple and Google can't put a secure hash
                       into the output from their camera apps that applies
                       after their internal processing is done?
       
                        KurSix wrote 15 hours 44 min ago:
                        Sure they could, but then you trim the video by 2
                        seconds, tweak the colors, or just send it over
                        WhatsApp, which recompresses the file with its own
                        encoder. The hash breaks instantly. Cryptography
                        protects bits, but video is about visual meaning. The
                        slightest pixel modification kills the hardware
                        signature. Plus, it does absolutely nothing to fix the
                        "analog hole" problem - a scammer can just point that
                        cryptographically signed iphone camera at a
                         high-quality deepfake playing on a monitor.
       
                          SirMaster wrote 8 hours 58 min ago:
                           I would assume WhatsApp would read the hash and
                           verify it when the video is chosen to be sent to
                           someone, so the receiver would see that the video
                           selected by the sender was indeed authentic.
                           Assuming you trust Meta to re-encode it and not mess
                           with it.
                          
                          As far as recording a monitor, I guess, but I feel
                          like you can tell that someone is recording a
                          monitor.
                          
                           As far as editing, no, it won't work in those cases,
                          but the point here is not to verify ALL videos, but
                          to have an easy way for people to verify important
                          videos. People will learn that if you edit it, it
                          won't be verified, so they will be less inclined to
                          edit it if they want to make it clear it's an
                          authentic video. Think like people recording some
                          event going down on the streets etc or recording a
                          video message for family and friends.
                          
                          If AI video generation is going to get that good,
                          don't you think it would be a good idea to have a way
                          to record provably authentic videos if we need? Like
                          a police interaction or something.  There is no real
                          reason to need to edit that.
                          
                           Also, could a video hash just be computed every X
                           seconds, giving the user the choice to trim the
                           video at each of those intervals?
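                           
                           That per-interval idea can be sketched directly:
                           hash fixed-size chunks, have the device sign the
                           list, and a clip trimmed at chunk boundaries still
                           matches a contiguous run of the original hashes
                           (chunk size and names here are illustrative):

```python
import hashlib

def chunk_hashes(video_bytes: bytes, chunk_size: int) -> list:
    # One hash per fixed-size interval; the device would sign this list.
    return [hashlib.sha256(video_bytes[i:i + chunk_size]).hexdigest()
            for i in range(0, len(video_bytes), chunk_size)]

def verify_trimmed(trimmed: bytes, original_hashes, chunk_size: int,
                   start_chunk: int = 0) -> bool:
    # A clip trimmed to whole chunks must match a contiguous run of the
    # signed hashes, starting at some chunk index.
    got = chunk_hashes(trimmed, chunk_size)
    return len(got) > 0 and original_hashes[start_chunk:start_chunk + len(got)] == got
```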
       
          friendzis wrote 1 day ago:
          > Information found online will also no longer be trustable
          
           Most information you can access publicly, including Wikipedia, is
           the result of an astroturfing fight. Most information online has not
           been trustworthy for a double-digit number of years now.
          
          > we already experience misleading articles today
          
           Again, this has been happening for decades.
          
          > footage of some incident somewhere may have been entirely
          fabricated by AI
          
          Not like we did not already have doctored footage plaguing the
          public.
          
          > Money will have to be wasted on unnecessary flights to see stuff or
          meet people in-person instead of video
          
          Necessity to inspect the supply chain for snake oil has been a thing
          since at least EA (the Nasir one).
          
          We may be dealing with the problem of spam, but the problems have
          already been there.
       
            pstuart wrote 1 day ago:
            All these are true, but just as it happened before the internet,
            it's accelerating even further. There are clear costs that cannot
            just be hand waved away.
       
              ottah wrote 1 day ago:
               I'm not sure we can say it's accelerating. The techniques that
               adversarial actors use have always been changing, and when they
               shift tactics it can take a while for an adequate defense to be
               adopted. We're still dealing with SQL injection in the OWASP top
               ten. What I think would indicate an acceleration is the most
               security-oriented organizations continuously failing to defend
               against new attacks. If we start hearing about JPMorgan and
               Google getting popped every month or two, we're in trouble.
       
                ACS_Solver wrote 1 day ago:
                The acceleration is in the decrease of the cost to produce
                misinformation.
                
                Misinformation in pure text form has always been cheapest, but
                is even cheaper now that text generation is basically a solved
                problem. Photos have been more expensive, it used to take time
                and skill with a photo editor to produce a believable image of
                an event that never happened. The cost is now very low, it's
                mostly about prompting skills. Fake videos were considerably
                harder, especially coupled with speech. Just a few years ago I
                could assume any video I saw was either real or a
                time-consuming, deliberate fake.
                
                We've now entered a time where fake videos of famous people
                take actual effort to tell apart, and can be produced for a low
                cost - something accessible to an individual, not a big
                corporation. We can have an entirely fake video of Trump, or
                another world leader, giving a speech and it will look like the
                real thing, with the audiovisual "tells" of it being fake
                getting harder to notice every few months.
       
                  friendzis wrote 1 day ago:
                  > The acceleration is in the decrease of the cost to produce
                  misinformation.
                  
                   So it's a spam issue. And normally, while annoying, spam is
                   possible to fight; however, on these topics we have built
                   structures that disable the very mechanisms allowing us to
                   fight spam. That's worrying.
                  
                  The fact that someone can instruct their computer to
                  astroturf their flight tracking app on some forum for nerds
                  is irrelevant - people have been instructing "marketing
                  agencies" to astroturf their brand of caffeinated sugar water
                  on tv, radio and press for decades and centuries. For a very
                  long time the "traditional media" was aware that their
                  ability to sell astroturfing capacity was hanging on their
                  general trustworthiness. Then the internets rose to
                  prominence, traditional media followed by selling more and
                  more of their capacity to astroturfers. Now we have a
                  worrying situation that the internets might be spammed by
                  astroturfers a bit too much, but the backup is broken
                  already. Now that's truly frightening.
                  
                  Welcome to the post-truth world, where objective references
                  outside of your own village cannot exist.
       
                    pstuart wrote 1 day ago:
                    It's an algorithm issue. When people hold a media
                    consumption device in front of their face all day and the
                     algorithms are gamed, then it's literally a brainwashing
                    device.
       
                      Dylan16807 wrote 20 hours 7 min ago:
                      It is not an algorithm issue.  It would still be a huge
                      problem with zero algorithmic social media.
       
          collinmcnulty wrote 1 day ago:
          "Is this a deepfake video call" is a major plot point in a pretty big
          movie currently in theaters, so I think this is getting into the
          broader zeitgeist.
       
          chistev wrote 1 day ago:
          We are still in the early stage of AI and already I struggle to tell
          what is real or fake on my Twitter feed. It will only get better in
          its deception with time.
          
          You know those incriminating Epstein photos with his associates? A
          few years from now a common defense from people like that would be
          that the photos were AI generated, and it would be difficult to prove
          them wrong beyond reasonable doubt.
          
          People in previous cases already attempted to dismiss incriminating
          pics of themselves as being the work of clever Photoshop artists.
       
            Bombthecat wrote 10 hours 21 min ago:
            No No
            
             AI has plateaued, it's not getting better!
       
          nslsm wrote 1 day ago:
          If anything deepfakes will be good for the economy because if you
          can’t do business with people who are far away it becomes harder to
          outsource.
       
            bitmasher9 wrote 1 day ago:
             In general, barriers to trust/trade are bad for the economy.
       
          thunky wrote 1 day ago:
          > damage it will cause to the economy when you can no longer trust
          that you're on a video call with an actual person
          
          What damage are you talking about?
          
          I'm not sure I understand why it matters that there is no real person
          there if you can't actually tell the difference.  You're just
          demonstrating that you don't actually need a human for whatever it is
          you're doing.
       
            bigfishrunning wrote 1 day ago:
            Your wife or mother calls you or video calls you and says to meet
            her somewhere, or to send money, or to pick up groceries or
            whatever. Does it not matter that it wasn't her? Could it be
            someone trying to manipulate you into going somewhere, to be robbed
            or whatever? At any rate, you'll need to verify that information
            came from the source you trust before you act on it, and that
            verification has a cost.
            
            The damage is to the trust we have in our communication media. The
            conclusion here is that every person is trivial to impersonate;
            that's the damage.
       
              thunky wrote 1 day ago:
              Not disagreeing, but the context of GP was
              business/economy/hiring.
              
              Also it was already possible for someone to impersonate your
              mother via text or similar, and even easier to pull off.
       
                bigfishrunning wrote 1 day ago:
                Ok fine, let's put it in the context of business. Your
                competitor impersonates your customer, gives you bad
                instructions. After following the bad instructions, you lose
                the contract with your customer, and your competitor (the
                attacker) is free to try and replace you.
                
                If you got a suspicious text, the logical thing is to call up
                the person who sent it and try to verify it. AI impersonation
                makes that much harder.
       
                  Habgdnv wrote 1 day ago:
                  Or even better, open the on-prem AI portal and type something
                  like "I just got a suspicious call from client X, but I am on
                  a lunch break. Call him and use a fake video of me. Ask him
                  if what he said is true..."
       
                  thunky wrote 1 day ago:
                  > If you got a suspicious text, the logical thing is to call
                  up the person who sent it and try to verify it
                  
                  The communication channel is what you trust. So you would
                  call the person using that trusted channel.
                  
                  It's just like when you get a scam email or popup from
                  "Microsoft" saying your laptop is compromised and you need to
                  call their number ASAP.
       
                contagiousflow wrote 1 day ago:
                You don't think people getting scammed is part of the economy?
       
            esseph wrote 1 day ago:
            Imagine how this plays out in courtrooms the world over for
            evidence.
            
            We're in deep shit.
       
            rdevilla wrote 1 day ago:
            Because what you are actually doing is exchanging symbols, tokens,
            if you will, that may be redeemed in a future meatspace rendezvous
            for a good or service (e.g. a job, a parcel). These tokens are
            handshakes, contracts, video calls, etc. to be exchanged for the
            actual things merely represented therein.
            
            Instead what we have now with AI is people exchanging merely the
            tokens and being contented with the symbol in-and-of itself, as
            something valuable in its own right, with no need for an actual
            candidate or physical product underlying the symbol.
            
            There is a clip by McLuhan I can't be assed to find right now where
            he says eventually people will stop deriving pleasure from the
            products themselves and instead derive the feelings of (projected)
            accomplishment and pleasure from viewing advertisements about the
            product. The product itself becomes obsolete, for all you actually
            need to evoke the desired response is the advertisement, or the
            symbol.
            
            A hiring manager interviewing an AI and offering it a job is like
            buying the advertisement you just watched, and.... that's it. No
            more, the transaction is complete.
       
              pixl97 wrote 1 day ago:
              >McLuhan
              
              Hmm, this guy may have been on to something
              
              >Instead of tending towards a vast Alexandrian library the world
              has become a computer, an electronic brain, exactly as an
              infantile piece of science fiction. And as our senses have gone
              outside us, Big Brother goes inside. So, unless aware of this
              dynamic, we shall at once move into a phase of panic terrors,
              exactly befitting a small world of tribal drums, total
              interdependence, and superimposed co-existence. [...] Terror is
              the normal state of any oral society, for in it everything
              affects everything all the time. [...] In our long striving to
              recover for the Western world a unity of sensibility and of
              thought and feeling we have no more been prepared to accept the
              tribal consequences of such unity than we were ready for the
              fragmentation of the human psyche by print culture.
              
              --The Gutenberg Galaxy, 1962
       
                rdevilla wrote 1 day ago:
                Thank you. I will add this to the list.
       
            chii wrote 1 day ago:
            The grandparent post has the belief that human interaction is
            intrinsically better. Not sure i agree, but i can understand the
            POV.
            
            However, the increase in fake videos that are difficult to tell
            from real is indeed a potential issue. But the fact that
            misinformation today is already so prevalent is evidence that
            better video doesn't make it any worse than it already is imho.
       
              collinmcnulty wrote 1 day ago:
              You're not sure if human to human interaction is intrinsically
              more valuable than a human talking to a facsimile? That feels
              like a very dangerous position to hold for one's ethical
              calculations and general sanity. I'm clinging tightly to the
              value of the bond with other people, even the passing connection,
              but certainly with my family members as this article is about.
       
                chii wrote 18 hours 19 min ago:
                 i much prefer using the ATM, self-checkouts and an e-commerce
                 website over having to talk to somebody at a branch to get
                 money, buy my groceries, or book a holiday.
       
                pixl97 wrote 1 day ago:
                Human to human may be more valuable, but that may not have much
                to do with the truth in their statements. For example if your
                relatives are hooked up to a constant misinformation feed it
                gets to become problematic to communicate and deal with them.
       
            skydhash wrote 1 day ago:
            > What damage are you talking about?
            
            Not GP, but there's a lot of damage that can be done with
            impersonation.
       
          Forgeties79 wrote 1 day ago:
          > footage of some incident somewhere may have been entirely
          fabricated by AI,
          
           Or the opposite, where people attempt to get out of trouble by
           dismissing real evidence as “AI”
       
            bigfishrunning wrote 1 day ago:
            Either way, the lack of trust is the damage.
       
              Forgeties79 wrote 1 day ago:
              Definitely
       
          roflmaostc wrote 1 day ago:
          Partially agree.
          However, this problem has existed with scam e-mails since the 90s.
          
          For me the solution is in signed e-mails and signed documents. If the
          person invites me to a online meeting with a signed e-mail, I trust
          that person that it's really them.
          
           Same for footage of wars, etc. The journalist taking it basically
           signs the videos and vouches for their authenticity. If it turns
           out to be AI generated, then we would lose trust in that person
           and wouldn't use their material anymore.
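           A minimal sketch of that sign-then-verify idea, using Python's
           stdlib HMAC as a stand-in for the asymmetric signatures PGP or
           S/MIME would actually use (the key and invite text here are
           invented for illustration):

```python
import hashlib
import hmac

# A shared secret stands in for the sender's private key in this sketch;
# real signed e-mail (PGP/S/MIME) uses asymmetric key pairs instead.
KEY = b"example-signing-key"

def sign(message: bytes) -> str:
    """Produce a hex signature over the message."""
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    """Recompute and compare; any tampering with the message breaks it."""
    return hmac.compare_digest(sign(message), signature)

original = b"Meeting invite: Tuesday 10:00, room 4"
sig = sign(original)

assert verify(original, sig)  # untouched message checks out
assert not verify(b"Meeting invite: Tuesday 11:00, room 4", sig)
```

           The hard part is not the check itself but key distribution: the
           recipient must already hold something that ties the key to the
           sender, which is the problem discussed further down the thread.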
       
            SomeUserName432 wrote 13 hours 19 min ago:
            > If the person invites me to a online meeting with a signed
            e-mail, I trust that person that it's really them.
            
            In the interview scenario, generating an email signature is hardly
            beyond what an AI can do.
            
             You have no prior knowledge of this person or his signature; it's
             not some government-issued ID. It's in essence just random data
             unless you know the person to be real.
       
            pjaoko wrote 19 hours 1 min ago:
             > If it is AI generated, then we would lose trust in that person
            
            You are assuming that only you can generate fake AI videos of
            yourself.
       
              nsomaru wrote 18 hours 7 min ago:
              OP was talking about journalists attesting to the authenticity of
              video they produce
       
            strogonoff wrote 1 day ago:
            As with any problem, scale changes its nature.
            
            With cash, you can only steal so much (or have transactions of up
            to certain size) until you run into geographical and physical
            constraints. With cryptocurrency, it’s possible to lose any
            amount.
            
            With humans writing scam emails, you can only have so many of them
            until one blows the whistle. With LLMs, a single person can
            distribute an arbitrary amount.
            
            At some point, quantity becomes a new quality, and drawing a
            parallel becomes disingenuous because the new quality has no
            precedent in human history.
       
              pixl97 wrote 1 day ago:
              > (or have transactions of up to certain size)
              
              And by that you mean tens of millions to billions right? Bank
              transfer scamming/fraud is a thing.
       
                strogonoff wrote 1 day ago:
                The highlighted parallel is usually drawn between
                cryptocurrency and cash, not between cryptocurrency and banks.
                With both cash and cryptocurrency, as is the idea behind the
                analogy, 1) there’s no intermediary and 2) once it’s gone,
                it’s gone. Obviously, the banking system is not immune to
                fraud (not sure why you think I made that claim, unless your
                definition of “cash” includes electronic transfers), but
                banks and/or payment systems can (and do) resolve these cases
                and have certain KYC requirements.
       
            hansonkd wrote 1 day ago:
             I mean, emails were and still are a huge security risk. Sometimes
             I'm more scared of employees opening and engaging with emails than
             I am of anything else.
       
            mk89 wrote 1 day ago:
            There are people hosting agents online to talk to other agents etc.
            on their behalf. How difficult is it to just instruct such an agent
            to do the tasks you mentioned? You're assuming it's done by "bad
            actors" while it's most likely just going to be done by "everyone"
            that knows how to do it.
       
            TheOtherHobbes wrote 1 day ago:
            How do you prove the signature isn't fake?
            
            Ultimately ID requires either a government ID service, a third
            party corporate ID service, or some kind of open hybrid - which
            doesn't exist.
            
            All of those have their issues.
       
              ordu wrote 16 hours 29 min ago:
              > Ultimately ID requires either a government ID service, a third
              party corporate ID service,
              
              These are valid approaches to the problem, but they are not
              necessary.
              
              > or some kind of open hybrid - which doesn't exist.
              
              PGP exists for decades. It doesn't have a great UX, it isn't used
              outside of its narrow niches, but it exists and does exactly
              this.
       
                heavyset_go wrote 3 hours 10 min ago:
                PGP works if you vouch for keys in person, both of you are
                honest and can be trusted to act in good faith when not in
                person, have good key chain and rotation hygiene, and the
                private keys can't be exfiltrated.
       
                KurSix wrote 15 hours 57 min ago:
                Picture this: your grandma calls you in a panic, and you tell
                her, "Drop me your public PGP key so I can verify the
                signature".. PGP is dead outside of niche geek circles exactly
                because key management is basically an unsolvable problem for
                the average person
       
                  ordu wrote 10 hours 27 min ago:
                  > PGP is dead outside of niche geek circles exactly because
                  key management is basically an unsolvable problem for the
                  average person
                  
                  Can this problem be solved with better software?
                  
                   I believe it can; it's just that the average person doesn't
                   need PGP. There is no demand for software solving this
                   problem, therefore no software for it.
                  
                   The problem could be solved with, say, a store of known PGP
                   public keys along with their history (where each key was
                   acquired) and a simple algorithm that calculates trust in a
                   key as a probability of it being valid (or whatever term
                   cryptographers would use in this case).
                  
                   You could start with the PGP keys of people you know,
                   getting them offline as QR codes and marking them as "high
                   trust", then pull further keys stored on their devices
                   (lowering the trust levels along the way). There are some
                   issues with how to calculate the probability, because when
                   we pull the same key from different sources we can't know
                   whether their reported trust levels are independent
                   variables, but I believe you can deal with that by pulling
                   the whole chain of transfers of the key, starting from the
                   key's owner and ending at your device.
                  
                   This is just a rough idea of how it could be built. Maybe
                   other solutions are possible. My point is: the ugliness of
                   PGP is a result of PGP being made by nerds, for nerds. There
                   is no demand for PGP-like solutions outside of nerd
                   communities. But maybe LLM-induced corrosion of trust will
                   create demand?
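                   That trust-propagation idea can be sketched as a toy
                   calculation (Python; every name and trust value below is
                   invented for illustration, and it sidesteps the
                   independence problem mentioned above):

```python
# Toy model of the trust-propagation idea: a key pulled through a chain
# of devices gets the product of the per-hop trust values, and if several
# chains deliver the same key we keep the most trustworthy one.

def chain_trust(hops: list[float]) -> float:
    """Trust of one acquisition chain: product of per-hop trust levels."""
    trust = 1.0
    for t in hops:
        trust *= t
    return trust

def key_trust(chains: list[list[float]]) -> float:
    """Overall trust in a key: the best of its acquisition chains."""
    return max(chain_trust(c) for c in chains)

# Alice's key: once via a QR code scanned in person (high trust),
# once pulled through two intermediate devices (trust decays per hop).
chains_for_alice = [
    [0.99],            # offline QR exchange
    [0.9, 0.8, 0.8],   # owner -> friend's device -> my device
]
print(key_trust(chains_for_alice))  # 0.99 -- the in-person chain wins
```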
       
              SirMaster wrote 1 day ago:
              Same way security cameras prove that they are authentic camera
              recordings that have not been modified. If modified, the video
              will no longer match the signature that was generated with it.
       
              olmo23 wrote 1 day ago:
              I think he was referring to a cryptographic signature, possibly
              using the "web of trust" to get the key. I'm not convinced we
              need central authority to solve this.
       
              tenacious_tuna wrote 1 day ago:
              people at my org were gleeful when they learned they could hook
              LLMs into Slack. Even if we had some reliable, well-used
              signature system, I think people would just let AI use it to send
              emails on their behalf.
       
                MarsIronPI wrote 1 day ago:
                 Well, we should treat that as their own output. If it's crap,
                 treat it the same way you would if they produced the crap
                 themselves.
       
                Ajedi32 wrote 1 day ago:
                That's a different problem though. It's doing it on their
                behalf, not on behalf of a scammer who's impersonating them.
       
                  pixl97 wrote 1 day ago:
                  Until their computer is taken over....
       
                bigfishrunning wrote 1 day ago:
                If the AI age has taught me anything, it's that most people do
                not care what their output is. They'll put their name on
                anything, taste or quality does not matter in the least. It's
                incredibly depressing.
       
                  daheza wrote 1 day ago:
                   Enshittification never stopped; we just stopped talking
                   about it because it became normal. Quality does not matter
                   anymore. I agree it's depressing, seeing AI slop being
                   pushed and no one even putting in the time or effort to say
                   this is bad and you should feel bad.
       
            Forgeties79 wrote 1 day ago:
            Spam emails in the 90’s don’t come remotely close to the
            operations people can set up by themselves with AI now. It
            doesn’t even compare.
       
          whateverboat wrote 1 day ago:
          What's the solution apart from an identity providing service?
       
            jjulius wrote 1 day ago:
            Touching grass. Valuing in-person connections. Focusing on the
            community, meatspaces and actual people around you.
            
            Getting off of the Internet and off of our devices. It's not just a
            solution to AI/LLMs modifying our reality but also a solution to
            [gestures wildly at the cultural, societal and global communication
            impacts of the past ~16 years].
            
            This sentiment is unpopular, but it's true. Prioritize true
            connections and experiences.
       
            adithyassekhar wrote 1 day ago:
            That's just shifting the problem not solving it.
       
            Gigachad wrote 1 day ago:
            I’m seeing a huge increase in companies requiring in person
            interviews now. Seems there is a real possibility the internet as
            we know it will be destroyed.
       
              dominotw wrote 1 day ago:
               LinkedIn is completely destroyed now. There are tons of AI bots
               there, but real humans are now fronts for AI too, so you can't
               even trust content from people you know.
               
               An identity service is not useful, because that person might be
               a real person but just a pipe to AI, like we see on LinkedIn.
       
              rkomorn wrote 1 day ago:
              I think you might be right and I think I'll like some of the
              consequences and hate some of the others.
              
              More in-person stuff feels like a win to me (and I say this as
              someone who probably counts as introverted).
              
              Not being able to trust any online interactions anymore? Seems
              like a new height in what was already a negative.
       
                Gigachad wrote 22 hours 42 min ago:
                Agreed. I don't think there is any saving the internet as a
                social space long term. And I'm not entirely sad about that
                either. I think a return to in person interaction, public
                social spaces, and a retreat from social media would do the
                world a lot of good.
                
                Though there is a nightmarish possibility that people just
                accept this and willingly interact purely with bots, giving up
                all real relationships for AI ones.
       
            a2128 wrote 1 day ago:
            I don't know of a solution. I don't think even identity
            verification will meaningfully solve this. People will get hacked,
            or provide their SEO-spamming agent with their own identity, or
            purposefully post fake videos under their own identity. As it
            becomes more normal to scan your ID to access random websites, it
            will also become easier to steal people's identities and the value
            of identity verification will go down.
       
              intrasight wrote 1 day ago:
              People don't get hacked - devices get hacked. So all we need is a
              better chain of trust between two people. This is not a
              technology development problem as much as a technology
                 implementation problem. And a political problem.
       
                prox wrote 1 day ago:
                 The best thing I can think of is domain names. Domains are
                 tied to addresses and billing, and sites are people or
                 businesses, with physical locations one can visit.
                
                Maybe a good startup idea would be “local verify” , where
                you check locally for a client if the online destination is
                real.
       
                bigfishrunning wrote 1 day ago:
                People get hacked -- a device could be flawless, but if a
                person is a victim of "Social Engineering" and hands the
                attacker a password, there's nothing the designer of the device
                could do about it.
       
                  soco wrote 1 day ago:
                  2FA has tried to solve exactly this. Not many attacked people
                  will hand over their password AND their phone. Yes I know,
                  they might hand over one authentication code (and I know
                  people who did exactly that)... We should also look into
                  reducing the attack surface - if you get Instagram hacked you
                   shouldn't get your Facebook hacked as well. But the current
                   big tech centralization leads us to that single point of
                   failure, because they don't care about users' concerns, only
                   about market grab. So... what now? Do we bring politics into
                   this?
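                   For reference, the app-generated codes in question are
                   typically TOTP (RFC 6238): both sides share a secret and
                   derive a short-lived code from the clock. A minimal
                   stdlib sketch, checked against the RFC's published test
                   vector:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # 94287082
```

                   Because the code changes every 30 seconds, a phished code
                   is only briefly useful, though, as noted above, a victim
                   can still be talked into reading one out in time.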
       
                    slumberlust wrote 8 hours 55 min ago:
                    You're on the right path. As long as we continue to use
                    email as a fallback to every other form of authentication,
                    it will remain a single point of failure and a relatively
                    weak one at that.
                    
                    OP is still correct. No matter what, humans will remain the
                    weakest link...it's in our nature to sympathize and every
                    one of us has distracted/weak moments. It's just a matter
                     of time; look at the guy who runs haveibeenpwned...
                     getting pwned.
       
                    bigfishrunning wrote 1 day ago:
                    One authentication code is often all that's needed to
                    *change where the authentication codes are sent*
                    
                     Not to mention that most 2FA still uses SMS, which has its
                     own well-understood security flaws.
       
              nathanaldensr wrote 1 day ago:
              Agreed. The sphere of trust around each of us will shrink back to
              only those in our physical proximity. Outside of that, no one can
              be trusted.
       
        forkerenok wrote 1 day ago:
        > At first, my aunt wasn't buying that any AI was involved. [...] There
        was a long pause. "I was like 90% sure," she said, hesitating. "But
        that sounded more artificial."
        
        There is a thing about many people. I don't remember the phenomenon's
        name, if it has one, but it goes like this:
        
         Given enough time to reconsider options, people will endlessly
         flip-flop between them, grabbing onto various features over and over
         in a loop.
       
          mikkupikku wrote 1 day ago:
           I have a systematic way of approaching this kind of situation, where
           you have to rapidly estimate a thing, commit to the estimate, and
           are judged by the quality of your estimates in the long run. My
           approach is to first make a guess based on my gut, then pause and
           make a bet with myself: did I guess high or low? If my gut then says
           that my first instinct was too high or low, I adjust from there. I
           can't guess well the first time, but this two-stage guessing works a
           lot better for me.
          
          I'm sure I'm not the first to use this technique, but I don't know
          what it's called.
       
          V-2 wrote 1 day ago:
          This phenomenon (or a closely related one?) is recognized and known
           as Kotov Syndrome in the context of chess.
          
          A summary, courtesy of chess dot com:
          
          > The name of this "syndrome" comes from GM Alexander Kotov, author
          of the classic chess book Think Like a Grandmaster. In the book,
          Kotov described an incorrect yet very common calculation process that
          often leads players to select a suboptimal or bad move.
          
          > According to Kotov, in positions where the lines are complex and
          there are numerous candidate moves and variations to calculate, it's
          easy to make a hasty move. A player in that situation might spend too
          much time going over two moves and all of their ramifications without
          finding a favorable ending position. In that process, the player is
          likely to go back and forth between the two different lines, always
          coming to the same unsatisfying conclusion—this wastes precious
          mental energy and time.
          
          > After spending too much time evaluating the first two options, the
          player gives up the calculation due to time pressure or fatigue and
          plays a third move without calculating it. According to the author,
          that sort of move can cause tremendous blunders and cost the game.
       
            forkerenok wrote 1 day ago:
            Wow, this is an interesting one! Thanks for the reference.
       
          onion2k wrote 1 day ago:
           Given enough time to reconsider options, people will endlessly
           flip-flop between them, grabbing onto various features over and
           over in a loop.
          
          People will default to believing something is AI if there's no
          downside to that opinion. It's a defence mechanism. It stops them
          being 'caught out' or tricked into believing something that's not
          true.
          
          As soon as there's a potential loss (e.g. missing out on getting
          rich, not helping a loved one) people will switch off that cynical
          critical thinking and just fall for AI-driven scams.
          
          This is the downside of being a human being.
       
          sph wrote 1 day ago:
          Dissonance between what you instinctively believe and what you think
          the other person wants you to say.
          
          Easy to replicate by asking someone something obvious, like the
          weather, and when they reply ask “are you sure?” - they won’t
          be so sure any more (believing it’s a trick question)
          
          If I ask my mother if I’m real, she’ll have a pause because she
          has never had to entertain such a question, or the possibility her
          son over the phone is an impostor. Good way to push someone towards
          paranoia and psychosis.
       
            catlifeonmars wrote 1 day ago:
            > Good way to push someone towards paranoia and psychosis.
            
            Interestingly, these are both phenomena where we start to _lose_
            the ability to question our thoughts or introspect. These are
            phenomena of self-confidence rather than of self-doubt.
       
            Kye wrote 1 day ago:
            This is the basis of the virtual kidnapping scam/grandparent scam,
            or panic manipulation more generally. The manufactured urgency
            keeps them from doubting: the voice on the phone being off is just
            fear, or a bad connection, for example.
            
            I have personally intervened in one of those when I heard someone
            reading off a 6 digit number.
       
              pixl97 wrote 1 day ago:
               Exactly, to perform the scam it works best if you get people
               to switch to their animal brain. "The snake is going to bite
               right now so I have to do something!"
               
               That said, pig butchering scams have gotten popular too, so
               manufactured urgency isn't the only way.
       
          BoppreH wrote 1 day ago:
          Paradox of choice? It's more related to the number of choices and the
          impact on people's anxiety, but it's close.
       
          vasco wrote 1 day ago:
           There's also another phenomenon: whatever the latest idea is, it
           must be the best. Many people make this mistake and even convince
           themselves they're right now because "they used to think like
           that" before.
          
          So at each stage in the loop they are always super convinced of the
          position.
       
            CamperBob2 wrote 18 hours 18 min ago:
            A type-1 moron believes the first thing he/she heard, and cannot be
            easily dissuaded with later arguments or evidence.  Stereotypically
            speaking, many religious people fall into this category.
            
            Conversely, a type-2 moron favors the last thing he/she heard,
            readily allowing it to dislodge any prior beliefs, values or
            intentions no matter how well-founded.    Here in the US, our current
            president can be cited as an example of a type-2 moron.
            
            In reality, we all fall into one or both of these categories on
            occasion, so it's best not to indulge in excessive self-assurance.
       
            psychoslave wrote 1 day ago:
            Even not being 100% confident, at some point people have to decide
            what to do.
            
            Actions might include some continuous checks in them, like the
            famous plan, do, check, act.
            
             Solipsism already tells us that the existence of anything
             beyond one's present self-experience is uncertain. So almost
             everything has to be taken for granted; doing anything outside
             metaphysical argument requires an act of faith.
            
 (HTM)      [1]: https://en.wikipedia.org/wiki/Solipsism
       
          Quekid5 wrote 1 day ago:
          Analysis Paralysis?
       
        Tepix wrote 1 day ago:
        Here's a free business idea:
        
         Perhaps we need tamper-proof authenticated cameras in all major
         cities worldwide that publish a livestream 24/7; you can then stand
         in front of them to prove your human existence...
        
        This could be something that notaries around the world could offer as a
        service.
       
          DaanDL wrote 1 day ago:
          Today, we proudly announce, the Meta Rayban 365
       
          monster_truck wrote 1 day ago:
          How exactly would this make money
       
            mkl wrote 1 day ago:
            Instead of having it constantly running, you have to pay to turn it
            on for a couple of minutes.
       
              monster_truck wrote 22 hours 23 min ago:
              That... does not answer my question
       
                mkl wrote 21 hours 28 min ago:
                Users paying to use the authenticated camera service means it
                would make money.  That seems obvious, so I don't understand
                what the point of confusion is.
       
          tjpnz wrote 1 day ago:
          We used to have something similar in NZ. Got removed eventually
          because of flashing.
       
          nicbou wrote 1 day ago:
          I heard that in France, they'd use postal office workers to verify
          people's IDs. It's a brilliant alternative to whatever we're doing in
          Germany.
       
            nicbou wrote 9 hours 28 min ago:
            Correction: I meant postal delivery workers. You don't have to
            leave your house.
       
            jrjeksjd8d wrote 1 day ago:
            We couldn't possibly employ people to solve the problem. Don't you
            know the post office is a waste of money?
       
            mrlnstk wrote 1 day ago:
            Don't we have PostIdent in Germany? At least I used it to open my
            bank account.
       
            FinnKuhn wrote 1 day ago:
            What are we doing in Germany?
            
            The options I have seen so far were a) using our digital IDs, which
            is very handy or b) having a bank verify my identity in person with
            my ID, which is also pretty good.
       
              nicbou wrote 1 day ago:
              These options are not available to recent immigrants, people with
              foreign documents and people without a registered address. I
              spent a lot of time working around those limitations.
       
            Zinu wrote 1 day ago:
            Isn’t that just like Postident in Germany?
       
              nicbou wrote 1 day ago:
               Not at all. Postident required going to the post office in
               person with your ID, famously didn't accept many foreign IDs,
               and required an Anmeldung.
       
          exitb wrote 1 day ago:
           Or in general, a way to digitally sign a tamper-free video
           recording made with a camera from a reputable manufacturer. Maybe
           a regular iPhone already has enough integrity checks and security
           contexts to achieve this.
       
            intrasight wrote 1 day ago:
             I'm almost certain that an iPhone camera can do that, the
             reason being that Apple controls the full stack. It's necessary
             but not sufficient, since it's missing identity maintenance
             once media leaves the device. Apple would have to place a
             cryptographically signed digital watermark into a global
             blockchain so that the analog hole can be closed. All devices
             that present that media back to a human would need to verify
             the content's provenance chain back to the initial capture
             device.
            
             There's nothing missing technology-wise to achieve this, but at
             this point we lack the collective will and the regulatory
             regime. I do foresee a future where this is the norm and
             anything you listen to or watch can be traced back to the
             device that captured the data.
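             The provenance chain this comment describes can be sketched in
             a few lines. Everything below is illustrative, not any real
             Apple or C2PA API: DEVICE_KEYS, make_link and verify_chain are
             invented names, and HMAC with shared secrets stands in for the
             asymmetric, hardware-backed signatures a real scheme would use.

```python
import hashlib
import hmac

# Stand-in for a real PKI: maps a device identity to its signing key.
# A production scheme would use per-device asymmetric keys (e.g. Ed25519)
# anchored in secure hardware, not shared HMAC secrets.
DEVICE_KEYS = {"capture-device": b"key-1", "display-device": b"key-2"}

def make_link(device_id, media_hash, prev_hash):
    """Append one link: bind device id, media hash and the previous
    link's hash together, then sign the bundle."""
    payload = f"{device_id}|{media_hash}|{prev_hash}".encode()
    sig = hmac.new(DEVICE_KEYS[device_id], payload, hashlib.sha256).hexdigest()
    link_hash = hashlib.sha256(payload + sig.encode()).hexdigest()
    return {"device": device_id, "media": media_hash,
            "prev": prev_hash, "sig": sig, "hash": link_hash}

def verify_chain(chain, media_hash):
    """Walk the chain from capture onward, recomputing each signature
    and each link hash; any tampering breaks the chain."""
    prev = "genesis"
    for link in chain:
        payload = f"{link['device']}|{link['media']}|{link['prev']}".encode()
        expected = hmac.new(DEVICE_KEYS[link["device"]], payload,
                            hashlib.sha256).hexdigest()
        if (link["media"] != media_hash or link["prev"] != prev
                or not hmac.compare_digest(link["sig"], expected)):
            return False
        prev = hashlib.sha256(payload + link["sig"].encode()).hexdigest()
    return True

media = hashlib.sha256(b"raw sensor frame").hexdigest()
chain = [make_link("capture-device", media, "genesis")]
chain.append(make_link("display-device", media, chain[0]["hash"]))
print(verify_chain(chain, media))  # True
```

             Note that this only covers the digital path; the analog hole
             (pointing a camera at a screen) is exactly why the comment
             argues every presenting device would need to participate.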
       
          UqWBcuFx6NV4r wrote 1 day ago:
          The bus that couldn’t slow down.
       
            Dylan16807 wrote 19 hours 47 min ago:
            What
       
        XorNot wrote 1 day ago:
        At this point "spotting AI" is IMO an irrelevant skill. It's something
        to be aware of but a bunch of the time I can't tell even with an
        extended look on static images, or if I'm on a phone and scrolling then
        nothing really tweaks automatically - perceptually the flaws blend
        exactly as you'd expect them to.
        
        So it's all context clues really - i.e. if the video tracking shot is
        sort of within the constraints of the models, plays to obvious agendas
        etc. then I might tweak to go looking for artifacts...but in the
        propaganda game? That's already game over. And we're all vulnerable to
        the ground shifting beneath us - i.e. how much power would there be if
        you had a model which could just slightly exceed those "well known"
        limitations?
        
        IMO the failure to implement strong distributed cryptography much
        earlier in the digital age is going to punish us hard for this - i.e.
        we haven't built a societal convention of verifying and authenticating
        digital communications amongst each other, and technology has finally
        caught up that it can fool our wetware now. It was needed well before
        this - e.g. the rise of the telephone scam and VOIP should've been when
        we figured out how to make sure people were in the habit of
        comprehending digital signatures and authentication. It isn't though,
        and now something much more dangerous is out there.
       
          drzaiusx11 wrote 1 day ago:
          Recently one of my friends got email hijacked and whatever entity it
          was seemingly used her past sent emails as a training corpus to
          construct some very convincing pleas for donations involving a dog
          rescue she's been operating for several years.
          
          It also included personal details only her closest friends and family
          would know. I assume this is being done at scale now. These are NOT
          Nigerian prince scams of yesteryear; this is something entirely
          different.
       
       
 (DIR) <- back to front page