[HN Gopher] Nvidia's Implicit Warping is a potentially powerful ...
___________________________________________________________________
Nvidia's Implicit Warping is a potentially powerful deepfake
technique
Author : Hard_Space
Score : 186 points
Date : 2022-10-17 11:35 UTC (11 hours ago)
(HTM) web link (metaphysic.ai)
(TXT) w3m dump (metaphysic.ai)
| varelse wrote:
| nashashmi wrote:
| We should stop calling it deep fake. And start calling it facial
| masking. And realize that this will now be kiddie tech in the
| future.
| [deleted]
| adamsmith143 wrote:
| It's already kiddie tech. Just google deepfake maker and you
| get dozens of hits for free services to make passable fakes.
| coldcode wrote:
| It only works if the background is continuous and fairly static
| - you can't easily synthesize a complex background if you have
| no reference material. You can of course extrapolate the
| visible background using other AI that has knowledge of real
| surfaces; however, if the background has familiar content (like
| a sign) that is not visible in the reference frame(s), it's
| unlikely to fool anyone no matter how well the face is
| animated.
| porcc wrote:
| It must be trivial to use one of these approaches to generate
| a removable background and render the background with
| something else, like a static image, video, or 3D software.
| malnourish wrote:
| > it's unlikely to fool anyone no matter how well the face is
| animated
|
| Call me a cynic, but I believe humans, myself included, are
| easy to fool. If the face looks good enough, plenty of people
| will fail to spot the Invisible Gorilla.
| [deleted]
| dumpsterdiver wrote:
| > And realize that this will now be kiddie tech in the future.
|
| Oh, absolutely. And we're not talking decades here. This tech
| will likely be ready to rock within a few election cycles.
| Cthulhu_ wrote:
| I think with enough money / professionalism it's ready to
| rock already; a lot of the deepfakes I've seen so far have
| been done by "amateurs" (that is, individuals, not big
| companies). The last big-company work was the Star Wars
| prequels' CGI, which was still in the uncanny valley, IMO.
| toss1 wrote:
| The "Deepfake Puppetry" term used in the article seemed to me
| like an improvement - an excellent and accurate description.
| It uses deepfake technology, and the "puppetry" term both
| describes accurately what it is doing -- animating a target
| image like a puppet -- and gives the nontechnical audience a
| vivid picture.
|
| "Facial Masking" is not bad either, quite accurate, but I'm not
| sure it provides as broad a description as "Deepfake
| Puppetry".
|
| Either way, it is important to settle on a term that is both
| accurate and resonant with the audience, to ensure that the
| general public understands the serious potential for damage
| from this technology (which, like any powerful tool, can be
| used for good or evil).
| nashashmi wrote:
| Puppetry is a better term!
|
| It not only captures the face but also the body, and maybe
| the voice. Facial masking is more programmer friendly.
| wodenokoto wrote:
| Deep fake is a great term. Facial masking hides all the other
| uses of deep fakes. You can fake anything with these techniques
| based on deep learning.
|
| The term is just spot on.
| bsenftner wrote:
| Within VFX the terms are actor, object and background
| replacement. Rarely mentioned in any of the deep fake
| literature is the requirement for a non-facial skin tone
| correction for most deep fakes to begin to look natural.
| Everybody ignores the fact that no two people have the same
| skin tone, and that needs to be touched up for any face
| replacement to start to look correct.
| Melatonic wrote:
| A good VFX artist can also do a hell of a lot better job
| than any of these automated deepfake platforms, and has
| been able to for a long, long time now. So while deepfakes
| becoming more available to everyone worries me, the fact
| that we have not yet seen tons of well-done face-swap
| videos made by malicious actors paying a decent VFX artist
| makes me wonder how big this will really become.
|
| That being said, for smaller countries with less internet
| access (and less education) this has already become a big
| problem (lots of cases in elections in Africa), so I think
| once again it comes down to education: we must inoculate
| people against bullshit like this by giving them the tools
| to spot it and future things like it.
| nashashmi wrote:
| That's exactly why there should be a distinction made. The
| term is too broad for a specific subset of deep fake
| technology that has matured far beyond its fun monster phase.
|
| Facemask and soundmask are good terms for a democratized
| warping of visual and audio data.
| c22 wrote:
| Wouldn't that be a voicemask?
| SubiculumCode wrote:
| Deepfake tech needs to puncture the public's consciousness
| ASAP; diluting the term will be counterproductive.
| smoldesu wrote:
| Those terms seem a little broad to me. If the news reported
| that Barack Obama's likeness was used in a "facemask",
| nobody would know what happened. If they said it was used
| for a "deepfake", most people _still_ probably wouldn't
| know what happened, but at least they could look it up and
| get a general idea.
|
| Considering how facemasking and soundmasking have multiple
| contextual meanings, I think deepfake is the _perfect_
| word. If you need more precision, then you can use the
| phrase "visual deepfake" or "audio deepfake".
| dkonofalski wrote:
| I agree with this mostly in the sense that we need a
| specific word without any other context that defines what
| these are. I'm not sure that "deepfake" is that word but
| it definitely fits the bill based on what we're looking
| for. Deepfake has the advantage of being a word that's
| already in common usage that a good number of people
| recognize as "Oh, that's a fake done by a computer" so
| adding "audio" or "video" in front of it also fits the
| requirement of being easy to distinguish without being
| cluttered by other meanings or contexts. While we may be
| able to come up with a better word (in the same sense
| that a "tweet" is now unambiguous in context), I think it
| would be a misstep not to take the snowball that is
| "deepfake" and continue to use it in this context
| especially as a means to educate people on fake,
| computer-generated audio and video. People can barely
| keep up with tech buzzwords as it is so the fact that
| "deepfake" is already something in the public
| consciousness means _a lot_.
| moron4hire wrote:
| Some of these videos are particularly fascinating in how the head
| position is not exactly replicated but does look more like the
| body language I'd expect from those people. Like, try to note the
| sequence of facial expressions and head angle that Barack Obama
| goes through, then watch Angela Merkel and Winston Churchill.
| They're not 1-to-1.
| romanbaron wrote:
| Thanks for the post, I do suggest changing the font weight and
| color to make it more readable.
|
| Here's how it might look from the client side (computed style at
| Inspect->Style):
|
| .elementor-875 .elementor-element.elementor-element-230321ec {
|     text-align: left;
|     color: black;
|     font-family: roboto;
|     font-size: 21px;
|     font-weight: 400;
|     line-height: 1.6em;
| }
| mdrzn wrote:
| God what an awful way to hijack scrolling on such an interesting
| article.
| bob_paulson wrote:
| I'm there and it has been fixed apparently.
| nerdjon wrote:
| I was curious about this comment since I was not seeing this on
| Safari on Mac.
|
| I switched to chrome, and that is just horrible!
|
| I am curious why I am not seeing this on Safari (and
| apparently others aren't on Firefox). I don't see any errors
| on Safari that would imply some JavaScript is failing to
| load.
| spookie wrote:
| Devs were only using one browser to test their code.
| Nonetheless, if you are going to implement any mouse
| behavior hijacking, you should reconsider your life
| choices. Take a long walk in the park, have a cup of
| coffee, come back and... don't do it.
| schwartzworld wrote:
| Yeah, quit your job over it.
| nerdjon wrote:
| Oh, I totally get that, but I would expect that they would
| be doing their testing on Chrome. I mean, I get not liking
| Chrome, but ignoring it for testing is... a choice.
|
| I would have expected the worse behavior to be on Safari...
| not Chrome.
|
| Or am I just being naive about frontend development? I
| really only do it for personal projects, so I don't have
| any insight into it professionally.
| bee_rider wrote:
| I'm not sure what the issue is (I'm using Firefox and don't
| have Chrome installed to test with), but I'm pretty sure
| the scroll hijack people are complaining about is an
| intentional effect to make the site look "fancy," which
| they only tested in Chrome, or something like that.
|
| One side perk of not using Chrome is that there's a
| correlation between only testing on Chrome and producing
| code that we're better off not interacting with.
| nerdjon wrote:
| If this is the intended effect, it's a bad one.
|
| I just find it interesting because I look at the Apple
| product pages: while a bit janky, they work as intended.
| But they are also applying that effect very deliberately,
| and not to an article, for some reason. So maybe that is
| the distinction.
| bee_rider wrote:
| I get it on Apple as well. It must be a janky
| implementation or something.
| pluc wrote:
| https://addons.mozilla.org/en-US/firefox/addon/noscript/
| SAI_Peregrinus wrote:
| Or https://addons.mozilla.org/en-US/firefox/addon/luminous/
| for more fine-grained control (block individual JS events,
| not all JS from a given page).
|
| Or uBlockOrigin in "medium mode" or higher incorporates JS
| blocking on a per-site basis.
| TechBro8615 wrote:
| On iOS I can usually open user-hostile articles like this in
| reader view, but even that is broken here.
| MattPalmer1086 wrote:
| I just gave up reading it. Good job devs!
| solardev wrote:
| Yeah. Read a sentence, tried to scroll, immediately left. Nope.
| adamhp wrote:
| Came here to say the same. It is so insanely aggravating.
| bfgoodrich wrote:
| I have never seen anyone praise or laud the benefits of
| scroll-behavior overrides. They seem to be universally
| disliked if not hated, and they always distract from the
| content. Always.
|
| And it required extra work to yield this negative behavior!
|
| So how in the world does this end up happening? How could a
| team be so profoundly detached from reality? This site's
| behavior is so incredibly ill-considered that I marvel that
| people worked on that, people approved it, people said
| publish, etc, without one person stepping in and asking what
| in the world they were doing.
| kierenj wrote:
| I didn't see any! Did they remove it?
| mkl wrote:
| No, it's still there in Chrome. It doesn't seem to be there
| in Firefox though.
| milliams wrote:
| It's there for me in Firefox. With a mouse if I spin the
| wheel then it starts an animated scroll. However, I cannot
| send any more scroll inputs until that animated scroll has
| finished. So with two scroll wheel spins in quick
| succession, the second one is ignored. This only seems to
| be the case with two spins in the same direction; reversing
| the direction while the animation is happening works as
| expected.
| josefresco wrote:
| The scrolling is still "floaty" in Firefox - sort of like
| operating a boat (not fun for a website)
| sph wrote:
| No issues on my Firefox 105 on Linux + uBlock Origin
| jandrese wrote:
| Same. I think uBlock is the answer.
| mkl wrote:
| Hm, I wonder why it's not for me. Possibly my Firefox is
| old, or possibly it isn't using the GPU (current driver
| issue).
| [deleted]
| GaylordTuring wrote:
| Totally agree. I got dizzy within the first couple of seconds
| scrolling down the page.
|
| Why do people think it's a good idea to circumvent the
| scrolling behavior that the developers of the browser have
| probably spent hundreds of hours perfecting over the years?
| dr_zoidberg wrote:
| Funny, it's weird with both the scroll wheel and trackpad
| gestures... The scroll wheel felt like the scrolling would
| get stuck at certain points, while the gesture scrolled at
| a weird speed. The two didn't feel the same on my laptop,
| and that's even weirder than I expected.
| solardev wrote:
| They fixed it! It was an unexpected change to one of the
| plugins they were using (I wrote to let them know).
| dcdc123 wrote:
| Whatever it is my blocking plugins remove it.
| mkl wrote:
| Yes. It was so frustratingly broken I had to stop reading and
| do something about it. Running this in the (Chrome) browser
| console fixes it for me:
| window.removeEventListener('wheel',
|     getEventListeners(window)['wheel'][0].listener);
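Worth noting: `getEventListeners` is a Chrome DevTools console utility, not a standard API, which is why this fix only works when pasted into the DevTools console. It's needed at all because `removeEventListener` only detaches a handler when given the exact function object that was registered. A minimal sketch of that semantics, runnable in Node 15+ (which ships a global `EventTarget`); the `hijack` handler here is a hypothetical stand-in for the site's scroll-hijacking listener:

```javascript
// removeEventListener only detaches a listener when given the *same*
// function object that addEventListener received -- an identical-looking
// function is not enough. That is why the console fix above needs
// getEventListeners() to recover a reference to the anonymous handler.
const target = new EventTarget();

let calls = 0;
const hijack = () => { calls += 1; };

target.addEventListener('wheel', hijack);
target.dispatchEvent(new Event('wheel')); // handler runs: calls === 1

// Passing a different (but identical-looking) function removes nothing:
target.removeEventListener('wheel', () => { calls += 1; });
target.dispatchEvent(new Event('wheel')); // handler still runs: calls === 2

// Passing the original reference actually detaches it:
target.removeEventListener('wheel', hijack);
target.dispatchEvent(new Event('wheel')); // calls stays 2
```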
| MrScruff wrote:
| That fixed mouse wheel scrolling for me in Safari's JS
| console as well, thanks!
| mdrzn wrote:
| There should be a way to block all this wheel-hijacking via
| uBlock filters.
| pwdisswordfish9 wrote:
| metaphysic.ai##+js(aeld, wheel)
|
| Done.
| phpisthebest wrote:
| Even if there were today, Manifest V3 will take away any
| ability to do that.
| nix0n wrote:
| Not in Firefox!
| zackmorris wrote:
| Is it just me, or does every single AI innovation lately seem to
| be.. pointless?
|
| Like, if I had the time and resources to work in AI, stuff like
| substituting faces and even stable diffusion would be about the
| last things I would ever work on.
|
| What would I start with? Something more like the MS Office
| software of the 1980s, only automated. I would have real-world
| applications that, wait for it, _perform work so I don't
| have to_. AKA automation.
|
| TBH this stuff exhausts me to such a degree that I almost can't
| even follow it anymore. It's like living in a bizarro reality
| where nothing works anymore. A waking nightmare. A hellscape. Am
| I the only one who feels this way?
| keeran wrote:
| https://www.youtube.com/watch?v=7Cao0oy1CBg / https://lex.page/
| :)
| djur wrote:
| Exactly the opposite here. The recent progress in
| text/image/sound generation is the first thing that's actually
| made me interested in ML/AI. If I could restructure market
| priorities so all of the data scientists working on ad tech and
| recommendation engines and virtual assistants were working on
| this stuff instead I'd do it in an instant.
|
| I can also say from experience that extreme negative feelings
| like "other people are doing things I don't find interesting
| and it makes me exhausted and miserable" were, for me, a sign
| of clinical depression.
| im3w1l wrote:
| They aren't doing it because it's useful. They are doing it
| because it's easy and they take what they can get. Further,
| it's not farfetched to imagine that models that can
| understand and predict how objects and faces move are a
| stepping stone to more useful stuff.
| krashidov wrote:
| Well, we DO have GitHub Copilot, which I have yet to use.
| avian wrote:
| You're not the only one that feels that way. I wish I could add
| something constructive beyond this.
| capableweb wrote:
| > What would I start with? Something more like the MS Office
| software of the 1980s, only automated. I would have real-world
| applications that, wait for it, perform work so I don't have
| to. AKA automation.
|
| But that's what Stable Diffusion et al. are: automation. It's
| just not automation of what you spend your time on, but it
| is automation of what countless people spend their time on:
| producing stock photos/drawings/clip art for an endless
| stream of articles and other generic videos.
|
| I also think the current "creative automation" cuts against
| a long-held assumption: many people have said for a long
| time that computers will of course be able to do boring,
| repeatable jobs like counting numbers and whatnot, but that
| they will never be able to do the job of an "artist". Now it
| seems that at least a subset of "artists" will be out of a
| job unless they find a way to work with the new tools rather
| than against them.
| agumonkey wrote:
| But I get what zack is pointing at: it's automating the
| wrong stuff. It's like still having to work in a coal mine,
| but thank god you don't have to take care of the kids at
| night because someone invented the autosnatcher.
|
| A lot of the world is grinding in pain due to extremely bad
| software, and the money and brain power keep pouring in
| everywhere but there. Well, not entirely... a lot of money
| was thrown at these bad applications, but it evaporated due
| to software services companies' subpar engineering.
| kjkjadksj wrote:
| I think it depends on whether you are a designer or an
| artist. In my mind a designer works to a specification for
| a client and produces it; designers are interchangeable,
| and they don't put their signatures on the work because
| it's not theirs, it's the client's.
|
| Artists, however, won't have to worry about AI. AI music
| already exists, but people still mainly support real artists
| on streaming platforms and go to concerts, because
| provenance matters for artists and it doesn't for designers.
| Could an AI make a Warhol? Probably not, because what made
| Warhol's art popular was that he essentially worked outside
| of the training set and provided something previously
| unseen. Machine learning is bound to the training set. You
| can make generic corporate bathroom art for hotels or fill
| empty picture frames with it, but there will still be real
| artists and galleries and concerts and museums, because
| oftentimes people value the provenance of the artist much
| more than the work itself.
| jefftk wrote:
| _> AI music already exists but people still mainly support
| real artists on streaming platforms and go to concerts,
| because provenance matters for artists_
|
| I don't think that's what's going on. The top pop singers
| are generally already singing things written by other
| people against accompaniment written by other people. I
| think by far the biggest reason few people are listening to
| AI music is it's just not as good as human music yet?
| Firmwarrior wrote:
| That's a good point
|
| Just like how "Calculator" used to be a job title for
| humans who manually performed calculations.
| dotnet00 wrote:
| These innovations aren't really pointless though? They may not
| be relevant to your interests, but they have practical
| applications and Stable Diffusion especially is already seeing
| a lot of interest from artists and people who need art but
| don't need something 'custom' enough to pay for a human. In
| both cases they are saving lots of 'basic' work that might have
| either been done by a human before or not done at all.
|
| Plus, these are the things we hear about because they look
| flashy. There is plenty of work behind the scenes on applying
| these innovations to more 'practical' matters like automation.
| giobox wrote:
| "640K ought to be enough for anybody." - etc etc etc.
|
| It's becoming increasingly clear that many of these
| techniques, derided for the past 10 years as "pointless" or
| novelties, now have real applications.
|
| For one example, the automotive industry - ignore the hype on
| autonomy, computer vision is already delivering real benefits
| for active safety systems. I use GitHub Copilot every day -
| it's not perfect, but good enough to add value to my
| workflow.
| Apple's automated tagging of my photo library via computer
| vision allows me to discover hundreds of images of my life and
| family I'd forgotten all about. Stable diffusion can clearly
| replace an artist in some cases, ignoring the moral/ethical
| issues.
|
| I'm extremely excited for future of all this, frankly. The
| first step into such a new paradigm is always hard - people
| made the exact same "home computers are pointless" arguments in
| the late 70s/early 80s. I don't think anyone agrees with that
| anymore...
| ripe wrote:
| > I use GitHub Copilot every day - it's not perfect, but
| good enough to add value to my workflow.
|
| Could you please share an example? I saw a description of
| copilot but couldn't imagine what it might be useful for.
| wyldfire wrote:
| > I use GitHub Copilot every day - it's not perfect, but
| good enough to add value to my workflow.
|
| Wow I thought this copilot thing was just like a joke. People
| use it to write software? I'm really curious, now. Can you
| share examples of the code you're writing with it?
| jayd16 wrote:
| You just have a very slim vision of what work people do. The
| tools listed could be added to photo, special effects and film
| editing tools and provide value.
| nkingsy wrote:
| Vision and language are two of the pillars of human
| intelligence.
|
| Perfecting these two things opens whole universes of potential.
|
| Ask a robot to "do the dishes". It has to know what that means,
| find the dishes, find the sink, the on/off mechanism for the
| water. These are all language and vision tasks.
|
| The balance, navigation, picking/placing etc seems like a minor
| subroutine fed by vision metadata.
| russdill wrote:
| I like that the term chosen by AI researchers for many of
| these applications is "transformer". That way, I can look
| forward to a future where transformers do my dishes.
| dane-pgp wrote:
| "Autobots, wash up!"
| Arrath wrote:
| "What is my purpose?" "You wash the dishes." "Oh. My
| god."
| Melatonic wrote:
| We see these negative articles because they get clicks and
| attention, but there are actually a ton of legitimate uses
| for this. In the VFX industry, for example, doing a face
| swap (sometimes called a replacement) is fairly standard
| practice. Car crash scene where a union stunt performer is
| being paid to stand in for an actor? Instead of finding one
| who looks as similar as possible to the actual performer,
| any competent compositor can just replace the stunt
| performer's head with the main performer's. Giving said
| compositor more tools to do this faster and more easily is
| great - maybe now they can just replace a few individual
| frames, let the AI do the rest, and then tweak the end
| result as needed.
| [deleted]
| verisimi wrote:
| > Is it just me, or does every single AI innovation lately seem
| to be.. pointless?
|
| ... but, I think you have really missed the point! Maybe you
| think government is here to help rather than to govern minds
| too! And that what is shown in the news is a good faith attempt
| to relay reality!
|
| If you are managing the world, companies, etc - perception is
| everything! If you are able to control what people perceive,
| and they receive everything via a screen, well, who cares about
| truth? The imagery, the ideas - that needs to convince... and
| that is pretty much all that you need to manage the masses.
| KaoruAoiShiho wrote:
| ? Nvidia is a graphics company so they are automating work so
| that they don't have to, that being making graphics.
| yarg wrote:
| I think it's a move in the right direction - but I've never
| been a fan of convolutional networks.
|
| There's significant potential value in networks that can
| regenerate their input after processing.
|
| It can be used to detect confusion: if a network compresses
| and then decompresses a piece of input, any significant
| difference between the input and the reconstruction tells
| you there's something the network does not understand.
|
| This sort of validation might be useful, if you don't want your
| self driving car to confidently attempt to pass under a trailer
| that it hasn't noticed - which tends to kill the driver.
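The compress-and-compare idea above can be sketched without any neural network at all: substitute a tiny fixed codebook for the autoencoder's bottleneck and treat a large reconstruction error as "the model doesn't understand this input". This is a hypothetical toy, not any production method; a real system would learn the compression with a trained network:

```javascript
// Toy reconstruction-error check: "compress" an input to its nearest
// entry in a small codebook, "decompress" by returning that entry, and
// flag inputs whose reconstruction error is large as out-of-distribution.
// The codebook stands in for the bottleneck of a trained autoencoder.
const codebook = [
  [0, 0], [1, 1], [2, 2],   // patterns the "network" has seen before
];

const dist = (a, b) => Math.hypot(a[0] - b[0], a[1] - b[1]);

function reconstruct(x) {
  // nearest-codebook-entry "compression" then "decompression"
  return codebook.reduce((best, c) => (dist(x, c) < dist(x, best) ? c : best));
}

function isConfused(x, threshold = 0.5) {
  // large reconstruction error => the model doesn't understand x
  return dist(x, reconstruct(x)) > threshold;
}

console.log(isConfused([1.1, 0.9])); // near a known pattern -> false
console.log(isConfused([9, -3]));    // far from everything  -> true
```

The same test applied to a self-driving perception stack would flag the unrecognized trailer: its reconstruction would differ sharply from the raw input, signaling "don't act confidently here".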
| xchip wrote:
| Every new technology is potentially a powerful, dangerous
| technology; so it was with fire, and so it has been with
| every new discovery.
|
| It saddens me to see this sort of post here.
| nonrandomstring wrote:
| It _heartens_ me to see this sort of post here!
|
| Because it means that finally, after decades of us being
| blinded by the wow-factor of technology, hackers and engineers
| - we who are responsible for creating the next wave of
| technology - are starting to ask grown-up questions about
| whether some things are such good ideas.
|
| That healthy scepticism doesn't have to mean pessimism, or
| Luddite rejection of "progress". That's enormous progress of a
| different kind - a shift from purely technical progress to a
| better balance of spiritual and social progress.
| neilv wrote:
| We had that before -- e.g., your nerds had grown up on s.f.
| allegories about society, as well as general education -- but
| it was obliterated by the frenzied influx during the dotcom
| gold rush.
|
| Since then, "the victors write the history books", including
| formative influences on children's thinking.
|
| It's encouraging if we manage to collectively scrape our way
| back from that, and have genuine concerns -- despite the
| noise of fashionable posturing, "influencer" conflicts of
| interest, etc.
| nonrandomstring wrote:
| Good points about the more rounded origins of hacker
| culture. I had read most of Arthur C Clarke by 9 years old
| and already had complex misgivings and ambivalence about
| "technology".
|
| > obliterated by the frenzied influx during the dotcom gold
| rush.
|
| Something changed in mid 1990s, something that was more
| than just the commercialisation and Eternal September.
| Perhaps it was millennial angst, but a certain darkness and
| nihilism crept in as if William Gibson stopped trying to
| imagine future dystopias because the future had "caught up"
| and we were living in it. I think that's when things
| actually stopped progressing. After that, everything that
| had been written as a warning became a blueprint.
|
| > Since then, "the victors write the history books",
| including formative influences on children's thinking.
|
| 80s and 90s media moguls only owned the channels, and there
| was cursory regulation. When you own all the platforms, and
| devices that people use, and shape the content they see
| from school-age onwards it is no longer "media" but total
| mind control.
|
| > It's encouraging if we manage to collectively scrape our
| way back from that
|
| It may not be "back", but it may be somewhere better than
| here. Each generation finds its own voice. I have some
| optimism in the kids since 2000 - it's like they're born
| knowing "The cake is a lie". They're just not quite sure
| yet what to do about it.
| bismuthcrystal wrote:
| But it doesn't read "dangerous" anywhere?
| zachthewf wrote:
| If it makes you feel better, this post is published on the blog
| of Metaphysic, a company that makes hyperrealistic avatars (aka
| deepfakes).
| BiteCode_dev wrote:
| Except an individual cannot affect the entire world from the
| comfort of home with fire.
| ekianjo wrote:
| Nor can a deepfaked video, once everyone understands we
| can't trust anything we see anymore.
| [deleted]
| VectorLock wrote:
| "can't trust anything anymore" is arguably a worse place to
| end up.
| dementiapatent wrote:
| Are you familiar with forest fires?
| BiteCode_dev wrote:
| You start forest fires comfortably from your living room?
| What a wizard!
| sxg wrote:
| I generally agree with your point, but the difference is that
| the dangers of most new technology can be reasonably contained.
| Deep fakes will undermine people's trust in anything they see
| or hear, which is a severe negative. It's unclear how to
| minimize this kind of harm.
| phpisthebest wrote:
| You see that as a negative; I think people need to be less
| trustful of things they see on the internet...
|
| I grew up with the prevailing idea of "never trust what you
| see online", yet today many have deep and misplaced trust in
| online media, influencers, and personalities.
|
| I think a little more distrust is needed
| nonrandomstring wrote:
| > Deep fakes will undermine people's trust in anything they
| see or hear
|
| Not quite. Deep fakes will undermine people's trust in
| anything _digital_ they see or hear. It bodes ill for
| technology, not for people per se.
|
| In other words, those who should be most terrified by the
| implications of deep-fake technologies and AI are those most
| heavily invested in digital technology. I sense that in some
| ironic way, suddenly the "boot is on the other foot".
| dotnet00 wrote:
| The fact that people think they should have any inherent
| trust in anything they see or hear (especially online) in
| the first place is a severe negative. Misinformation has
| always existed and always will; the solution is education,
| not knee-capping development. This same attitude, applied
| to the internet at its start, would have made it a far less
| open place than it has been.
| fleddr wrote:
| "The fact that people think they should have any inherent
| trust in anything they see or hear (especially online) in
| the first place is a severe negative."
|
| This is inherent to our species. The idea that you should
| distrust everything you see or hear is an alien concept and
| simply not pragmatic.
|
| "Misinformation has always existed and always will exist"
|
| False equivalence. The online situation is brand new. Any
| citizen able to spread massive amounts of fake news to lots
| of people on the cheap is a brand new capability.
|
| "the solution is education"
|
| No, it isn't. Every study shows that highly educated people
| are also gullible enough to be manipulated with fake news.
| Which makes total sense, as absolutely nobody has the time
| to fact-check and do a deep background check on the 5
| zillion pieces of information they see on a given day.
| memling wrote:
| > It saddens me to see this sort of post here.
|
| We should actually be much more proactive about technology: it
| is not an unalloyed good, and an unwillingness to consider its
| downsides leads to naive designs. Even leaving bad actors aside
| (which I don't recommend doing), what works well in a group of
| 10,000 users may not scale to one billion users. Where one
| scale has no effect on social cohesion, another scale may have
| a tremendously deleterious effect.
|
| We've been doing these experiments in search and social for
| years now. Taking lessons from that and applying it to the next
| great wave of AI innovations seems like a Good Idea to me.
| jnwatson wrote:
| It is a fool's errand to try to evaluate the net societal
| impact of a particular technology, even with perfect
| predictive power.
|
| Looking back, what technology would you have retroactively
| stopped? Tetraethyl lead? Perhaps. Nuclear? Cable TV? ANNs?
| The Internet? Drones?
|
| Also, the financial incentives aren't aligned. Certainly it
| makes sense to hold back technology to avoid embarrassment if
| you're a trillion dollar company; you have more to lose than
| gain. However, if you're a scrappy startup, it makes way more
| sense to roll the dice.
| memling wrote:
| > It is a fool's errand to try to evaluate the net societal
| impact of a particular technology, even with perfect
| predictive power.
|
| This really isn't true. We regularly do cost-benefit
| analyses for business; we don't skip them because they're
| not perfect predictors. We do market analysis and all kinds
| of customer deep dives to perfect UI/UX and customer
| response. We've created targeted dopamine-delivery services
| that are continually refined to maximize impact.
|
| All of this implies a certain kind of ability to evaluate.
| Looking at tradeoffs and potential uses of technology is
| well within our abilities, and we should do it. We should
| be more skeptical of human nature, look harder at the
| extremes and edge cases, and work towards mitigating the
| risks. Will it be perfect? No. Will it be helpful? Yes.
| goatlover wrote:
| We have bans on chemical and biological weapons, nuclear
| weapons are carefully monitored and there are considerable
| efforts to prevent proliferation. In hindsight, I think
| people today would have prevented the nuclear arms race, as
| it puts civilization at risk, and there is no end date for
| that.
| wussboy wrote:
| Bang on. To add fuel to the fire (heh), the fire mentioned by
| the parent wasn't first perceived as deadly and then suddenly
| safe. It was and is deadly unless the precautions our society
| has spent thousands of years developing are carefully
| followed. Exactly what we must do with technology. Except we
| don't have thousands of years.
| JohnJamesRambo wrote:
| Fire cooked our food, what are the benefits of deep fakes?
| visarga wrote:
| Low-bitrate avatars, Zooming in your PJs, virtual cosplay,
| low-budget SFX for video production, video anonymity.
| trention wrote:
| All of this either doesn't matter or barely matters.
| nashashmi wrote:
| Low budget movie acting.
| thedorkknight wrote:
| After reading the article, you're overstating the "dangerous"
| part here. It's far from a fear-mongering blog. They mention
| its potential for misuse - no need to be saddened by that
| lm28469 wrote:
| > Every new tech is a potential powerful dangerous tech, so it
| was fire and so it has been any new discovery.
|
| Yet the toothpick is a little bit less dangerous than the
| atomic bomb.
|
| We shouldn't blindly accept every new "tech" as "progress", the
| fact that we can build it doesn't mean it'll be beneficial for
| us
| Der_Einzige wrote:
| Dental issues have killed more than atomic bombs have...
| thedorkknight wrote:
| Everyone has teeth. Not everyone has had direct exposure to
| nuclear explosions.
| lynndotpy wrote:
| We tell children to beware fire, we regulate it in many public
| areas, we put fire extinguishers all throughout buildings, and
| even dedicate entire buildings for fire-fighting teams.
|
| It's very valuable to discuss the potential dangers of new
| technologies and how we might mitigate them.
| nh23423fefe wrote:
| Seems like we did that because fires happened in cities and
| did real damage. What damage has AI done?
|
| > AI might put false ideas in people's minds
|
| Like speech or writing or media?
|
| > no no not the ideas, it's the medium, the delivery method.
| you see the fakes will trick people into thinking the lies
| are real.
|
| oh that's it? just deception + scale.
| [deleted]
| propercoil wrote:
| The UI on this website is horrid.
| [deleted]
| sys32768 wrote:
| Not sure I want to live in a world where billions of people still
| believe what they see when these deepfake technologies mature.
|
| Not sure I will care either, because by that time I will have
| been locked in my living room for weeks playing my customized VR
| role playing game where Han Solo and I wreak havoc in the Far Cry
| 5 universe with my Dovakin powers while being chased by an
| all-female bounty hunter crew commanded by a 1982 Phoebe
| Cates.
| nomel wrote:
| This is why I think it's irresponsible to _not_ give the public
| access to these early models. I think a slow, increasing
| exposure to the absurd/false will work out much better than
| the isolation desired by the orgs who are trying to "protect"
| us, because that isolation will end up being temporary,
| resulting in a step rather than a ramp.
| _0ffh wrote:
| Right, the earlier the general public is confronted with
| these technologies, the better!
| usednet wrote:
| There needs to be far more criticism of the increasingly
| closed development of these models in what is essentially
| becoming an AI arms race. Like Sam Altman and his team at
| OpenAI (more like ClosedAI) who were more aware of this than
| anybody yet sold themselves out to the highest bidder. What
| a joke of a "non-profit."
| BoorishBears wrote:
| It's so bizarre to me how much commentary I see to this
| effect; do people forget the times we live in?
|
| You can fool more people with a fake article and $1000 in
| Twitter click farm spend than you could ever dream to with a
| deepfake.
|
| We've already seen entire elections undermined with basic
| internet trolling; the problem is already here, but somehow
| people are overlooking that and fixating on the most
| fanciful version of it.
| humanistbot wrote:
| Remember that video that showed Nancy Pelosi supposedly
| 'drunk', when it was just slowed down and pitch-shifted? It
| was so easy to tell it was faked, but it still spread like
| wildfire.
| parker_mountain wrote:
| > You can fool more people with a fake article and $1000 in
| Twitter click farm spend than you could ever dream to with a
| deepfake.
|
| Strong disagree. Viral content is way more effective.
|
| And also, imagine what you can do with $1000 in ad spend and
| a deepfake.
| BoorishBears wrote:
| You're not disagreeing. I didn't think I needed to connect
| the dots that you're paying the click farm to go viral.
|
| > And also, imagine what you can do with $1000 in ad spend
| and a deepfake.
|
| Way less. A deepfake _detracts_ from your mission. If you
| deepfake Joe Biden saying "I'm going to destroy this
| country as instructed by my masters" and you actually gain
| traction, suddenly you've thrust your subversion into the
| spotlight and start reaching people adversarial to your
| goal.
|
| News sources are going to start digging to find where the
| clip came from, the White House is going to respond, the
| video is going to start getting analyzed to death.
|
| -
|
| If instead you register a dime-a-dozen domain like
| "america4evernews.com" and write a crackpot article about
| how Joe Biden is actually working for our enemies and you
| have it on good authority that he's being controlled by
| puppet masters who want him to destroy America, you'll find
| an army of people who _want_ to believe.
|
| They don't need sources, they'll avoid sharing it with the
| "sheeple who believe MSM" and stick to their echo chambers.
| It's a strictly better outcome.
|
| People don't seem to understand that modern misinformation
| is not about fooling everyone; it's about fooling the right
| people. Your goal isn't to reach every set of eyeballs,
| it's to reach the eyeballs that are easily fooled, so that
| they come into conflict with those who are not, thus
| entrenching both sides against each other.
|
| -
|
| In some ways it's like a virus: if it's too strong it draws
| attention to itself early and can't spread easily. If
| instead it causes subtle symptoms that compound, it can
| spread very widely before it's even noticed, and use that
| infected base to expand even further.
| BizarroLand wrote:
| Plus, for some reason humans as a group tend to prefer the
| fake perfect over the real good. Autotune perfects bad
| playing and singing, we use audience brain responses to tune
| the editing of movies, CGI wins over practical effects,
| clickbait over quality titles, and zinging one-liners matter
| more than the novels they describe.
|
| On the one hand, you can't fault the masses for buying the
| goods they're sold, and on the other hand you can't fault the
| sellers for maximizing the apparent quality of their
| products, but somewhere in between all of that broad-
| mindedness and beauty there has to be something at fault for
| creating a world where actual human beings are not good
| enough to participate as equals and the playing field is pay-
| to-win on a scale that boggles the imagination.
| authpor wrote:
| I first encountered that idea as "supernormality"
|
| https://en.wikipedia.org/wiki/Supernormal_stimulus
| callalex wrote:
| My thought process: if spreading disinformation without
| evidence is so successful right now, imagine how much worse
| things could be in the future when people are spreading
| disinformation with "evidence".
| BoorishBears wrote:
| I have a comment below about it: evidence is actually
| contrary to the goal.
|
| If you bring evidence you're introducing a place for a
| counter attack to apply leverage.
|
| If instead you make completely baseless claims on obviously
| false pretenses, you've actually made things more difficult
| to counter because only trivial counterproofs exist, which
| have to be dismissed to believe the false claim in the
| first place.
|
| -
|
| Take COVID vaccine deaths for example. Imagine I baselessly
| say that the medical field is lying and 50% of COVID deaths
| are actually vaccine complications.
|
| For someone to believe that, they must completely distrust
| any official numbers on COVID deaths... so once they've
| fallen for the lie, how do you convince them otherwise? The
| only counterproofs are the trivial-to-find sources of data
| that they _already_ had to dismiss to believe me in the
| first place. Suddenly I've implanted a self-reinforcing lie
| that entrenches its believers against anyone who isn't in
| their echo chamber.
|
| The root of all this is straightforward enough: _there is
| nothing stronger than requiring someone to disbelieve their
| own beliefs to counter your disinformation_. If you add a
| deepfake, you've added something outside of their belief
| system to attack, so you're weakening the attempt. People
| simply do not like to be wrong about things they think
| they've figured out.
| LawTalkingGuy wrote:
| If you used a Biden deepfake you'd more likely want it to
| be of him tripping on something than admitting allegiance
| to the lizards.
|
| > Imagine I [...] For someone to believe that, they must
| completely distrust any [data]
|
| Do you think this is like a 419 scam, where saying
| something a bit outrageous sorts out the gullible and
| bypasses the wary, or do you think that your claim can
| somehow hijack a credulous person long enough that they
| make that mental recategorization of the data sources and
| are stuck?
| BoorishBears wrote:
| The former gives you a springboard for the latter.
|
| The people falling for the obvious nonsense are self-
| filtering, just like people falling for obvious 419 scams.
|
| But as you grow the base that believes in your
| disinformation, you gain real people who are fully
| convinced of these things, and the effect of that is a
| force multiplier.
|
| People talk, and if people's self-held beliefs are the
| strongest reinforcement, the second strongest is those we
| surround ourselves with. If someone falls for this stuff
| and starts talking to their spouse, now someone close to
| them is pushing this agenda. People can start nudging
| their friends to be more skeptical.
|
| It's not going to be a 1:1 conversion: a lot of people
| close to them will push back, but remember, this is all
| based on absolutely no proof, so it can twist itself to
| fit any box. People can moderate the story to avoid
| pushback: "Oh you know I'm not an anti-vaxxer... but I
| did hear that the vaccine has a lot of complications", and
| maybe they connect that to a real article about a
| myocarditis case, and now maybe they're not pushing my
| original lie of "50% of deaths", but I've planted a
| suggestion in a rather moderate person using a chain of
| gullible people.
|
| And something especially effective about this: while the
| most brazen aspects of disinformation hit less intelligent
| people hardest
| (https://news.ku.edu/2020/04/28/study-shows-vulnerable-
| popula...), once you start to make inroads with
| increasingly better-educated groups via the network
| effect, they tend not to want to believe they're wrong.
| Highly intelligent people
| can be _more_ susceptible to some aspects of
| disinformation in this way:
| https://www.theguardian.com/books/2019/apr/01/why-smart-
| peop...
|
| That lends itself to increasingly authoritative figures
| becoming deeply entrenched in those campaigns, leading to
| things like... https://wapp.capitol.tn.gov/apps/BillInfo/
| Default.aspx?BillN...
|
| -
|
| As I've said before, everyone is dreaming of AI
| dystopias rooted in things like deepfakes putting us in a
| post-truth era, or AI gaining sentience and deciding it
| doesn't need humans...
|
| The reality is so much more boring, yet already in
| progress. We're starting to embed blackbox ML models
| trained on biased or flawed data into the root of
| society.
|
| ML already dictates what a large number of people are
| exposed to via social media. ML is starting to work its
| way into crime fighting. We gate access to services
| behind ML models that are allowed to just deny us access.
| How long before ML is allowed to start messing with
| credit ratings?
|
| And yet getting models to "explain" their reasoning is a
| field of study that lags far behind all of these
| deployments. You can remove race from a dataset and ML
| will still gladly codify race into its decisions via
| proxies like zip codes; after all, it has no concept of
| morality or equality: it's just a giant shredder for
| data.
|
| Right now a glorified bag of linear regressions is posing
| much more of an effective danger than T1000s ever will.
| But since that's not as captivating, we instead see a ton
| of gnashing of teeth about the ethics of general
| intelligence, or how we need to regulate the ability to
| make fake videos, rather than boring things like "let's
| restrict ML from as many institutional frameworks as
| possible"
| janalsncm wrote:
| > But since that's not as captivating, we instead see a
| ton of gnashing of teeth about the ethics of general
| intelligence, or how we need to regulate the ability to
| make fake videos, rather than boring things like "let's
| restrict ML from as many institutional frameworks as
| possible"
|
| It's not only not captivating, it's downright
| inconvenient. If I'm at a TED talk I don't want to hear
| about how ML models (some of which my company has
| deployed) are causing real-world harms __right now__
| through automation and black-box discrimination. If you
| read Nick Bostrom's Superintelligence, it spends laughably
| little time pondering the fact that AI will likely lead
| to a world of serfs and trillionaires.
|
| No, people want to hear about how we might get
| Terminator/Skynet in 30 years if we're not careful. Note
| that these problems are already complicated by ill-
| defined concepts like sentience, consciousness and
| intelligence, the definitions of which suck all of the
| oxygen out of the room before practical real-world harms
| can be discussed.
| authpor wrote:
| However, the story of withholding technology, education,
| etc., for profit (of some kind) is much older than this
| particular latest development.
|
| If you ask me, this 'tradition' is the essence of
| imperialism (or just one among other techniques necessary
| to have an empire).
___________________________________________________________________
(page generated 2022-10-17 23:01 UTC)