[HN Gopher] YouTube now requires creators to label their realistic-looking videos made using AI
       ___________________________________________________________________
        
        YouTube now requires creators to label their realistic-looking
        videos made using AI
        
       Author : marban
       Score  : 421 points
       Date   : 2024-03-18 16:19 UTC (6 hours ago)
        
 (HTM) web link (blog.google)
 (TXT) w3m dump (blog.google)
        
       | sigmoid10 wrote:
       | >Some examples of content that require disclosure include: [...]
       | Generating realistic scenes: Showing a realistic depiction of
       | fictional major events, like a tornado moving toward a real town.
       | 
       | This sounds like every thumbnail on youtube these days. It's good
       | that this is not limited to AI, but it also means this will be a
       | nightmare to police.
        
         | nosvince wrote:
          | Exactly, and many have done exactly the same kind of video
          | using VFX. What's the difference? This kind of reaction
          | reminds me of the stories of the backlash following the
          | introduction of calculators in schools...
        
           | dylan604 wrote:
            | I'm sorry, but using a calculator to get around having to
            | learn arithmetic is not even close to being the same
            | thing. Prove to me that you can do basic arithmetic, and
            | then we can move on to using calculators for the more
            | complex stuff, where, if you had to, you could at least
            | come to the same value as the calculator.
           | 
            | People using VFX aren't trying to create images in the
            | likeness of another existing person to get people to buy
            | crypto or other scams. Comparing the two is disingenuous
            | at best.
        
           | GolDDranks wrote:
           | > What's the difference?
           | 
            | The ease and lack of skill required. That brings a whole
            | other set of implications.
        
           | DylanDmitri wrote:
            | Using VFX for realistic scenes is more involved. VFX
            | requires more expertise to do convincingly and
            | realistically, on the order of thousands of hours of
            | experience. More involved scenes require multiple
            | professionals. The tooling and assets cost more. An
            | inexperienced person, in a hundred hours of effort, can
            | put out 10ish realistic scenes with leading-edge AI
            | tools, where previously they could do 0.
           | 
           | This is like regulating handguns differently from compound
           | bows. Both are lethal weapons, but the bow requires hours of
           | training to use effectively, and is more difficult to carry
           | discreetly. The combination of ease, convenience, and
           | accessibility necessitates new regulation.
           | 
           | This being said, AI for video is an incredibly promising
           | technology, and I look forward to watching the TV shows and
           | movies generated with AI-powered tooling.
        
             | nomel wrote:
             | > Using VFX for realistic scenes is more involved.
             | 
              | This really depends on what you're doing. There are
              | some great Cinema 4D plugins out there. As the plethora
              | of YouTube tutorials out there clearly demonstrates,
              | multiple professionals and vast experience are _not_
              | required for some of the things they have listed.
              | Tooling and asset costs are 0, on the high seas.
              | 
              | Until Sora is widely available, or the open source
              | models catch up, _at this moment_ it's easier to use
              | something like Cinema 4D than AI.
        
             | mazlix wrote:
              | What if I use an LLM-powered AI to operate VFX software
              | to generate a realistic looking scene? ;)
        
             | alickz wrote:
             | What if new AI tools negate the thousands of hours
             | experience to generate realistic VFX scenes, so now
             | realistic scenes can be made by both non-AI VFX experts and
             | AI-assisted VFX laymen?
             | 
             | Do we make all usages of VFX now require a warning, just in
             | case the VFX was generated by AI?
             | 
             | I think this is different to the bow v gun metaphor as I
             | can tell an arrow from a bullet, but I can foresee a future
             | where no human could tell the difference between AI-
             | assisted and non-AI-assisted VFX / art
             | 
              | I believe this is evidenced by the fact that people can
              | go around accusing any art piece of being AI art, and
              | the burden of proving them wrong falls on the artist.
              | Essentially, I believe we are rapidly approaching the
              | point where it won't matter if someone uses AI in their
              | art, because people won't be able to tell anyway.
        
       | _trampeltier wrote:
        | So beauty filters are OK, but what's the true difference
        | between a strong beauty filter and a face change?
        
         | vkou wrote:
         | Society is simply revisiting a conversation about doctored
         | photographs, videos, and audio recordings.
         | 
          | The last word on this subject was not written in the
          | 1920s; it's good to revisit old assumptions every century
          | or so, when new forms of media and media manipulation
          | emerge.
         | 
         | The first pass on it is unlikely to be the best, or even the
         | last one.
        
           | diggan wrote:
           | > The first pass on it is unlikely to be the best, or even
           | the last one.
           | 
           | And just like a prototype that would never end up in
           | production, we'll remain with the first implementation we
           | could think of _cough_ copyright _cough_
        
             | callalex wrote:
             | This is a very inaccurate depiction of copyright. It
             | originally only lasted around 20 years with the option to
             | double it. Then it was reformed over and over across
             | history to create the monster we have today.
        
             | vkou wrote:
             | Copyright has been revised, overhauled and redefined
             | multiple times over the past few centuries. You couldn't
             | have picked a worse example.
             | 
              | Here's an obvious question that came up (and was
              | resolved differently in different jurisdictions): can
              | photographs be copyrighted? What about photographs made
              | in public? Of a market street? Of the Eiffel Tower? Of
              | street art? Can an artist forbid photography of their
              | art? An actor of their performance? A celebrity of
              | their likeness? A private individual of their face?
              | Does the purpose for which the photograph will be used
              | matter?
             | 
             | At what point does a photograph have sufficient creative
             | input to be copyrightable? Is pressing a button on a camera
             | creative input? What about a machine that presses that
             | button? Only humans can create copyrightable works under
             | most jurisdictions. Is arranging the scene to be
             | photographed a creative input? Can I arrange a scene just
             | like yours and take a photo of it? Am I violating your
             | copyright by doing it?
             | 
              | There are tens of thousands of pages of law and legal
              | precedent that answer those questions. As a
              | conversation, it went on for decades, with no simple
              | first-version solution sticking.
        
           | ravenstine wrote:
           | > Society is simply revisiting a conversation about doctored
           | photographs, videos, and audio recordings.
           | 
           | society in this case = media companies
        
         | dylan604 wrote:
          | Doctoring an image of a willing model/actor is not the
          | same thing as a 100% made-up image attempting to look like
          | a willing model/actor.
        
       | floatrock wrote:
       | "made using AI" is such a fuzzy all-encompassing term that this
       | feels like it will turn into another California Prop 65 warning
       | scenario. Pretty soon every video will have a disclaimer like:
       | 
       | WARNING: This video contains content known to the State of Google
       | to be generated by AI algorithms and/or tools.
       | 
        | OK, beauty face filters are not included. How about
        | character motion animations? How detailed does the After
        | Effects plugin need to be before it's considered AI? Can we
        | generate just a background? Just a minor subject in the
        | foreground? Or is it like pornography, where we'll know it
        | when we see it?
       | 
       | I fear AI tools will soon become so embedded in normal workflows
       | that it's going to become a question of "how much" not
       | "contains", and "how much" is such a blurry, subjective line that
       | it's going to make any binary disclaimer meaningless.
        
         | wwalexander wrote:
          | You might be interested in Adobe's "Content Credentials"
          | [1], which seemingly aims to clarify exactly what
          | processing has been applied to an image. I don't like the
          | idea of Adobe being the gatekeeper of image-fidelity
          | verification, but the idea is intriguing, and it seems
          | like we'll need something like this (that camera makers
          | sign onto) to deal with AI.
         | 
         | EDIT: I think these should also include whatever built-in
         | processing is applied to the raw sensor data within the camera
         | itself.
         | 
         | [1] https://helpx.adobe.com/creative-cloud/help/content-
         | credenti...
        
       | fortran77 wrote:
        | We need to fix the title. It's not just AI -- it's any
        | realistic scene generated by VFX, animation, or AI. The
        | title of the blog post is "How we're helping creators
        | disclose altered or synthetic content" -- it shouldn't say
        | AI in the Hacker News title.
       | 
       | > Generating realistic scenes: Showing a realistic depiction of
       | fictional major events, like a tornado moving toward a real town.
       | 
       | Does the Wizard of Oz tornado scene need a warning now? [0] (Of
       | course not, but it may be hard to draw the line in some places.)
       | 
       | [0] https://www.grunge.com/486387/heres-how-the-tornado-scene-
       | in...
        
         | wnc3141 wrote:
          | Yes, but that's very hard and doesn't scale (it can't be
          | cheaply shot from multiple angles, etc.)
        
         | pixelcloud wrote:
         | They don't make a distinction between AI generated and VFX.
         | This is contained within the linked article.
        
       | jquery wrote:
        | This is great. A really well-thought-out policy, in my
        | opinion. Sure, some people will try to get around the
        | restrictions, especially nefarious actors, but the more
        | popular the channel, the faster they'll get caught. It also
        | doesn't try to distinguish between regular special effects
        | and AI-generated special effects, which is wise.
        
         | m463 wrote:
         | I don't know, sometimes rules need ambiguity, like "high crimes
         | and misdemeanors", but other times the little guys lose, like
         | civil asset forfeiture.
        
       | dotnet00 wrote:
        | Without enforceability it'll go the same way as it has on
        | Pixiv: the good actors will properly label their AI-
        | utilizing work, while the bad actors will continue to lie to
        | try to maximize their audience until they get caught, then
        | rinse and repeat. Kind of like crypto scammers.
       | 
        | For context, Pixiv had to deal with a massive wave of AI
        | content being dumped onto the site by wannabe artists
        | basically right as the initial diffusion models became
        | accessible. They responded by making 'AI-generated' a
        | checkbox to go with the options to mark NSFW, and by adding
        | an option for users to disable AI-generated content from
        | being recommended to them. Then, after an incident of
        | someone using their Patreon-style service to pretend to be
        | a popular artist, selling commissions generated by AI to
        | copy the artist's style, they banned AI-generated content
        | from being offered through that service.
        
         | jtriangle wrote:
          | It also remains to be seen whether labeling your content
          | as containing AI-generated work will help or hurt your
          | viewership.
         | 
         | My guess is that youtube is going to downrank this content, and
         | may be trying to crowdsource training data in order to do this
         | automatically.
        
           | dotnet00 wrote:
           | I think that for now they're just going to use it as a means
           | of figuring out what kind of AI-involved content people are
           | ok with and what kind they react negatively to.
           | 
           | Personally, I've developed a strong aversion to content that
           | is primarily done by AI with very little human effort on top.
           | After how things went with Pixiv I've come to hold the belief
           | that our societies don't help people develop 'cultural
           | maturity'. People want the clout/respect of being a popular
           | artist/creator, without having to go through the journey they
           | all go through which leads to them becoming popular. It's
           | like wanting to use the title of Doctor without putting in
           | the effort to earn a doctorate, the difference just being
           | that we do have a culture of thinking that it's bad to do
           | that.
        
         | dotancohen wrote:
          | I think that the idea is mostly to dictate culture. And I
          | like the idea, and not only for preventing fraud. Ever
          | since the first Starship launches, reality has looked more
          | incredible than fiction. Go look up the SN8 landing video
          | and tell me that does not look generated. I just want to
          | know what is real and what is generated, by AI or not.
         | 
         | I think that this policy is not perfect, but it is a step in
         | the right direction.
        
         | russdill wrote:
         | I think one of the bigger issues will be false positives.
         | You'll do an upload, and youtube will take it down claiming
         | that some element was AI generated. You can appeal, but it'll
         | get automatically rejected. So you have to rework your video
         | and figure out what it thought might be AI generated and re-
         | upload.
        
       | strangescript wrote:
        | This is a pointless, nearly unenforceable rule meant to
        | make people feel better. Sure, if you generate something
        | that seems like a real event and is provably false, you can
        | be caught, but anything mundane is not enforceable. Once
        | models reach something like Sora 1.5's level of ability, we
        | are kind of doomed on knowing what's real in video.
        
         | gloosx wrote:
          | Nah, there will still be certain patterns, and they will
          | be recognisable.
          | 
          | Once something with Sora 1.5's level of ability is out
          | there, a reverse-Sora model which can recognise AI-made
          | videos should definitely be possible to train as well.
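          | 
          | As a frame-level stand-in for that idea, a minimal sketch
          | (assuming PyTorch/torchvision, and frames already
          | extracted into hypothetical data/real and data/ai
          | folders) could look like:
          | 
          |     import torch
          |     import torch.nn as nn
          |     from torchvision import datasets, models, transforms
          | 
          |     # Frames pre-extracted from videos into data/real and
          |     # data/ai (hypothetical layout for this sketch).
          |     tfm = transforms.Compose([
          |         transforms.Resize((224, 224)),
          |         transforms.ToTensor(),
          |     ])
          |     ds = datasets.ImageFolder("data", transform=tfm)
          |     dl = torch.utils.data.DataLoader(ds, batch_size=32,
          |                                      shuffle=True)
          | 
          |     # Fine-tune a stock backbone as a binary
          |     # real-vs-AI classifier.
          |     net = models.resnet18(weights="IMAGENET1K_V1")
          |     net.fc = nn.Linear(net.fc.in_features, 2)
          |     opt = torch.optim.Adam(net.parameters(), lr=1e-4)
          |     loss_fn = nn.CrossEntropyLoss()
          | 
          |     for epoch in range(3):
          |         for x, y in dl:
          |             opt.zero_grad()
          |             loss = loss_fn(net(x), y)
          |             loss.backward()
          |             opt.step()
          | 
          | Whether a detector trained like this keeps up as the
          | generators improve is exactly the arms race in question.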
        
         | supertrope wrote:
          | > This is a pointless, nearly unenforceable rule meant to
          | make people feel better.
         | 
         | Pretty much. If Google says "Swiper no swiping" they can point
         | at their policy when lobbying against regulations or pushing
         | back against criticism.
         | 
         | Before surveillance capitalism became the norm, web services
         | told users to not share personal information, and to not trust
         | other users they had not met in real life.
        
       | nextworddev wrote:
       | .
        
         | carlossouza wrote:
         | Exactly what I thought.
         | 
         | Let's see how long it will take them to collect enough data and
         | train a model to distinguish AI-generated from user-generated
         | videos.
        
       | the_duke wrote:
        | They don't bother to mention it, but this is actually to
        | comply with the new EU AI Act.
       | 
       | > Providers will also have to ensure that AI-generated content is
       | identifiable. Besides, AI-generated text published with the
       | purpose to inform the public on matters of public interest must
       | be labelled as artificially generated. This also applies to audio
       | and video content constituting deep fakes
       | 
       | https://digital-strategy.ec.europa.eu/en/policies/regulatory....
       | 
       | Some discussion here:
       | https://news.ycombinator.com/item?id=39746669
        
         | machinekob wrote:
          | Of course they don't mention it; for big tech companies,
          | EU = Evil.
        
           | duringmath wrote:
            | You'd think they were evil too if they let a bunch of
            | middlemen and parasitic companies dictate how the
            | software you invested untold sums and hours into
            | developing and marketing should work.
        
         | ajross wrote:
         | Seems like this is sort of a manufactured argument. I mean,
         | should every product everywhere have to cite every regulation
         | it complies with? Your ibuprofen bottle doesn't bother to cite
         | the FDA rules under which it was tested. Your car doesn't list
         | the DOT as the reason it's got ABS brakes.
         | 
         | The EU made a rule. YouTube complied. That changes the user
         | experience. They documented it.
        
           | contravariant wrote:
           | Doesn't seem _that_ out of place for a blog post on the exact
           | change they made to comply though.
           | 
           | I mean you'd expect a pharmaceutical company to mention which
           | rules they comply with at some point, even if not on the
           | actual product (though in the case of medicine, probably also
           | on the actual product).
        
           | hnlmorg wrote:
           | If the contents of my ibuprofen bottle changed due to
           | regulatory changes, then it wouldn't be weird to have that
           | cited at all.
        
           | LudwigNagasena wrote:
           | Certain goods sold in the EU are required to have CE marking
           | to affirm that they satisfy EU regulations.
        
             | nlehuen wrote:
              | +1. In France at least, food products must not suggest
              | that mandatory properties like "preservative-free" are
              | unique to them. When they advertise this on the
              | package, they must disclose that it's per regulation.
              | Source: https://www.economie.gouv.fr/particuliers/denrees-
              | alimentair...
        
         | supriyo-biswas wrote:
         | India is considering very similar laws as well (though not
         | implemented at this time)[1], so it's not just the EU.
         | 
         | Also, if every applicable regulation had to be mentioned, it'd
         | be a very long list.
         | 
         | [1]
         | https://epaper.telegraphindia.com/imageview/464914/53928423/...
        
           | hoffs wrote:
            | Considering is different from actually having something
            | that is enforced.
        
         | bgirard wrote:
          | I wouldn't be surprised if this ends up like Prop 65
          | cancer warnings, or cookie banners. The intention might be
          | to separate believable but low-quality hallucinated AI
          | content spam from high-quality manual content. But it will
          | backfire like Prop 65: you'll see notices everywhere,
          | because increasingly AI will be used in all parts of the
          | content creation pipeline.
          | 
          | I see YouTube's own guidelines in the article and they
          | seem reasonable. But I think over time the line will move
          | and become unclear, and we'll end up like Prop 65 anyway.
        
           | bombcar wrote:
           | This is exactly what will happen, just like with cookie
           | warnings, etc.
           | 
           | To be effective, warnings like this have to be MANDATED on
           | the item in question, and FORBIDDEN when not present.
           | 
           | Otherwise you stick a prop 65 "may contain" warning on
           | everything, and it's pointless.
           | 
           | (This post may have been generated by AI; this notice in
           | compliance with AI notification complications.)
        
             | Jensson wrote:
             | The "sponsored content" tag on youtube seems to work very
             | well though. Most content creators don't want to label
             | their videos sponsored unless they are, I assume the same
             | goes for AI generated content flags. Why would a manual
             | content creator want to add that?
             | 
             | > This post may have been generated by AI
             | 
             | I doubt "may" is enough.
        
               | nemomarx wrote:
               | I think the concern is people might use the label out of
               | caution if Adobe has some automatic AI enhancement in
               | your video editor or whatever?
        
               | wongarsu wrote:
                | That would be either poor understanding or poor
                | enforcement of the rule, since they specifically
                | list things like special effects, beauty filters,
                | etc. as allowed.
               | 
               | A more plausible scenario would be if you aren't sure if
               | all your stock footage is real. Though with youtube
               | creators being one of the biggest groups of customers for
               | stock footage I expect most providers will put very clear
               | labeling in place.
        
               | ehsankia wrote:
                | That's a much clearer line though; it's much simpler
                | to know whether you were paid to create content or
                | not. Use of AI isn't, especially if it's deep in
                | some tool you used.
                | 
                | Does blurring part of the image with Photoshop
                | count? What if Photoshop used AI behind the scenes
                | for whatever filter you applied? What about some
                | video editor feature that helps with audio/video
                | synchronization or background removal?
        
               | ryandrake wrote:
                | Maybe this could motivate toolmakers to label their
                | own products as "Uses AI" or "AI Free", allowing
                | content creators to verify their entire toolchain to
                | be AI-free.
                | 
                | As opposed to today, where companies are doing
                | everything they can, stretching the truth, just so
                | they can market their tools as "Using AI."
        
               | huhlig wrote:
                | Where do you draw the line on things like Photoshop
                | or Premiere, where AI suffuses the entire product?
                | Not everything AI is generative AI.
        
               | munk-a wrote:
               | You can't use them - other tools that match most of the
               | functionality without including AI tools will emerge and
               | take over the market if this is an important thing to
               | people... alternatively Adobe wises up and rolls back AI
               | stuff or isolates it into consumer-level only things that
               | mark images as tainted.
        
               | ryandrake wrote:
               | This is a great point and I don't know. We are entering a
               | strange and seemingly totally untrustworthy world. I
               | wouldn't want to have to litigate all this.
        
               | xp84 wrote:
                | This is depressing: we're going to intentionally use
                | worse tools to avoid some idiotic scare label.
                | Basically the entire GMO or "artificial flavor"
                | debate all over again.
                | 
                | If you edit this image by hand you're good, but if
                | you use a tool that "uses AI" to do it, you need to
                | put the scare label on. Even if, pixel for pixel,
                | both methods output the identical image! Just as
                | GMO/not-GMO has no correlation with harmful
                | compounds being in the food, and artificial flavors
                | are generally purer than those extracted by some
                | wacky and more expensive means from a "natural"
                | item.
        
               | alwa wrote:
               | You may be interested in the Content Authenticity
               | Initiative's Content Credentials. The idea seems to be to
               | keep a more-or-less-tamperproof provenance of changes to
               | an image from the moment the light hits the camera's
               | sensor.
               | 
               | It sounds like the idea is to normalize the use of such
               | an attribution trail in the media industry, so that
               | eventually audiences could start to be suspicious of
               | images lacking attribution.
               | 
               | Adobe in particular seems to be interested in making
               | GenAI-enabled features of its tools automatically apply a
               | Content Credential indicating their use, and in making it
               | easier to keep the content attribution metadata than to
               | strip it out.
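                | 
                | The core idea (this is not Adobe's actual format;
                | C2PA manifests are signed binary structures) can be
                | sketched as a hash chain over the edit history:
                | 
                |     import hashlib
                |     import json
                | 
                |     def add_step(trail, action, tool):
                |         # Bind each edit record to the hash of
                |         # everything that came before it.
                |         prev = trail[-1]["hash"] if trail else "origin"
                |         record = {"action": action, "tool": tool,
                |                   "prev": prev}
                |         payload = json.dumps(record, sort_keys=True)
                |         record["hash"] = hashlib.sha256(
                |             payload.encode()).hexdigest()
                |         trail.append(record)
                | 
                |     trail = []
                |     add_step(trail, "capture", "camera-firmware")
                |     add_step(trail, "crop", "editor")
                |     add_step(trail, "genai-fill", "editor-genai")
                |     # Tampering with an earlier record breaks every
                |     # later hash; in the real scheme each step is
                |     # also signed, so the chain can't simply be
                |     # rebuilt by the tamperer.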
        
               | munk-a wrote:
                | This is a problem of provenance (as it's known in
                | the art world), and being certain of the provenance
                | is a difficult thing to do. It's like converting a
                | cowboy-coded C++ project to consistently using
                | const: you need to dig deep into every corner and
                | prefer dependencies that obey proper const usage.
                | Doing that as an individual content creator would be
                | extremely daunting, but this isn't about
                | individuals. If Getty has a policy against AI and
                | guarantees no AI generation on their platform while
                | Shutterstock doesn't[1], then creators may end up
                | preferring Getty so that they can label their
                | otherwise AI-free content as such on YouTube. Maybe
                | it gets incorporated into the algorithm and gets
                | them more views; maybe it's just a moral thing. If
                | there's market pressure, then the down-the-chain
                | people will start getting stricter and, especially
                | if one of those intermediary stock providers
                | violates an agreement and gets hit with a lawsuit,
                | we might see a more concerted movement to crack down
                | on AI generation.
                | 
                | At the end of the day it's going to be drenched in
                | contracts and obscure proofs of trust, i.e. some
                | signing cert you can attach to an image if it was
                | generated in an entirely controlled environment that
                | prohibits known AI generation techniques. That
                | technical side is going to be an arms race, and I
                | don't know if we can win it (which may just result
                | in small creators being bullied out of the
                | market)... but above the technical level I think
                | we've already got all the tools we need.
               | 
               | 1. These two examples are entirely fabricated
        
               | tehwebguy wrote:
               | The "Sponsored Content" tag on a channel should link to a
               | video of face / voice of the channel talking about what
               | sponsored content means in a way that's FTC compliant.
        
             | bgirard wrote:
              | > To be effective, warnings like this have to be
              | MANDATED on the item in question, and FORBIDDEN when
              | not present.
              | 
              | I think for it to be effective you'd have to require
              | them to provide an itemized list of WHAT is AI-
              | generated. Otherwise, what if a content creator has a
              | GenAI logo or feature that's in every video and just
              | puts up a lazy blanket disclaimer?
             | 
             | > (This post may have been generated by AI; this notice in
             | compliance with AI notification complications.)
             | 
             | :D
        
               | wolpoli wrote:
                | Yes, AI could have been used anywhere in the
                | production pipeline: in the script, in stock photos
                | or video, and more.
        
               | BHSPitMonkey wrote:
               | The same is true for an asset's licensing/royalty-free
               | status, which creators are surely aware of when pulling
               | these things in.
        
               | nomel wrote:
               | For something like YouTube, you could have the video's
               | progress bar be a different color for the AI sections.
               | Maybe three: real, unknown, AI. Without an "unknown" type
               | tag, you wouldn't be able to safely use clips.
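                | 
                | As a sketch (a hypothetical data model, not
                | anything YouTube exposes), the per-video metadata
                | could be as simple as labeled time ranges:
                | 
                |     from dataclasses import dataclass
                |     from enum import Enum
                | 
                |     class Provenance(Enum):
                |         REAL = "real"
                |         UNKNOWN = "unknown"
                |         AI = "ai"
                | 
                |     @dataclass
                |     class Segment:
                |         start_s: float  # start time in seconds
                |         end_s: float    # end time in seconds
                |         label: Provenance
                | 
                |     # The player colors the progress bar segment by
                |     # segment; unlabeled clips default to UNKNOWN.
                |     segments = [
                |         Segment(0.0, 42.5, Provenance.REAL),
                |         Segment(42.5, 51.0, Provenance.AI),
                |         Segment(51.0, 63.0, Provenance.UNKNOWN),
                |     ]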
        
             | schoen wrote:
             | The Prop 65 warnings are probably unhelpful even when
             | accurate because they don't show anything about the level
             | of risk or how typical or atypical it is for a given
             | context. (I'm thinking especially about warnings on
             | buildings more than on food products, although the same
             | problem exists to some degree for food.)
             | 
             | It's very possible that Prop 65 has motivated some
             | businesses to avoid using toxic chemicals, but it doesn't
             | often help individuals make effective health decisions.
        
               | dawnerd wrote:
               | Prop 65 is also way too broad. It needs to be specific
               | about what carcinogens you're being exposed to and not
               | just "it's a parking garage and this is our legally
               | mandated sign"
        
               | inferiorhuman wrote:
               | As of 2016 companies are required to list the specific
               | chemical and how to avoid or minimize exposure.
        
               | xp84 wrote:
               | Seems to still be pretty pointless considering that roads
               | and parking lots and garages are all to be avoided if you
               | want to avoid exposure... just stay away from any of
               | those
        
               | inferiorhuman wrote:
               | It's great for things you wouldn't expect. Like mercury
               | in fish, or lead and BPA in plastic.
        
               | dawnerd wrote:
               | I have yet to see any of that in practice. Guessing no
               | one is enforcing it.
        
               | inferiorhuman wrote:
               | There was a push to crack down on over labeling, but
               | manufacturers have pushed back quite a bit.
               | 
               | https://www.corporatecomplianceinsights.com/california-
               | warni...
        
               | katbyte wrote:
                | While you may think it didn't have an effect, a
                | recent 99pi episode covered it, and it sounds like
                | it has definitely motivated many companies to
                | remove chemicals from their products.
               | 
               | It's not perfect but it has had a positive effect
               | https://99percentinvisible.org/episode/warning-this-
               | podcast-...
        
               | MBCook wrote:
               | Beat me to it!
               | 
               | As a non-Californian I'm used to them from the little
               | stickers on seemingly every electronics cable that comes
               | with something I buy.
               | 
               | But from listening to that episode when it came out it
               | sounds like it really has helped a lot, even if it's also
               | become kind of obnoxious.
        
               | inferiorhuman wrote:
               | seemingly every electronics cable
               | 
               | If it's something you've bought recently the offending
               | ingredient should be listed. Otherwise, my money would be
               | on lead being used as a plasticizer. Either way at least
               | you have the tools to find out now.
        
               | pixl97 wrote:
               | But does it actually benefit the customer?
               | 
                | Like, is it one of those things that removes a
                | 1-in-a-billion chance of cancer, and now you have a
                | product that wears out twice as fast, leading to a
                | doubling of sales?
        
               | schoen wrote:
               | Thanks, that's an interesting overview.
        
               | ben_w wrote:
               | Indeed.
               | 
               | First time I was in CA, my then-partner's mother saw a
               | Prop 65 notice and asked why they couldn't just ban the
               | substances.
               | 
               | We were in a restaurant that served alcohol, one of the
               | known substances is... alcoholic beverages.
               | 
               | https://en.wikipedia.org/wiki/California_Proposition_65_l
               | ist...
               | 
               | Banning that didn't work out so well the last time.
        
               | Fatnino wrote:
                | The entire Stanford campus (which is much bigger
                | than a typical university's) has a Prop 65 warning
                | at the entrance.
                | 
                | 898 Bowdoin St https://maps.app.goo.gl/uHTTd7yYtAibAg1QA
                | 
                | In some of the Street View passes the sign is
                | washed out. Click through to different times to see
                | the sign.
        
             | aeternum wrote:
              | How much AI is enough to warrant it, though? Like, is
              | human motion-capture-based content AI or human? How
              | about automatic touch-up makeup? At what point does
              | touch-up become face swap?
        
             | jcalx wrote:
              | This will make AI the new sesame allergen [1] -- if
              | you aren't 100% certain every asset you use isn't AI-
              | generated, then it makes sense to stick some AI-
              | generated content in and label the video accordingly,
              | just to be safe on compliance.
             | 
             | [1] https://www.npr.org/sections/health-
             | shots/2023/08/30/1196640...
        
               | xp84 wrote:
               | Wow. This is an awesome education on why you can't just
               | regulate the world into what you want it to be without
               | regard to feasibility. I'm sure the few who are allergic
               | are mad, but it would also be messed up to just ban all
               | "allergens" across the board - which is the only
               | effective and fair way to guarantee that this approach
               | couldn't ever be used to comply with these laws. There
               | isn't much out there that _somebody_ isn't allergic to or
               | intolerant of.
        
               | pixl97 wrote:
               | >would also be messed up to just ban all "allergens"
               | across the board -
               | 
                | Lol, this sounds like one of those fables where an
                | idiot king bans all allergens, then a week later
                | everyone in the kingdom is starving to death because
                | it turns out that in a large enough population there
                | will be enough different allergies that everything
                | gets banned.
        
             | aspyct wrote:
             | Disagree. I will proudly write that my work is AI free.
        
             | winter_blue wrote:
             | I've found Prop 65 warnings to be useful. They're not
             | pervasively everywhere; but when I see a Prop 65 warning, I
             | consciously try to pick a product without it.
        
             | paulddraper wrote:
             | > To be effective, warnings like this have to be MANDATED
             | on the item in question, and FORBIDDEN when not present.
             | 
             | That already happens for foods.
             | 
             | The solution for suppliers is to intentionally add small
             | quantities of allergens (sesame). [1] By having that as an
             | actual ingredient, manufacturers don't have to worry about
             | whether or not there is cross contamination while
             | processing.
             | 
             | [1] https://www.medpagetoday.com/allergyimmunology/allergy/
             | 10652...
        
             | tehwebguy wrote:
             | I think the opposite will happen, non-AI content will be
             | "certified organic"
        
           | aodonnell2536 wrote:
           | This may be a good thing, as it could teach the public some
           | skills for identifying whether or not content has been AI
           | generated.
           | 
           | Eventually, it may be completely indiscernible, but we aren't
           | there yet
        
             | yaomingite wrote:
             | AI can already create photo-realistic images, and the old
             | "look at the hands" rule doesn't really work on images
             | generated with modern models.
             | 
             | There may be a few tells still, but those won't last long,
             | and the moment someone can find a new pattern you can make
             | that a negative prompt for new images to avoid repeating
             | the same mistake.
             | 
              | I think we are already there; it only seems like we
              | aren't because many people are using free low-quality
              | models with a low number of steps, because it's more
              | accessible.
        
           | samstave wrote:
            | Or construction/architecture things such as "TITLE N"
            | compliance...
            | 
            | For any physical build there are typically "TITLE
            | 25"-style disclosures required for any new-build
            | plans...
            | 
            | Maybe we'll need "TITLE N: as designed by AI"
            | disclosures...
        
           | lp0_on_fire wrote:
           | Am I the only one who is bothered by calling this phenomenon
           | "hallucinating"?
           | 
            | It's marketing-speak and corporate buzzwords to cover
            | for the fact that their LLMs often produce wrong
            | information: because they aren't capable of
            | understanding your request or its nuance, or the
            | training data they used is wrong, or the model just
            | plain sucks.
            | 
            | Would we tolerate such doublespeak if it were anything
            | else? "Well, you ordered a side of fries with your
            | burger, but because our wait staff made a
            | mistake... sorry, hallucinated... they brought you a
            | peanut butter sandwich that's growing mold instead."
            | 
            | It gets more concerning when the stakes are raised, when
            | LLMs (inevitably) start getting used in more important
            | contexts, like healthcare: "I know your file says you're
            | allergic to penicillin and you repeated it when talking
            | to our AI doctor, but it hallucinated that you weren't."
        
             | samatman wrote:
             | You're not the only one. I will continue to fight the
             | losing battle for "confabulation" for as long as the
             | problem remains current.
        
             | altairprime wrote:
              | Human beings regularly hallucinate details that aren't
              | real when asked to provide their memories of an event,
              | and often don't realize they're doing it at all. So
              | while AI definitely is lacking in the "can assess fact
              | versus fiction" department, that's an overlapping
              | problem with "invents things that aren't actually
              | real". It can, today, hallucinate accurate and
              | inaccurate information, but it can't determine
              | validity _at all_, so it's sometimes wrong even when
              | _not_ hallucinating.
        
             | IshKebab wrote:
             | Nonsense. It isn't marketing speak to cover for anything.
             | It's a pretty good description of what is happening.
             | 
             | The reason models hallucinate is because we train them to
             | produce linguistically plausible output, which _usually_
             | overlaps well with factually correct output (because it
              | wouldn't be plausible to say e.g. "Barack Obama is
             | white"). But when there isn't much data to show that
             | something that is totally made up is implausible then
             | there's no penalty to the model for it.
             | 
             | It's nothing to do with not being able to understand your
             | request, and it's rarely because the training data is
             | wrong.
        
               | dartos wrote:
               | "Hallucinate" is definitely marketing.
               | 
                | It translates to "creates text which contains
                | incorrect or invalid information".
                | 
                | The latter just doesn't sound as good in
                | headlines/articles/tutorials (e.g. marketing
                | material).
        
               | ryandrake wrote:
               | We already have words for when a computer program
               | produces unexpected/incorrect output: "defect" and "bug"
        
               | dartos wrote:
               | The weird thing is, it's not a bug of software, it's a
               | limitation.
               | 
               | The software is working as designed, statistics are just
               | imperfect
        
               | ben_w wrote:
               | It's a term of art from the days of image recognition AI
               | that would confidently report seeing a giraffe while
               | looking at a picture of an ambulance.
               | 
               | It doesn't feel right to me either, to use it in the
               | context of generative AI, and I'd support renaming this
               | behaviour in GenAI (text and images both) -- though
               | myself I'd call this behaviour "mis-remembering".
               | 
               | Edit: apparently some have suggested "delusion". That
               | also works for me.
        
               | lucianbr wrote:
               | So if I replied to your comment with "you are incorrect"
               | I would be putting you in a worse light than saying "you
               | are hallucinating"? The second is making it sound better?
               | Doesn't feel that way to me.
        
               | JohnFen wrote:
               | My problem with "hallucination" isn't that it makes error
               | sound better or worse, it's that it makes it sound like
               | there's a consciousness involved when there isn't.
        
               | IshKebab wrote:
               | It's definitely not marketing. It has been in use for a
               | lot longer than LLMs existed.
        
               | dartos wrote:
               | Links?
               | 
               | Also those two statements are not mutually exclusive.
               | 
               | Errors in statistical models being called hallucinations
               | in the past does not mean that term is not marketing
               | speak for what I said earlier.
        
             | lucianbr wrote:
             | To me it sounds pretty damning. "The tool hallucinates"
             | makes me think it's completely out of touch with reality,
             | spouting nonsense. While "It has made a mistake, it is
             | factually incorrect" would apply to many of my comments if
             | taken very literally.
             | 
             | Webster definition: "a sensory perception (such as a visual
             | image or a sound) that occurs in the absence of an actual
             | external stimulus and usually arises from neurological
             | disturbance (such as that associated with delirium tremens,
             | schizophrenia, Parkinson's disease, or narcolepsy) or in
             | response to drugs (such as LSD or phencyclidine)".
             | 
             | I would fire with prejudice any marketing department that
             | associated our product with "delirium tremens,
             | schizophrenia, [...] LSD or phencyclidine".
        
             | ToValueFunfetti wrote:
              | I don't get this at all. "Hallucinate" to me can only
              | mean "produce false information". I've only ever seen
              | it used pejoratively re: AI, and I don't understand
              | what it covers up. How else are people interpreting
              | it? I could see the point if you were saying that it
              | implies sentience that isn't there, but your analogy
              | to a restaurant implies that's not what you're getting
              | at.
        
             | bcrosby95 wrote:
             | > Would we tolerate such doublespeak it were anything else?
             | 
             | Yes: identity theft. My identity wasn't "stolen", what
             | really happened was a company gave a bad loan.
             | 
             | But calling it identity theft shifts the blame. Now it's my
             | job to keep my data "safe", not their job to make sure
             | they're giving the right person the loan.
        
             | oaktowner wrote:
              | I can't stand it being called "hallucinating" because
              | it anthropomorphizes the technology. This isn't a
              | consciousness that is "seeing" things that don't
              | exist: it's a word generator that is generating words
              | that don't make sense (not in a syntactic sense, but
              | in a semantic sense).
              | 
              | Calling it "hallucination" implies that there are
              | (other) moments when it is understanding the world
              | correctly, and that itself is not true. At those
              | moments, it is a word generator that is generating
              | words that DO make sense.
              | 
              | At no point is this a consciousness, and
              | anthropomorphizing it gives the impression that it is
              | one.
        
               | JohnFen wrote:
               | This. It's not "hallucination", it's "error".
        
               | krapp wrote:
               | It isn't an error, either. It's doing exactly what it's
               | intended to, exactly as it's intended to do it. The error
               | is in the human assumption that the ability to construct
               | syntactically coherent language signals self-awareness or
               | sentience. That it _should_ be capable of understanding
               | the semantics correctly, because humans obviously can.
               | 
               | There really is no correct word to describe what's
               | happening, because LLMs are effectively philosophical
               | zombies. We have no metaphors for an entity that can
               | appear to hold a coherent conversation, do useful work
               | and respond to commands but _not think._ All we have is
               | metaphors from human behavior which presume the
               | connection between language and intellect, because that
               | 's all we know. Unfortunately we also have nearly a
               | century of pop culture telling us "AI" is like Data from
               | Star Trek, perfectly logical, superintelligent and always
               | correct.
               | 
               | And "hallucination" is good enough. It gets the point
               | across, that these things can't be trusted.
               | "Confabulation" would be better, but fewer people know
               | it, and it's more important to communicate the
               | untrustworthy nature of LLMs to the masses than it is to
               | be technically precise.
        
               | JohnFen wrote:
               | > It isn't an error, either. It's doing exactly what it's
               | intended to, exactly as it's intended to do it.
               | 
               | If the output is incorrect, that's error. It may not be a
               | bug, but it is still error.
        
             | programjames wrote:
             | I think people are much more conservative with their health
             | than text generation. If the text looks funky, you can just
             | try regenerating it, or write it yourself and have only
             | lost a few minutes. If your health starts looking funky,
             | you're kind of screwed.
        
           | makeitdouble wrote:
            | You cite Prop 65 as backfiring, but it looks to me like
            | the original intent was reducing toxic products in tap
            | water, for instance, and it largely achieved that goal.
            | 
            | From there, warnings proliferated on many more products,
            | but being told that chocolate bars can cause cancer is
            | still a reasonable tradeoff. Especially as nothing is
            | stopping the law from being tweaked from there.
            | 
            | Comparing it to Prop 65 or GDPR makes it look like a
            | probably deeply effective, if slightly annoying,
            | rule... I sure hope that's what we end up with.
        
           | Aerroon wrote:
            | I think the main way the line will move is in what is
            | considered "realistic" and what is "animation".
            | 
            | A lot of early Stable Diffusion output seemed
            | "realistic", but comparing it to newer stuff makes it
            | stand out as obviously AI-generated and unrealistic.
        
           | aiauthoritydev2 wrote:
            | Yes. Nearly all EU regulations are going to end up like
            | that. Over-regulate and people develop blindness to
            | regulations. Our best hope right now is that the EU
            | becomes more and more irrelevant as the gap between the
            | US and EU grows, to the point that American companies
            | can simply bankroll EU leaders.
        
           | JohnFen wrote:
           | > You'll see notices everywhere because increasingly AI will
           | be used in all parts of the content creation pipeline.
           | 
           | Which would be OK with me, personally. Right now, those
           | cookie banners do serve a valuable function for me -- when I
           | see them, I know to treat the site with caution and
           | skepticism. If AI warnings end up similar, they too will
           | serve a similar purpose. It's all better than nothing.
        
             | TurningCanadian wrote:
             | I like sites whose cookie banner gives options instead of
             | only having "Accept All". It makes you feel more respected
             | as a user.
        
           | renegade-otter wrote:
           | Cookie banners are not required even by EU laws. It's a
           | stupid trend everyone is copying.
        
             | hnbad wrote:
             | That's technically correct but not entirely true.
             | 
             | The ePrivacy directive and GDPR don't literally require
             | cookie banners but the former requires disclosure of
             | specific information and the latter requires consent for
              | most forms of data collection and processing. Even the
              | 2002 directive actually requires an option to refuse
              | cookies, which many cookie banners still fail to
              | implement properly post-GDPR.
             | 
             | The problem is that most websites want to start collecting,
             | tracking and processing data that requires consent before
             | any interaction takes place that would allow for a
             | contextual opt-in. This means they have to get that consent
             | somehow and the "cookie banner" or consent dialog serves
             | that purpose.
             | 
             | Of course many (especially American) implementations get
             | this hilariously wrong by a) collecting and processing data
             | even before consent is established, b) not making opt-out
             | as trivial as opt-in despite the ePrivacy directive
             | explicitly requiring this (e.g. hiding "refuse" behind a
             | "more info" button or not giving it the same weight as
             | "accept all"), c) not actually specifying the details on
             | what data is collected etc to the level required by the
             | directive, d) not providing any way to revise/change the
             | selections (especially withdrawing consent previously
             | given) and e) trying to trick users with a manual opt-out
             | checkbox per advertiser/service labeled "legitimate
             | interest" which is an _alternative_ to consent and thus is
             | not something you can opt out of because it does not
             | require consent (but of course in these cases the use never
             | actually qualifies as  "legitimate interest" to begin with
             | and the opt-out is a poorly constructed CYA).
             | 
             | In a different world, consent dialogs could work entirely
             | like mobile app permissions: if you haven't given consent
             | for something you'll be prompted when it becomes relevant.
             | But apparently most sites bank on users pressing "accept
             | all" to get rid of the annoying banner - although of course
             | legally they probably don't even have data to determine if
             | this gamble works for them because most analytics requires
             | consent (i.e. your analytics will show a near 100%
             | acceptance rate because you only see the data of users who
             | opted into analytics and they likely just pressed "accept
             | all").
        
         | ysofunny wrote:
          | I have a more entertaining take: "typical Google, getting
          | somebody else to give them training data in exchange for
          | free hosting of some sort"
        
         | orbital-decay wrote:
         | Labeling AI-generated content (assuming it works) is beneficial
         | for Google, as they can avoid some dataset contamination.
        
           | airspresso wrote:
           | Excellent point. With more and more AI-generated content it
           | will be key to be able to tell it apart from the human-
           | generated content.
        
         | alphazard wrote:
         | Is anyone else worried about how naive this policy is?
         | 
          | The solution here is for important institutions to get on
          | board with public key infrastructure, and start signing
          | anything they want to certify as authentic.
         | 
         | The culture needs to shift from assuming video and pictures are
         | real, to assuming they are made the easiest way possible. A
         | signature means the signer wants you to know the content is
         | theirs, nothing else.
         | 
         | It doesn't help to train people to live in a pretend world
         | where fake content always has a warning sticker.
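          | 
          | A minimal sketch of that signing model, using the Python
          | "cryptography" package (file name and key distribution are
          | invented for illustration); verification only proves which
          | key signed the bytes, not that the content is accurate:
          | 
          |   from cryptography.hazmat.primitives.asymmetric \
          |       import ed25519
          |   from cryptography.exceptions import InvalidSignature
          | 
          |   signer = ed25519.Ed25519PrivateKey.generate()
          |   public_key = signer.public_key()  # shared out of band
          | 
          |   content = open("broadcast.mp4", "rb").read()
          |   signature = signer.sign(content)  # ships with upload
          | 
          |   try:
          |       public_key.verify(signature, content)
          |       print("certified by this signer")
          |   except InvalidSignature:
          |       print("not certified by this signer")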
        
           | RandallBrown wrote:
           | One of Neal Stephenson's more recent novels deals with this
           | concept. Fake news becomes so bad that everyone starts
            | signing everything they create.
        
             | pixl97 wrote:
             | This is about as realistic as the next generation of
             | congress people ending up 40 years younger.
             | 
             | We literally have politicians talking about pouring acid on
             | hardware and expect these same bumbleheads to keep their
             | signing keys safe at the same time. The average person is
             | far too technologically illiterate to do that. Next time
              | you go to grandma's house you'll learn she traded her
             | signing key for chocolate chip cookies.
        
           | thwarted wrote:
            | I see a lot of people confusing authenticity with
            | accuracy. Someone can sign the statement "Obama is white"
            | but that doesn't make
           | it a true statement. The use of PKI as part of showing
           | provenance/chain of trust doesn't make any claims about the
           | accuracy of what is signed. All it does is assert that a
           | given identity signed something.
        
             | airspresso wrote:
             | It's not about what is being signed, it's about who signed
             | it and whether you trust that source. I want credible news
             | outlets to start signing their content with a key I can
             | verify as theirs. In that future all unsigned content is by
             | definition fishy. PKI is the only way to implement trust in
             | a digital realm.
        
           | The_Colonel wrote:
           | > The culture needs to shift from assuming video and pictures
           | are real, to assuming they are made the easiest way possible.
           | 
            | That sounds like a dystopia, but I guess we're heading in
            | that direction. I expect that a lot of fringe beliefs
            | (flat earth, lizard-people conspiracies, "the war in
            | Ukraine is fake") will become way more mainstream.
        
         | Karellen wrote:
         | What if a real person reads a script that was created with an
         | LLM? Does that count? Should it?
        
           | airspresso wrote:
            | The blog post specifically mentions that using AI to help
            | write the script does not require labeling the video.
        
             | Karellen wrote:
             | Sorry, I wasn't entirely clear that I was specifically
             | responding to the GP comment referencing the EU AI act (as
             | opposed to creating a new top-level comment responding to
             | the original blog post and Google's specific policy) which
             | pointed out:
             | 
             | > Besides, AI-generated text published with the purpose to
             | inform the public on matters of public interest must be
             | labelled as artificially generated. This also applies to
             | audio and video content constituting deep fakes
             | 
             | Clearly "AI-generated text" doesn't apply to YouTube
             | videos.
             | 
             | But, it is interesting that if you use an LLM to generate
             | text and present that text to users, you need to inform
             | them it was AI-generated (per the act). But if a real
             | person reads it out, apparently you don't (per the policy)?
             | 
             | This seems like a weird distinction to me. Should the
             | audience be informed if a series of words were LLM-
             | generated or not? If so, why does it matter if they're
             | delivered as text, or if they're read out?
        
         | pier25 wrote:
         | Thank you EU!
        
         | cmilton wrote:
         | I would take this a step further and make it required that
         | companies create an easy way for users to opt-out of this type
         | of content.
        
         | hnbad wrote:
         | Usually when a big corporation gleefully announces a change
          | like this it's worth checking whether there are any regulations
         | on that topic taking effect in the near future.
         | 
         | On a local level, I recall how various brands started making a
         | big deal of replacing disposable plastic bags with canvas or
         | paper alternatives "for the environment" just coincidentally a
         | few months before disposable plastic bags were banned in the
         | entire country.
        
       | summerlight wrote:
        | Looks like there is a huge gray area that they need to figure out
       | in practice. From
       | https://support.google.com/youtube/answer/14328491#:
       | 
        | Examples of content creators don't have to disclose:
        | 
        |   * Someone riding a unicorn through a fantastical world
        |   * Green screen used to depict someone floating in space
        |   * Color adjustment or lighting filters
        |   * Special effects filters, like adding background blur or
        |     vintage effects
        |   * Production assistance, like using generative AI tools to
        |     create or improve a video outline, script, thumbnail,
        |     title, or infographic
        |   * Caption creation
        |   * Video sharpening, upscaling or repair and voice or audio
        |     repair
        |   * Idea generation
       | 
        | Examples of content creators need to disclose:
        | 
        |   * Synthetically generating music (including music generated
        |     using Creator Music)
        |   * Voice cloning someone else's voice to use it for
        |     voiceover
        |   * Synthetically generating extra footage of a real place,
        |     like a video of a surfer in Maui for a promotional travel
        |     video
        |   * Synthetically generating a realistic video of a match
        |     between two real professional tennis players
        |   * Making it appear as if someone gave advice that they did
        |     not actually give
        |   * Digitally altering audio to make it sound as if a popular
        |     singer missed a note in their live performance
        |   * Showing a realistic depiction of a tornado or other
        |     weather events moving toward a real city that didn't
        |     actually happen
        |   * Making it appear as if hospital workers turned away sick
        |     or wounded patients
        |   * Depicting a public figure stealing something they did not
        |     steal, or admitting to stealing something when they did
        |     not make that admission
        |   * Making it look like a real person has been arrested or
        |     imprisoned
        
         | Aardwolf wrote:
         | > Synthetically generating music (including music generated
         | using Creator Music)
         | 
         | What about music made with a synthesizer?
        
           | Jensson wrote:
            | If you manually did enough work to have the copyright, it
            | is fine.
            | 
            | But since an AI can't legally hold the copyright to its
            | music, Google probably wants to know for that reason.
        
             | Aardwolf wrote:
             | I hope there is some kind of middle ground, legally, here?
             | Like say you use a piano that uses AI to generate
             | artificial piano sounds, but you create and play the melody
             | yourself: can you get copyright or not?
        
               | jprete wrote:
               | IANAL. I think you'd get copyright on the melody and the
               | recording, but not the sound font that the AI created.
        
             | Gormo wrote:
             | It goes without saying that a piece of software can't be a
             | copyright holder.
             | 
             | But the person who uses that software certainly can own the
             | copyright to the resulting work.
        
               | Jensson wrote:
               | If someone else uses the same AI generator software and
               | makes the same piece of music should Google go after them
               | for it? I don't think that would hold in court.
               | 
               | Hopefully this means that AI generated music gets skipped
               | by Googles DRM checks.
        
             | dragonwriter wrote:
             | > If you manually did enough work have the copyright it is
             | fine.
             | 
              | Amount of work is not a basis for copyright. (Kind of
              | work is, though the basis for the "kind" distinction
              | used isn't actually a real objective category, so it's
              | ultimately almost entirely arbitrary.)
        
               | anigbrowl wrote:
               | That could get tricky. A lot of hardware and software
               | MIDI sequencers these days have probabilistic triggering
               | built in, to introduce variation in drum loops,
               | basslines, and so forth. An argument could be made that
               | even if you programmed the sequence and all the sounds
               | yourself, having any randomization or algorithmic
               | elements would make the resulting work ineligible for
               | copyright.
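                | 
                | A toy sketch of that kind of probabilistic triggering
                | (the pattern values here are invented): every pass
                | through the loop can differ, even though a human
                | programmed all of it.
                | 
                |   import random
                | 
                |   # Per-step trigger chance for a hi-hat line.
                |   HATS = [1.0, .3, .8, .3, 1.0, .3, .8, .6]
                | 
                |   def render_bar(pattern):
                |       # Steps that actually fire on this pass.
                |       return [i for i, p in enumerate(pattern)
                |               if random.random() < p]
                | 
                |   for bar in range(4):
                |       print("bar", bar, render_bar(HATS))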
        
           | GuB-42 wrote:
           | Even if it is fully AI-generated, this requirement seems off
           | compared to the other ones.
           | 
           | In all of the other cases, it can be deceiving, but what is
           | deceiving in synthetic music? There may be some cases where
           | it is relevant, like when imitating the voice of a famous
           | singer, but other than that, music is not "real", it is work
           | coming from the imagination of its creator. That kind of
            | thing is already dealt with by copyright; attribution is
            | a common requirement, and one that YouTube already
            | enforces (how it does that is a different matter).
        
             | slowfox wrote:
              | From a Google/Alphabet perspective it could also be
              | valuable to distinguish between "original" and "AI-
              | generated" music, for the purpose of a cleaner database
              | to train their own music generation models?
        
           | zuminator wrote:
           | In one of the examples, they refer to something called "Dream
           | Track"
           | 
           | > _Dream Track in Shorts is an experimental song creation
           | tool that allows creators to create a unique 30-second
           | soundtrack with the voices of opted-in artists. It brings
           | together the expertise of Google DeepMind and YouTube's most
           | innovative researchers with the expertise of our music
           | industry partners, to open up new ways for creators on Shorts
           | to create and engage with artists._
           | 
           | > _Once a soundtrack is published, anyone can use the AI-
           | generated soundtrack as-is to remix it into their own Shorts.
           | These AI-generated soundtracks will have a text label
           | indicating that they were created with Dream Track. We're
           | starting with a limited set of creators in the United States
           | and opted-in artists. Based on the feedback from these
           | experiments, we hope to expand this._
           | 
           | So my impression is they're talking about labeling music
           | which is derived from a real source (like a singer or a band)
           | and might conceivably be mistaken for coming from that
           | source.
        
         | AnthonyMouse wrote:
         | These rules have really nothing to do with AI. They're trying
         | to impose a ban on deceit.
        
           | ajross wrote:
           | That's like saying speed limit signs have really nothing to
           | do with cars, they're trying to impose a ban on collision
           | velocity. Which is true, but only speciously, as the rule
           | exists only because motor vehicles made it so easy to go
           | fast.
        
             | AnthonyMouse wrote:
             | They don't have anything specifically to do with cars. They
             | apply equally to motorcycles, trucks and anything else that
             | could go that fast. Get pulled over in a tank or a
             | hovercraft and try to tell the officer that you can't have
             | been speeding because it isn't a car.
             | 
             | Should we like deepfakes any better if they're created by a
             | nation state using pre-AI Hollywood production technology,
             | or "by hand" with Photoshop etc.? If 3D printers get better
             | so that anybody can 3D print masks you can wear to
             | convincingly look like someone else and then record
             | yourself on camera, would you expect a different set of
             | rules for that or are we talking about the same kind of
             | problem?
        
               | ajross wrote:
               | You missed the analogy, so I'll spell it out: before we
               | had cars[1], we couldn't go fast on roads, and there were
               | no speed limit signs. Before we had AI, we couldn't
               | deceive people with easy fakes, and so there was no need
               | to regulate it. Now we do, and there is, and YouTube did.
               | 
               | Trying to characterize this as not related to AI just
               | isn't adding to the discussion. Clearly it is a response
               | to the emergence of AI fakes.
               | 
               | [1] And all the other stuff you list
        
               | AnthonyMouse wrote:
               | Trying to shovel in "and all of the other stuff" breaks
               | the analogy though. Misinformation isn't _new_. Image gen
               | is hardly the first time you could create a fictional
                | depiction of something. It's not even the first time you
               | could do it with commonly available tools. It's just the
               | moral panic du jour.
               | 
               | YouTube did this because the EU passed a law about it.
               | The EU passed a law about it because of the moral panic,
               | not because the abstract concept of deception was only
               | recently invented.
               | 
               | It's like having cars already, and speed limits, and then
               | someone invents an electric car that can accelerate from
               | 0 to 200 MPH in 5 seconds, so the government passes a new
               | law with some arbitrary registration requirements to
               | satisfy Something Must Be Done.
        
               | anigbrowl wrote:
               | Pedantic but, I hope, amusing:
               | https://www.smithsonianmag.com/smart-news/when-president-
               | uly...
        
           | speff wrote:
           | From your comment's tone, it seems like this is supposed to
           | be a bad thing. The only people who would be upset about this
           | are folks who are trying to pass generated content off as
           | real. I'm sorry if I don't have much sympathy for them.
        
             | AnthonyMouse wrote:
             | "You have to tell people if you're lying" isn't a stupid
             | rule because lying is good, it's a stupid rule because
             | liars can lie about lying and proving it was the original
             | problem.
        
               | speff wrote:
               | The problem is that there wasn't a rule that could be
               | used to take a misleading video down. Now there is.
               | 
               | Finding out a video is maliciously fabricated is a
               | different problem.
        
               | AnthonyMouse wrote:
               | > The problem is that there wasn't a rule that could be
               | used to take a misleading video down.
               | 
               | Of course there was. The community guidelines have
               | prohibited impersonation and misinformation for years.
        
               | speff wrote:
               | From the report button on a youtube video: Misinformation
               | - Content that is misleading or deceptive with serious
               | risk of egregious harm.
               | 
               | That sounds like a quagmire of subjectivity to enforce.
               | You can argue whether generated content was created to
               | mislead or whether it would cause _egregious_ harm.
               | 
               | Now there's no more arguing. Is it generated or is it
               | real - and is it marked if generated? I still fail to see
               | the downside here.
        
               | AnthonyMouse wrote:
               | You're acting like this is a court with a judge. The
               | company makes up subjective rules and then subjectively
               | enforces them. There was never any arguing to begin with,
               | they just ban you if they don't like you, or at random
               | for no apparent reason, and you have no recourse.
               | 
               | > Now there's no more arguing. Is it generated or is it
               | real - and is it marked if generated? I still fail to see
               | the downside here.
               | 
               | How is this supposed to lead to _less_ arguing? If there
               | was an easy way to tell if something is AI-generated then
                | you wouldn't need the user to tag it. When there isn't,
               | now you have to argue about whether it is or not -- or if
               | it _obviously_ is, whether it then has to be tagged,
                | because it obviously is and they've given that as an
               | exception.
        
               | speff wrote:
               | Youtube isn't some mom n' pop operation - they do have a
               | review process and make an attempt at following the rules
               | they set. They can ban you for any reason...but they
               | generally don't unless you're clearly breaking a rule.
               | 
               | I'm not getting into the rabbit hole of finding out if it
               | was actually generated - once again, that's a different
               | problem. One that I already mentioned 2 comments back. My
               | point was that there is less subjectivity with this rule.
               | If the content is found to have been generated and isn't
               | marked, then there are clear grounds to remove the video.
               | 
               | What youtube does when there is doubt is not known yet. I
               | don't deal in "well this _could_ lead to this".
        
               | AnthonyMouse wrote:
               | > Youtube isn't some mom n' pop operation - they do have
               | a review process and make an attempt at following the
               | rules they set. They can ban you for any reason...but
               | they generally don't unless you're clearly breaking a
               | rule.
               | 
               | They use (with some irony) AI and other algorithms to
               | determine if you're breaking the rules, often leading to
               | arbitrary or nonsensical results, which the review
               | process frequently fails to address.
               | 
               | > I'm not getting into the rabbit hole of finding out if
               | it was actually generated - once again, that's a
               | different problem.
               | 
               | It isn't a different problem, it's _the_ problem. You
                | want people to label things because otherwise you're not
               | sure, but because of that problem exactly, you have no
               | way of reliably or objectively enforcing the labeling
               | requirement. And you specifically have no way to do it in
               | the cases where it most matters because it's hard to
               | tell.
        
               | pixl97 wrote:
               | >Youtube isn't some mom n' pop operation
               | 
               | At a mom and pop you could at least talk to a person and
                | figure out what happened.
               | 
               | >I don't deal in "well this _could_ lead to this"
               | 
               | Did you not learn from the entire DMCA thing? Remember
                | the thing where piles of tech people warned "Wow, this is
               | going to be used as a weapon to cause problems" and then
               | it was used as a weapon to cause problems.
               | 
               | Well, welcome to the next weapon that is going to be used
               | to cause problems.
        
               | speff wrote:
                | The DMCA implementation is the only thing that saved
                | youtube from getting sued out of existence.
               | And people on the internet don't know what fair-use
               | actually means, so they complain/exaggerate about DMCA
               | takedowns when, surprise, it wasn't actually covered by
               | fair-use.
               | 
               | There's a handful of cases where yt actually messed up w/
               | DMCA and considering the sheer volume of videos they
               | process, I'd say it's actually pretty damn good.
               | 
               | So no, DMCA is not a valid reason to assume youtube will
               | handle this improperly.
        
               | lucianbr wrote:
               | > they just ban you if they don't like you, or at random
               | for no apparent reason, and you have no recourse.
               | 
               | The ban recourse problem is the opposite.
               | 
               | This is the "keep recourse": "this video is obviously
               | bad, but google doesn't feel like taking it down, and
               | there is nothing I can do about it". Now there is, and it
               | can actually go to a court with a judge in the end, if
               | Google is obstinate.
               | 
               | You didn't have a right to be hosted on Google before,
               | and you don't have now. Of course they can ban you as
               | they like. The thing is, they can't host you as they
               | like, if you're breaking this rule.
        
               | AnthonyMouse wrote:
               | > The thing is, they can't host you as they like, if
               | you're breaking this rule.
               | 
               | Except that the rule can be satisfied just by labeling
               | it, and if there are penalties for not labeling but no
               | penalties for labeling then the obvious incentive is to
               | stick the label on everything just in case, causing it to
               | become meaningless.
               | 
               | To prevent that would require prohibiting the label from
               | being applied to things that aren't AI-generated, which
               | is impracticable because now you need 100% accuracy and
               | there is no way to err on the side of caution, but nobody
               | has 100% accuracy. So then the solution would be to
               | _actually_ make everything AI-generated, e.g. by
               | systematically running it through some subtle AI filter,
               | and then you can get back to labeling everything to avoid
               | liability.
        
               | poszlem wrote:
               | Of course, YouTube is well-known for its methodical
               | approach to video removal, strictly adhering to
               | transparent guidelines, rather than deciding based on the
               | "computer says no" principle.
        
               | pixl97 wrote:
               | Knock knock
               | 
               | Who's there
               | 
               | dmcAI takedown notice!
        
               | frumper wrote:
               | It sounds like it could also be used to take down a video
               | someone thinks is fake. Proving it may be easy in some
                | cases, but in others it may be quite difficult.
        
               | mortenjorck wrote:
               | Yes. This is a legislative implementation of the Evil
               | Bit.
        
               | anigbrowl wrote:
               | They can, but it's often possible to prove it later. If
               | you have a rule against lying and it's retroactively
               | discovered to have been broken, then you already have the
               | enforcement mechanism in place.
               | 
               | Really, your argument can be generalized to 'why have
               | laws at all, because people will break them and lie about
               | it'.
        
               | AnthonyMouse wrote:
               | > They can, but it's often possible to prove it later. If
               | you have a rule against lying and it's retroactively
               | discovered to have been broken, then you already have the
               | enforcement mechanism in place.
               | 
               | It isn't a rule against lying, it's a rule requiring lies
               | to be labeled. From which you get nothing useful that you
               | couldn't get from a rule against lying, because you'd
               | need the same proof for either one.
               | 
               | Meanwhile it becomes a trap for the unwary because
               | innocent people who don't understand the complicated
               | labeling rules get stomped by the system without
               | intending any malice.
               | 
               | > Really, your argument can be generalized to 'why have
               | laws at all, because people will break them and lie about
               | it'.
               | 
               | The generalization is that laws against not disclosing
               | crimes are pointless because the penalty for the crime is
               | already at least as severe as the penalty for not
               | disclosing it and you'd need to prove the crime to prove
               | the omission. This is, for example, why it makes sense to
               | have a right against self-incrimination.
        
               | umanwizard wrote:
               | There are already various situations where lying is
               | banned: depending on the circumstances, lying might count
               | as perjury, fraud, false advertising, etc. It seems silly
               | to suggest that these laws serve no purpose.
        
               | kimixa wrote:
               | The same reason you have to check a box saying you're not
               | a terrorist when entering the USA. It gives them a legal
               | basis to actually _do_ something about it when found out
               | from other means.
        
               | jameshart wrote:
               | Also, because _telling fictional stories_ has always been
               | one of the most common applications of video technology.
        
             | nonrandomstring wrote:
             | Won't it be equally problematic for creators, particularly
             | reporters/journalists, whose real content is misidentified
             | as fake?
             | 
             | This divide is obviously going to play out on two sides.
             | 
             | Proving authenticity may turn out to be as difficult as
             | proving fakeness. People will use this maliciously to flag
             | and censor content they dislike.
        
           | Gormo wrote:
           | Deceit usually implies some sort of intent to defraud. I'm
           | not sure using software to generate background music for a
           | video fits that description.
        
             | jachee wrote:
             | RIAA has entered the chat.
             | 
             | "It defrauds us of our hard-earned middle-man cut."
        
           | mock-possum wrote:
           | Requiring labeling / disclosure is not the same as banning.
        
             | jahewson wrote:
             | In this case it is, because labelled deceit is not
             | deceptive.
        
           | ehsankia wrote:
           | It's a label, not a ban. Just like sponsored content is not
           | banned, but must be disclosed.
        
         | dheera wrote:
         | > * Showing a realistic depiction of a tornado or other weather
         | events moving toward a real city that didn't actually happen
         | 
         | > * Making it appear as if hospital workers turned away sick or
         | wounded patients
         | 
         | > * Depicting a public figure stealing something they did not
         | steal, or admitting to stealing something when they did not
         | make that admission
         | 
         | Considering they own the platform, why not just ban this type
         | of content? It was possible to create this content before "AI".
        
           | dotnet00 wrote:
           | There are many cases where such content is perfectly fine.
           | After all, YouTube doesn't claim to be a place devoted to
           | non-fiction only. The first one is an especially common thing
           | in fiction.
        
             | ipaddr wrote:
              | Does that mean movie clips will need to be labeled?
        
           | samatman wrote:
           | The third one could easily be satire. Imagine that a
           | politician is accused of stealing from the public purse, and
           | issues a meme-worthy press statement denying it, and someone
           | generates AI content of that politician claiming not to have
           | stolen a car or something using a similar script.
           | 
           | Valid satire, fair use of the original content: parody is
           | considered transformative. But it should be labeled as AI
           | generated, or it's going to escape onto social media and
           | cause havoc.
           | 
           | It might anyway, obviously. But that isn't a good reason to
           | ban free expression here imho.
        
         | nashashmi wrote:
          | What about those voiceovers on TikTok that are computer-
          | generated but sound quite real and are often reading some
          | script? Do creators have to disclose that those voices are
          | artificially produced?
        
         | kazinator wrote:
         | > _Synthetically generating music_
         | 
         | Yagoddabekidding. That could cover any piece of music created
         | with MIDI sequencing and synthesizers and such.
        
       | jjcm wrote:
        | I think it's smart to start trying things here. This has
        | infinite flaws, but from a business and learning standpoint
        | it's a step in the right direction. Over time we're going to
        | both learn and decide what is and isn't important to
        | designate as "AI" - Google's approach here at least breaks
        | this into rules of what "AI" things are important to label:
       | 
       | * Makes a real person appear to say or do something they didn't
       | say or do
       | 
       | * Alters footage of a real event or place
       | 
       | * Generates a realistic-looking scene that didn't actually occur
       | 
       | At the very least this will test each of these hypotheses, which
       | we'll learn from and iterate on. I am curious to see the legal
       | arguments that will inevitably kick up from each of these - is
       | color correction altering footage of a real event or place? They
       | explicitly say it isn't in the wider description, but what about
       | beauty filters? If I have 16 video angles, and use photogrammetry
       | / gaussian splatting / AI to generate a 17th, is that a
       | realistic-looking scene that didn't actually occur? Do I need to
       | have actually captured the photons themselves if I can be 99%
       | sure my predictions of them are accurate?
       | 
       | So many flaws, but all early steps have flaws. At least it is a
       | step.
        
         | jjcm wrote:
         | One black hat thing I'm curious about though is whether or not
         | this tag can be weaponized. If I upload a real event and tag it
         | as AI, will it reduce user trust that the real event ever
         | happened?
        
           | AnthonyMouse wrote:
           | The AI tags are fundamentally useless. The premise is that it
            | would prevent someone from misleading you into thinking that
           | something happened when it didn't, but someone who wants to
           | do that would just not tag it then.
           | 
           | Which is where the real abuse comes in: _You_ post footage of
           | a real event and _they_ say it was AI, and ban you for it
           | etc., because what actually happened is politically
           | inconvenient.
           | 
           | And the only way to prevent that would be a reliable way to
           | detect AI-generated content which, if it existed, would
           | obviate any need to tag anything because then it could be
           | automated.
        
             | anigbrowl wrote:
              | Not convinced by this. Camera sensors have measurable
              | individual noise; if you record RAW, that won't be
              | fakeable without prior access to the device. You'd have a
             | straightforward case for defamation if your real footage
             | were falsely labeled, and it would be easy to demonstrate
             | in court.
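              | 
              | For reference, this technique is usually called PRNU
              | (photo-response non-uniformity) fingerprinting. A rough
              | numpy sketch of the idea; a real forensic pipeline uses
              | a much more careful denoiser than a Gaussian blur:
              | 
              |   import numpy as np
              |   from scipy.ndimage import gaussian_filter
              | 
              |   def residual(img):
              |       # Crude stand-in for a forensic denoiser.
              |       return img - gaussian_filter(img, sigma=1.5)
              | 
              |   def fingerprint(imgs):
              |       # Average residuals of known shots from one
              |       # camera to estimate its sensor noise pattern.
              |       return np.mean([residual(i) for i in imgs],
              |                      axis=0)
              | 
              |   def similarity(img, fp):
              |       r = residual(img)
              |       r = r - r.mean()
              |       f = fp - fp.mean()
              |       return float((r * f).sum() /
              |                    (np.linalg.norm(r) *
              |                     np.linalg.norm(f)))
              | 
              | A high correlation against a camera's fingerprint is
              | evidence the footage came from that sensor.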
        
               | AnthonyMouse wrote:
               | > Camera sensors have measurable individual noise, if you
               | record RAW that won't be fakeable without prior access to
               | the device.
               | 
               | Which doesn't help you unless non-AI images are all
               | required to be RAW. Moreover, someone who is trying to
               | fabricate something could obviously obtain access to a
               | real camera to emulate.
               | 
               | > You'd have a straightforward case for defamation if
               | your real footage were falsely labeled, and it would be
               | easy to demonstrate in court.
               | 
               | Defamation typically requires you to prove that the
               | person making the claim knew it was false. They'll, of
               | course, claim that they thought it was actually fake.
               | Also, most people don't have the resources to sue YouTube
               | for their screw ups.
        
               | Gregaros wrote:
               | DMCA abuse begs to differ.
        
               | VelesDude wrote:
               | Unfortunately video codecs love to crush that fine
               | detail.
        
               | nomel wrote:
              | Most consumer cameras require digging through menus to
              | enable RAW, because dealing with RAW is a truly
              | terrible user experience. The vast majority of
              | image/video sensors out there don't even support RAW
              | recordings out of the box.
        
             | MBCook wrote:
             | That's what I was thinking. Why don't we just ask all scam
             | videos to label themselves as scams while we're at it?
             | 
              | It's nice that honest users will do that, but they're
              | not really the problem, are they?
        
             | mazlix wrote:
              | I think you have it a bit backwards. If you want to publish
             | pixels on a screen there should be no assumption that they
             | represent real events.
             | 
             | If you want to publish proof of an event, you should have
             | some pixels on a screen along with some cryptographic
             | signature from a device sensor that would necessitate
              | at least a big corporation like Nikon / Sony / etc. being
             | "in on it" to fake.
             | 
             | Also since no one likes RAW footage it should probably just
             | be you post your edited version which may have "AI"
             | upscaling / de-noising / motion blur fixing etc, AND you
             | can post a link to your cryptographically signed verifiable
             | RAW footage.
             | 
              | Of course there are still ways around that, like your footage
             | could just be a camera being pointed at an 8k screen or
             | something but at least you make some serious hurdles and
             | have a reasonable argument to the video being a result of
             | photons bouncing off real objects hitting your camera
             | sensor.
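              | 
              | A minimal sketch of that flow (file name and key
              | provisioning invented; in practice the private key
              | would live inside the camera and its public half would
              | be certified by the manufacturer):
              | 
              |   import hashlib
              |   from cryptography.hazmat.primitives.asymmetric \
              |       import ed25519
              | 
              |   cam_key = ed25519.Ed25519PrivateKey.generate()
              | 
              |   raw = open("clip0001.raw", "rb").read()
              |   digest = hashlib.sha256(raw).digest()
              |   sig = cam_key.sign(digest)  # publish with the RAW
              | 
              |   # Verifier: re-hash the posted RAW and check it.
              |   cam_key.public_key().verify(
              |       sig, hashlib.sha256(raw).digest())
              |   print("RAW matches the camera's signature")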
        
               | AnthonyMouse wrote:
               | > If you want to publish proof of an event, you should
               | have some pixels on a screen along with some
               | cryptographic signature from a device sensor that would
               | necessitate atleast a big corporation like Nikon / Sony /
               | etc. being "in on it" to fake.
               | 
               | At which point nobody could verify anything that happened
               | with any existing camera, including all past events as of
               | today and all future events captured with any existing
               | camera.
               | 
               | Then someone will publish a way to extract the key from
               | some new camera model, both allowing anyone to forge
               | anything by extracting a key and using it to sign
               | whatever they want, and calling into question everything
               | actually taken with that camera model/manufacturer.
               | 
               | Meanwhile cheap cameras will continue to be made that
               | don't even support RAW, and people will capture real
               | events with them because they were in hand when the
               | events unexpectedly happened. Which is the most important
               | use case because footage taken by a staff photographer at
               | a large media company with a professional camera can
               | already be authenticated by a big corporation,
               | specifically the large media company.
        
               | robertlagrant wrote:
               | I think at minimum YouTube could tag existing footage
               | uploaded before 2015 as very unlikely to be AI generated.
        
               | miki123211 wrote:
               | also the three letter agencies (not just from the US)
               | will have access to private keys of at least some
               | manufacturers, allowing them to authenticate fake events
               | and sow chaos by strategically leaking keys for cameras
               | that recorded something they really don't like.
        
               | miki123211 wrote:
                | > that would necessitate at least a big corporation like
               | Nikon / Sony etc. being "in on it" to fake
               | 
               | Or an APT (AKA advanced persistent teenager) with their
               | parents camera and more time than they know what to do
               | with.
        
               | teaearlgraycold wrote:
               | I worked in device attestation at Android. It's not
               | robust enough to put our understanding of reality in.
               | Fine for preventing API abuse but that's it.
        
             | kube-system wrote:
             | > The premise is that it would prevent someone from
             | misleading you by thinking that something happened when it
             | didn't, but someone who wants to do that would just not tag
             | it then.
             | 
             | And when they do that, the video is now against Google's
             | policy and can be removed. That's the point of this policy.
        
           | sangnoir wrote:
           | I suspect we're headed into a world of attestation via
           | cryptographically signed videos. If you're the sole witness,
           | then you can reduce the trust in the event, however, if it's
           | a major event, then we can fall back on existing news-
           | gathering machinery to validate and counter your false
           | tagging (e.g. if a BBC camera captured the event, or there is
           | some other corroboration & fact checking).
        
           | JohnFen wrote:
           | I fear that we're barrelling fast toward a future when nobody
           | can trust anything at all anymore, label or not.
        
         | 4ndrewl wrote:
         | It's to comply with the EU AI regulatory framework. This step
         | is just additional cost they wouldn't have voluntarily burdened
         | themselves with.
        
         | okdood64 wrote:
         | I know people on HN love to hate on Google, but at least
         | they're a major platform that's TRYING. Mistakes will be made,
          | but let's at least attempt to move forward.
        
         | uconnectlol wrote:
         | how is giving into the FUD a step in the right direction? this
          | policy is big musk energy, although google has had no coherent
          | energy ever other than the one thing they can do
          | consistently, which is be a corporation.
         | 
         | remember your little cookie concerns you "superusers" had? what
         | was the result of that? every website just broke without JS for
         | 10 years (until recently, now they just omit the popup without
         | JS). and if you DO have JS, you have to wait an extra 10-god
         | knows how long seconds to wait for the page to load and give
         | you some completely bullshit menu that is engineered to steer
         | you to "accept all" just as I "predicted" the very nanosecond i
         | heard HNers saying how good it is to make the government make
         | companies cater to privacy pedants.
         | 
         | if AI can social engineer you, you can get SEd in any other
         | way. AI stuff added absolutely zero to the "threat model" (i
         | have to use this word to fit into you twitter infosec dudes)
         | other than an increased rate of exploitation.
         | 
         | government approved truths can be served without the
         | possibility of AI influence. if you want government attestation
         | to a cryptographic root key just ask for it, don't beat around
         | the bush. we could have had that 30 years ago too. like when
         | you enter america they give you the root public key kind of
         | thing. that's of course not very useful from a cryptoanarchist
         | perspective, but it's still infinitely better than whatever you
         | people will come up with.
         | 
          | before that microscopic period in history when people were
          | dumb enough to automatically consider video / audio as
          | absolute proof, there was only word of mouth.
         | 
         | i swear you people don't even think things through for even a
         | second.
         | 
         | computer security is very simple, you google some software and
         | it gives you a virus every now and then, as one would naturally
         | expect when downloading from thousands of literal random
         | authors. nothing about that has changed since the 90s. you
         | people always allude to some force beyond this and never
         | address the actual problem, but alas that would require
          | technical competence as well as management skills.
        
         | tkiolp4 wrote:
         | Basically, Google decides what's real and what's not. Cool.
        
           | pizzafeelsright wrote:
           | For at least a dozen years it would seem.
        
       | skybrian wrote:
       | I'm reminded of how banks require people to fill out forms
       | explaining what they're doing, where it's expected that criminals
       | will lie, but this is an easy thing to prosecute later after
       | they're caught.
       | 
       | Could a similar argument be applied here? It doesn't seem like
       | there is much in the way of consequences for lying to Google. But
       | I suppose they have other ways of checking for it, and catching
       | someone lying is a signal that makes the account more suspicious.
        
       | wslh wrote:
       | ELI5: what would be the difference if you use AI or it is a new
       | release of Star Wars? I understand that AI does not need proof-
       | of-work and that is the difference?
        
         | mvdtnz wrote:
         | Was this comment generated by the world's worst LLM? No idea
         | what you're asking.
        
           | wslh wrote:
            | I am not an AI, I am a person, and on HN I expect to be
            | treated well.
        
       | xyst wrote:
       | Self reporting. How useless. Wonder what legislation they are
        | minimally complying with.
        
       | arduanika wrote:
       | Is there a word missing from the title here? Requires whom?
        
         | jbiason wrote:
         | Same. For a second, I thought YouTube made a rule that YouTube
         | is now required to flag the AI videos created by YouTube.
        
         | samatman wrote:
         | The title was editorialized, which people do far more often
         | than they should. The original title, with the domain name next
         | to it, would have been fine.
        
       | yoavz wrote:
       | Most interesting example to me: "Digitally altering audio to make
       | it sound as if a popular singer missed a note in their live
       | performance".
       | 
        | This seems oddly specific: it's the inverse of what happened
        | with Alicia Keys at the recent Super Bowl. As Robert
       | Komaniecki pointed out on X [1], Alicia Keys hit a "sour note"
       | which was silently edited by the NFL to fix it.
       | 
       | [1] https://twitter.com/Komaniecki_R/status/1757074365102084464
        
         | frays wrote:
         | This is a great example as a discussion point, thank you for
         | sharing.
         | 
          | I will be coming back to this video in several months' time to
         | check whether the "Altered or synthetic content" tag has
         | actually been applied to it or not. If not, I will report it to
         | YouTube.
        
           | ryandrake wrote:
           | Yea, it's a really super example!
           | 
            | However, autotune has existed for decades. Would it have been
           | better if artists were required to label when they used
           | autotune to correct their singing? I say yes but reasonable
           | people can disagree!
           | 
           | I wonder if we are going to settle on an AI regime where it's
           | OK to use AI to deceptively make someone seem "better" but
           | not to deceptively make someone seem "worse." We are entering
           | a wild decade.
        
         | elpocko wrote:
         | Digitally altering audio to make it sound as if a popular
         | singer hit a lot of notes is still fine though.
        
           | yoavz wrote:
           | Correct, it's the inverse that requires disclosure by
           | Youtube.
           | 
           | Still, I find it interesting. If you can't synthetically
           | alter someone's performance to be "worse", is it OK that the
            | NFL synthetically altered Alicia Keys' performance to be
           | "better"?
           | 
           | For a more consequential example, imagine Biden's marketing
           | team "cleaning up" his speech after he has mumbled or trailed
           | off a word, misleading the US public during an election year.
           | Should that be disclosed?
        
         | post_break wrote:
         | Oh no, is that going to mess up my favorite genre called
         | shreds? https://www.youtube.com/watch?v=1nAhQOoJTIA
        
       | RobotToaster wrote:
       | If it's realistic who will know?
        
         | simion314 wrote:
         | >If it's realistic who will know?
         | 
         | Look at "realistic" photos , it is easy for someone with
         | experience to spot issues, the hangs/fingers are wrong, shadows
         | and light are wrong, hair is weird, eyes have issues. In a
         | video there are much more information so much more places to
         | get things wrong, making it pass this kind of test will be a
         | huge job so many will not put the effort.
        
       | lampiaio wrote:
       | AI that is indistinguishable from reality is a certainty for the
       | not-so-distant future.
       | 
        | That future _will_ come, and it will come sooner than anyone's
        | expecting.
       | 
       | Yet all I see is society trying to prevent the inevitable from
       | installing itself (because it's "scary", "dangerous", "undermines
       | the very pillars of society" etc.), instead of preparing itself
       | for when the inevitable occurs.
       | 
       | People seem to have finally accepted we can't put the genie back
       | in the bottle, so now we're at the stage where governments and
       | institutions are all trying to look busy and pass the image of
       | "hey, we're _doing_ something about it, ok? You can feel safe ".
       | 
       | Soon we will be forced to accept that all that wasted effort was
       | but a futile attempt at catching a falling knife.
       | 
       | Maybe the next idiom in line will be "crying over spilled milk",
       | because could someone point me to what is being done in terms of
       | "hey, let's start by directly assuming a world in which anyone
       | can produce unrestricted, genuine-looking content will soon come
       | and there's no way around it -- what then?"
       | 
       | All I see is a meteor approaching and everyone trying to divert
       | it, but no one actually preparing for when it does hit. Each day
        | that passes I'm more certain that we will look at each other
       | like fools, asking ourselves "why didn't we focus on preparing
       | for change, instead of trying to prevent change"?
        
         | skobes wrote:
         | What sort of preparations do you recommend?
        
           | lampiaio wrote:
           | There's a saying in my local language that people usually say
            | to someone who's going through a breakup or an unfair
            | situation:
           | 
           |  _" Accept it, it hurts less"_.
           | 
           | I'm not saying it makes the actual situation any better; it
           | obviously doesn't. But anyone can feel the rarefied AI panic
           | in the air growing thicker by the minute, and panic will only
           | make the situation worse both before and after absolute
           | change takes place.
           | 
           | When we don't accept incoming change before it arrives, we
           | surely are forced to accept it _after_ it arrives, at a much
           | higher price.
           | 
           | You asked about preparations: prepare yourself to see
           | governments try (and fail) to regulate what processing power
           | can be acquired by consumers. Prepare yourself for the
           | serious proposal of "truth-checking agencies" with certified
           | signatures that ensure _" this content had its chain of
           | custody verified as authentic from its CMOS capture up to its
           | encoded video stream"_, in which a lot of time and effort
           | will be wasted (there's already people replying about this,
           | saying metadata and/or encryption will come to the rescue via
           | private/public keys. Supposedly no will would ever film a
           | screen!).
           | 
           | The above might seem an exaggeration, but ask yourself: the
           | YouTube guidelines this post is about, the recent EU
           | regulation... do you think those are enough? Of course
           | they're not. They will keep trying to solve the problem from
           | the wrong end until they are (we are) forced to accept
           | there's nothing that can be done about it, and that it is us
           | who need to adapt to live in such a world.
           | 
           | Enjoy the ride, I suppose.
        
         | alex_duf wrote:
         | What would you do then if you could prepare for a world where
          | it's already here? Where the meteor already hit, to use your
         | own metaphor.
        
         | cush wrote:
         | There are some interesting hardware solutions from camera
         | makers that provide provably authentic metadata and watermarks
          | to videos and images - mostly useful for journalists, but
          | soon consumers will expect this to be surfaced on social
          | media platforms and by those they follow. There really
         | are genuinely valuable things happening in this space.
        
           | samatman wrote:
           | This will always be spoofable by projecting the AI content
            | onto the sensor and playing it to the microphone, which
            | will give the spurious content a veneer of authenticity.
            | This is within reach of a talented malicious amateur, and
            | would be trivial for nation-state actors to do at scale.
        
             | lampiaio wrote:
             | Thank you for pointing that out, I want to reply to
             | everyone here but I don't think I have it in me to fight
             | this battle. It seems my initial message of "have we
             | questioned ourselves what we'll do should the
             | countermeasures fail?" fell on deaf ears. I asked a very
             | simple question: "what will we do / should we do when faced
             | with a world in which no content can be trusted as true",
             | and most replies just went on to list the countermeasures
             | being worked on. I will follow my own advice and simply
             | accept that is how the band plays.
        
             | cush wrote:
             | Of course. I don't think anyone is going to be arguing that
              | content captured by these cameras is real; it's that
              | the content was captured by the owner of that specific
              | camera.
             | There always needs to be some aspect of trust, and the
             | value comes in connecting that with a trusted identity. Eg
             | one couldn't embed the CSPAN watermarks from a non-CSPAN
             | camera.
        
         | survirtual wrote:
         | We've been preparing for a while? It's all that work people
          | have been doing for years with asymmetric cryptography, ECC,
         | and tech like what happens during heavy rain downpours and that
         | coin with a bit in front of it.
         | 
         | These are all the proper preparation for AI. AI can't generate
         | a private key given a public key. AI can't generate the
         | appropriate text given a hash.
         | 
         | So we build a society upon these things AI can't do.
         | 
         | It has been a good run. We have done things like the tried and
         | true ink stamping to verify documents. We have a labyrinth of
         | bureaucracy for every little activity, mostly because it is the
         | way that has always worked. It has surely been nice for the
         | "administration" to sit around and sip lemonade in their
         | archaic jobs. It has been nice to have incompetent people with
         | no vision being appointed to high places for being born into
         | the right families connected with the right people. That gravy
         | train was surely a joy for those who were a part of it.
         | 
         | Sadly, it won't work anymore. We will need competent people now
         | that actually care.
         | 
         | We need everything to be authenticated now with digital
         | signatures.
         | 
         | It is not even that difficult a problem to solve. The existing
         | systems are far more complex, far more prone to error, far more
         | expensive, and far more difficult to navigate.
         | 
         | AI is giving us an opportunity to evolve. It is a time for
         | celebration. Society will be faster, more efficient, more
         | secure, and much more fun with generative content. AIs will
         | produce official AI-signed content, and unsigned content.
         | Humans will produce official human signed content, and unsigned
         | content. Some AIs will use humans to sign content to subvert
         | systems. But all of this pales in comparison to the fraud,
         | waste, and total abuse of the current system.
        
         | refulgentis wrote:
Forgive me on an initial reading: it is hard to have a
          | nuanced discussion on this stuff without coming off like an
          | uncaring caricature of one of two stereotypes, or looking
          | like you're attacking your interlocutor. When I write these
          | out, it's free association, like a diary entry, not a
          | critique of your well-reasoned and 100% accurate take.
         | 
         | Personal thoughts:
         | 
- we're already a year past the point where it was widely known
          | you can generate whatever you want, and get it to a reasonable
          | "real" threshold with less than a day's worth of work.
          | 
          | - the impact is likely to be significantly muted, rather than
          | an exponential increase upon a 2020 baseline; professionals
          | were capable of accomplishing this with a couple orders of
          | magnitude more manual work for at least a decade.
         | 
         | - in general, we've suffered more societally from
         | histrionics/over-reactions to being bombarded with the same
         | messaging
         | 
         | - it thus should end up being _net good_, in that a skeptic has
         | a 100% accurate argument for requiring more explanation than
         | "wow look at this!"
         | 
         | - I expect that being able to justify / source / explain things
         | will gain significant value relative to scaled up distributors
         | giving out media someone else gave them without any review.
         | 
- something I've noticed the last couple years is people
          | __hate__ looking stupid. __Hate__. They learn extremely quickly
          | to refactor knowledge they think they have once confronted in
          | public, even by the outgroup, as long as they're a non-
          | extremist.
         | 
After writing that out, I guess my tl;dr, as of this moment
          | and mood, is: there will be negligible negative effects, we
          | already reached a nadir of unquestioned BS sometime between
          | 2010 and 2024, and a baseline where _anyone_ can easily BS
          | will lead to wide acceptance of skeptical reactions, even
          | within ingroups.
         | 
         | God I hope I'm right.
        
           | lampiaio wrote:
I like the outlook you build through your observations, and
            | I acknowledge that the conclusion you arrive at is
            | plausible. I do, however, put a heavier weight on your first
            | point, because I see what we have today in terms of
            | image/video generation as very rudimentary compared to what
            | we'll have in a couple years. A day's worth of work for a
            | 100% convincing, AI-generated video immune to the most
            | advanced forensics? We'll soon have it instantaneously.
           | 
Thank you for the preface you wrote. I completely understand
            | your point about how easy it is to sound like a contrarian
            | online; I'm afraid my writing style doesn't help much on
            | that front.
        
       | cush wrote:
       | Ah it's better than nothing!
        
         | paulpauper wrote:
Scammers have been making fake content on YouTube since its
          | founding. And YouTube has never so much as pretended to care
          | about doing anything about it.
        
       | qwertox wrote:
       | How about first removing those crypto-scam channels which pop up
       | whenever something big happens at SpaceX.
        
       | yoavz wrote:
       | I am not envious of the policy folks at Youtube who will have to
       | parse out all the edge cases over the next few years. They are up
       | against a nearly impossible task.
       | 
       | https://novehiclesinthepark.com/
        
         | rchaud wrote:
         | It's not like there are any real consequences if they don't get
         | it right. Deepfake ads already exist on YT.
        
       | 111111101101 wrote:
       | We can't have the proles misrepresenting reality the same way
       | that the rich have been doing for the last century. Rules for
       | thee but not for me.
        
         | paulpauper wrote:
         | We cannot have fake content on youtube now! No way.
        
       | rchaud wrote:
       | Google of yore would have offered a 'not AI' type of filter in
       | their advanced search.
       | 
Present day Google is too busy selling AI shovels to quell Wall
        | St's grumbling to even consider what AI video will do to the
        | already bad 'needle in a haystack' nature of search.
        
       | duxup wrote:
Going to be a long road with this kind of thing, but forums and
        | places I visit often already have "no AI submissions" type rules,
        | and from what I've seen they have been received pretty well.
       | 
       | Are they capable of enforcing it? I don't know, but it's clear
       | users understand / don't like the idea of being awash in a sea of
       | AI content at this point.
       | 
Whether they can actually avoid it remains to be seen.
        
       | asciimov wrote:
Will this cover all those product "review" videos that are
        | clearly just reading ad copy or Amazon reviews?
        
       | dmje wrote:
I suspect it'll get me downvoted, but this newish trend of
        | dropping the object drives me nuts. It's "YouTube now requires
        | YOU to", not "YouTube now requires to". It's lazy, it's
        | grammatically incorrect, and it doesn't scan.
        
       | dwighttk wrote:
       | Title seems to be missing "creators"
        
       | elif wrote:
Based on how it takes them 48+ hours to take down fake Elon Musk
        | crypto-doubling scams that get reported, I doubt this will help
        | anyone.
        
       | idatum wrote:
       | > Altering footage of real events or places: Such as making it
       | appear as if a real building caught fire, or altering a real
       | cityscape to make it appear different than in reality.
       | 
What about the picture you see before clicking on the actual
        | video? This article of course addresses the content of the
        | videos, but I can't help but look at the comically cartoonish,
        | overly dramatic -- clickbait -- picture preview of the video.
       | 
       | For example, there is a video about a tornado that passed close
       | to a content author and the author posts video captured by their
       | phone. In the preview image, you see the author "literally
       | getting sucked into a tornado". Is that "altered and synthetic
       | content"?
        
       | Devasta wrote:
I hope this allows me to filter them entirely. If it wasn't worth
        | your time creating it, it's not worth my time looking at it.
        | 
        | I am generally very skeptical of these tags though; I suspect a
        | lot of them are in place to stop an AI consuming its own output
        | rather than out of any concern for the end user.
        
         | munificent wrote:
_> If it wasn't worth your time creating it, it's not worth my
          | time looking at it._
         | 
         | God, I wish I could beam this sentence directly into the brain
         | of every single person breathlessly excited about using gen AI
         | to be "a creative".
        
       | paul7986 wrote:
All websites and all for-profit AI companies must add and then
        | display AI watermarks; otherwise nothing can truly be believed
        | online, or offline either.
        
       | thomastjeffery wrote:
       | It's as if everyone in the world just forgot that fraud has been
       | illegal the whole time.
        
       | lawlessone wrote:
So how do I report the ones that don't?
        | 
        | I have a whole lot of Shorts content to report...
        
       | RyEgswuCsn wrote:
This is somewhat expected, to be honest. I am rather pessimistic
        | about future solutions to such issues, though. I can see only one
        | possibility going forward: camera sensor manufacturers will either
        | voluntarily or forcibly implement hardware that injects
        | cryptographic "watermarks" into the videos produced by their
        | cameras. Any videos that do not bear valid watermarks are
        | considered potentially "compromised" by GenAI.
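        | 
        | A sketch of what the capture side of such a scheme could look
        | like, complementing the verification sketch earlier in the page
        | (Python's third-party "cryptography" package; the per-device key
        | and the pipeline around it are hypothetical):
        | 
        |     import hashlib
        |     from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        |         Ed25519PrivateKey,
        |     )
        | 
        |     # In a real camera this key would live in a secure element,
        |     # not in software.
        |     device_key = Ed25519PrivateKey.generate()
        | 
        |     def watermark(video_bytes: bytes) -> bytes:
        |         # Return a detached signature shipped alongside the file;
        |         # anything lacking one is treated as "potentially GenAI".
        |         return device_key.sign(hashlib.sha256(video_bytes).digest())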
        
       | dbg31415 wrote:
       | This will just result in a pop-up before every video, like the
       | cookie warnings, "Viewers should be aware that this video may
       | contain AI-generated or AI-enhanced images." And it'll be so
       | annoying...
        
       | omoikane wrote:
       | > Creators must disclose content that [...] Generates a
       | realistic-looking scene that didn't actually occur
       | 
       | This may spoil the fun in some 3D rendered scenes. For example, I
       | remember there was much discussion on whether a robot throwing a
       | bowling ball was real or not[1].
       | 
       | Part of the problem has to do with all the original tags (e.g.
       | "#rendering3d") being lost when the video spread through various
       | platforms. The same problem will happen with Youtube -- creators
       | may disclose everything, but after a few rounds through reddit
       | and back, whatever disclosure and credit that was in the original
       | video will be lost.
       | 
       | [1] https://twitter.com/TomCoben/status/1146431221876105216
       | 
       | https://twitter.com/TomCoben/status/1147870621713543168
        
       | brikym wrote:
       | YouTube now requires to label their realistic-looking videos made
       | using AI *
       | 
       | * Unless you're a powerful state actor then your videos are
       | always 'real'.
        
       | whoopdedo wrote:
I'd like a content ID system for AI generated media. If someone
        | tries to pass an image off to me as authentic, I can check its
        | hash against a database that will say "this was generated by
        | such-and-such LLM on 18 Mar 2024." Maybe even add a country of
        | origin.
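        | 
        | A toy version of that lookup (the registry and its entries are
        | made up):
        | 
        |     import hashlib
        | 
        |     # Hypothetical registry mapping content hashes to provenance.
        |     REGISTRY = {
        |         "9f86d081...": "generated by such-and-such LLM, 2024-03-18",
        |     }
        | 
        |     def provenance(image_bytes: bytes) -> str:
        |         digest = hashlib.sha256(image_bytes).hexdigest()
        |         return REGISTRY.get(digest, "no provenance record")
        | 
        | Exact hashes break as soon as a platform re-encodes the file,
        | though - the same way omoikane notes above that tags get lost as
        | content spreads - so a real system would need perceptual hashes.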
        
         | zhoujianfu wrote:
         | These guys are doing something sort of in that vein..
         | https://wolfsbane.ai/
        
       | meindnoch wrote:
       | Or else?
        
       | twodave wrote:
       | I've said before that we're entering an age where no online
       | material is truly verifiable without some kind of hardware
       | signing (and even that has its flaws). Public figures will have
       | to sort out this quagmire before things get even uglier than they
       | are. And I really hope that's the biggest problem of the next
       | decade or so, rather than that we achieved AGI and it decided we
       | were inferior.
        
       | airspresso wrote:
No mention of clearly labeling ads made using AI. The deepfake
        | YouTube ads are so annoying. Elon wants to recruit me to his new
        | investment scheme? Yeah right.
        
       | stevage wrote:
       | I predict that this kind of labelling will disappear before long
       | and in a couple of years will look ridiculous.
        
       | micheljansen wrote:
The cynic in me thinks this is just Google protecting their
        | precious training data from getting tainted, but I'm glad their
        | goals align with what's better for consumers for once.
        
       | sheepscreek wrote:
While their intentions are good, the solution isn't. There's a
        | lot they have left to the subjectivity of creators, especially
        | around what counts as "clearly unrealistic".
        
       | scotty79 wrote:
This label will be mostly misleading. Absence of the tag will
        | give a false sense of veracity, and presence of it on non-AI-
        | generated material will discredit it.
        | 
        | A fact-checking box like on Twitter would be better; and if you
        | can't provide that, don't pretend you know anything about the
        | content.
        
       ___________________________________________________________________
       (page generated 2024-03-18 23:00 UTC)