[HN Gopher] Drowning in AI Generated Garbage: the silent war we ...
       ___________________________________________________________________
        
       Drowning in AI Generated Garbage: the silent war we are fighting
        
       Author : bertman
       Score  : 427 points
       Date   : 2022-12-05 11:40 UTC (11 hours ago)
        
 (HTM) web link (ploum.net)
 (TXT) w3m dump (ploum.net)
        
       | causality0 wrote:
       | Call me a cynic but I don't see how AI-generated drivel is any
       | worse than human-typed bullshit.
        
         | pixl97 wrote:
         | https://en.wikipedia.org/wiki/Brandolini%27s_law
        
         | Retr0id wrote:
         | It takes time for a human to type bullshit. AI can generate
         | mountains of bullshit before a human author has got out of bed.
        
         | otikik wrote:
         | It's worse in the same way an atomic bomb is worse than
         | dynamite. It's orders of magnitude faster and cheaper to
         | generate.
        
       | greenbit wrote:
       | Frank Herbert was onto something with that line "Thou shalt not
       | make a machine in the likeness of a human mind."
        
          | emj wrote:
          | The "Butlerian Jihad" was the most surprising thing for me when
          | reading Dune: they had a limit on how smart computers could be.
          | The movie was very vague on the subject, and felt more like Star
          | Wars style lo-fi.
        
           | jhbadger wrote:
           | It was even more surprising if you realize that "Butlerian"
           | is referring to the 19th century writer Samuel Butler
           | (although, yes, Herbert later invented a character to explain
           | the name in-universe). Samuel Butler wrote an 1863 essay
           | "Darwin Among the Machines" which predicted that machines
            | would improve in a way analogous to natural selection and
            | would do so faster than humans could evolve, and therefore
           | they might eventually surpass humanity.
        
             | LesZedCB wrote:
              | indeed, and the selection pressure is not in the domain of
              | physical bodies, as it is for us fleshy species, but in
              | profitability for digital computational species in the
              | domain of capital.
        
       | hsuduebc2 wrote:
       | This is a different kind of dystopia than I expected
        
       | RHSman2 wrote:
        | The best way to get ahead these days is actually not to be in AI ;)
        
         | jagaerglad wrote:
          | Could you please elaborate on why you think this? I'm currently
          | changing the path of my education from AI-oriented to software-
          | development-oriented.
        
           | shagie wrote:
            | The quip / aphorism is a commentary on how, a few years ago,
            | AI was about being on the cutting edge - the things that were
            | going to be seen _next year_.
           | 
            | The work of AI researchers was about envisioning the future
           | of what could be.
           | 
           | Well, now what we've got is something that consumes
           | everything and outputs aesthetically pleasing means of the
           | content that it has consumed.
           | 
            | However, that "consumes everything" means that it's always
            | reactive and working from data that is in the past.
           | 
           | There is no creativity in it. This doesn't mean that it won't
           | inspire a human, and it _does_ employ a lot of fancy tricks
            | too (I'm seriously impressed by its ability to create
           | limericks of made up words with a coherent definition).
           | 
           | However, the AI can never be "ahead" of the curve because it
           | can only produce what it has consumed.
           | 
           | This isn't to say a lot of the work that we do in software
           | development isn't doing what has been done again and again
           | and again (see also CRUD), but the moment you're given a
           | novel problem or need to maintain an existing solution and
           | encounter a new bug, the "can an AI do it" rapidly fails at
           | those tasks as it lacks the domain knowledge necessary.
           | 
           | -----
           | 
           | Btw, I'm going to suggest giving Accelerando a read (
           | https://www.antipope.org/charlie/blog-
           | static/fiction/acceler... ). It has an interesting take on
           | it...
           | 
           | > ... Anyway, Matrioshka brains, the end product of a
           | technological singularity - they're bandwidth-limited. Sooner
           | or later the posthuman descendants evolve Economics 2.0, or
           | 3.0, or something else and it, uh, eats the original
           | conscious instigators. Or uses them as currency or something.
           | The end result we found is a howling wilderness of degenerate
           | data, fractally compressed, postconscious processes running
           | slower and slower as they trade storage space for processing
           | power. ...
           | 
           | (the part that reminded me of it is the GP post's comment...
           | and the intro:
           | 
           | > Manfred pauses in mid stride, narrowly avoids being mown
           | down by a GPS-guided roller blader. This is getting weird
           | enough to trip his weird-out meter, and that takes some
           | doing. Manfred's whole life is lived on the bleeding edge of
           | strangeness, fifteen minutes into everyone else's future, and
           | he's normally in complete control - but at times like this he
           | gets a frisson of fear, a sense that he might just have
           | missed the correct turn on reality's approach road. "Uh, I'm
           | not sure I got that. Let me get this straight, you claim to
           | be some kind of AI, working for KGB dot RU, and you're afraid
           | of a copyright infringement lawsuit over your translator
           | semiotics?"
           | 
           | )
           | 
            | It takes an interesting path through the acceleration (note
            | that I don't agree with the essence of the slope; instead I
            | believe we are seeing the product of a lot of sigmoid curves
            | with the slope near its maximum - but as the easy stuff gets
            | done, the (heh) second derivative goes negative as harder and
            | harder problems need to be solved
            | https://towardsdatascience.com/derivative-of-the-sigmoid-
            | fun... ). We are seeing the seeds of ideas sown decades ago
            | sprouting in rapid succession - and getting to the maximum
            | where things are neat and everyone is going "whee!" - but the
            | next step of maintaining that growth will take time and much
            | more effort than sowing the ideas.
           | 
           | Lots of people are working on the first half of the sigmoid
           | curve... but people lose interest or resources when it comes
           | to the second half and so you get a lot of things that are
           | abandoned at or near where things started getting hard.
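            | 
            | (A minimal numeric sketch of that point, using only the
            | standard logistic formulas, nothing from the linked article:
            | the slope s(1-s) peaks at the midpoint and the curvature
            | s(1-s)(1-2s) turns negative once past it.)
            | 
            |   import numpy as np
            | 
            |   def sigmoid(x):
            |       return 1.0 / (1.0 + np.exp(-x))
            | 
            |   x = np.linspace(-6, 6, 13)
            |   s = sigmoid(x)
            |   first = s * (1 - s)                 # ds/dx: growth rate
            |   second = s * (1 - s) * (1 - 2 * s)  # d2s/dx2: < 0 past x=0
            | 
            |   for xi, d1, d2 in zip(x, first, second):
            |       print(f"x={xi:+.1f}  slope={d1:.3f}  curve={d2:+.3f}")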
        
       | neilv wrote:
       | I've been known to start ranting when I'm searching on some topic
       | that attracts SEO garbage (e.g., Python), and realize I'm in the
       | nth different SEO garbage article search hit.
       | 
       | I wonder whether a winning solution to the coming massively-
       | parallel "AI" megaspew will emerge, how it will work, and whether
       | Google will be the ones to do it.
       | 
       | Or will the bulk of consumers overwhelmingly submit, and accept
       | it as normal, much like we already do with huge regressions in
       | the quality of interactions and thinking?
        
         | CuriouslyC wrote:
         | The solution is to create datasets that contain data and
         | aesthetic judgment metadata from reputable sources together,
         | and train models to perform aesthetic judgment. This will both
         | filter out spam/low quality content, and provide a conditioning
         | tool for generative models to improve their output. Even
         | better, the arms race this will kick off will create a virtuous
         | cycle that will push progress in AI.
         | 
         | The only problem with aesthetic judgment models is that (much
         | like human aesthetic judges) it will sometimes give poor scores
         | to things that are dissimilar to the data it was trained on,
         | even though many people might find those things aesthetically
         | appealing.
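          | 
          | (A toy sketch of the kind of quality scorer described above -
          | the example texts and labels are made up, and a plain
          | scikit-learn bag-of-words classifier stands in for whatever a
          | real system would use:)
          | 
          |   from sklearn.feature_extraction.text import TfidfVectorizer
          |   from sklearn.linear_model import LogisticRegression
          |   from sklearn.pipeline import make_pipeline
          | 
          |   # hypothetical training pairs: text plus a human judgment
          |   texts = [
          |       "a careful, sourced comparison of sorting algorithms",
          |       "detailed errata and corrections from the original author",
          |       "TOP 10 PYTHON TRICKS you WON'T BELIEVE, click now",
          |       "best cheap cheap python course buy now limited offer",
          |   ]
          |   labels = [1, 1, 0, 0]  # 1 = judged high quality, 0 = spam
          | 
          |   scorer = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
          |                          LogisticRegression())
          |   scorer.fit(texts, labels)
          | 
          |   # the probability doubles as a crude quality score, usable to
          |   # filter results or to rerank/condition generated output
          |   candidate = ["one weird trick to learn python"]
          |   print(scorer.predict_proba(candidate)[:, 1])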
        
           | pixl97 wrote:
           | >from reputable sources
           | 
            | What does this even mean? Reputable sources tend to fail the
            | 'chinese room' experiment because you can never tell if your
            | source is reputable or just faking being reputable (or later
            | becomes corrupted).
        
             | CuriouslyC wrote:
             | Well, if we were talking about food, I'd say Gordon Ramsay,
             | David Chang, Anthony Bourdain or The New York Times could
             | be considered reputable, while a Yelp reviewer with a
             | handful of reviews could be considered disreputable.
             | 
             | Ultimately you can boil it down to: Trust sources when they
             | make statements that are later observed by me to be true,
             | or are trusted by other sources that I trust. The negative
             | feedback loop boils down to: If a source made a statement
             | that I later found to be untrue, or extended trust to an
             | untrustworthy source, reduce that source's trust.
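                | 
                | (A minimal sketch of that update rule - the numbers are
                | arbitrary, and a real system would also penalise sources
                | that vouch for ones later found untrustworthy:)
                | 
                |   trust = {}  # source -> score in [0, 1]; 0.5 = unknown
                | 
                |   def observe(source, statement_was_true, step=0.1):
                |       # raise trust when a statement checks out,
                |       # lower it when it doesn't
                |       t = trust.get(source, 0.5)
                |       t = t + step if statement_was_true else t - step
                |       trust[source] = min(1.0, max(0.0, t))
                | 
                |   def endorse(truster, endorsed, w=0.5):
                |       # a trusted source vouching for another lends it
                |       # some of its own trust
                |       t = trust.get(truster, 0.5)
                |       e = trust.get(endorsed, 0.5)
                |       trust[endorsed] = min(1.0, e + w * (t - 0.5))
                | 
                |   observe("nyt_food", True)
                |   observe("random_yelp_reviewer", False)
                |   endorse("nyt_food", "new_blog")
                |   print(trust)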
        
               | pixl97 wrote:
                | The issue here is that those are pre-AI reputation sources;
                | the problem for post-AI content generation is how you are
                | supposed to find this content in a nearly infinite ocean
                | of 'semi-garbage'. Anthony is not making new content, and
                | one day the other people will expire. In the meantime a
                | million 'semi-trustable' AI sources of content, with
                | varying reach depending on how they've been promoted, will
                | take over those markets.
               | 
               | There are any number of particular problems here that we
               | already know do not mesh well with how humans think.
                | You'd start your AI source 'true', build up a following,
                | and then slowly lead them into the Q-anon pit of insanity.
               | Most people will follow your truth and fight anything
               | that questions it.
        
         | fnordpiglet wrote:
         | Here's a solution:
         | 
         | Go ask ChatGPT your question about python.
         | 
          | Its spew is probably better than the SEO spew.
         | 
          | I feel like a species traitor taking the side of the AI, but I
         | think this is probably a decent way of answering a lot of
         | questions based on a factual and rational basis that have been
         | answered multiple times. A lot of hoopla is made over the fact
         | you can trick it with a false premise into giving a false
         | answer (I'd note the false premise makes all answers true), but
          | if it can simulate a Linux box reasonably well, it can answer
          | your questions about python module imports better than sifting
          | through page-rank collections of SEO crap.
         | 
         | Edit: I also think this doesn't undermine the value of stack
         | overflow. Bots answering there probably does. But stack
         | overflow has always struggled with simple redundant questions
         | and trying to curate them away in favor of new and interesting
         | questions, and promoting novel and well explained answers.
         | ChatGPT probably won't do that for you. But it can load shed
         | the redundant stuff highly effectively. And, over time, it'll
         | get more nuanced as more nuanced questions and answers are
         | incorporated. But the "I need human help" button is still there
         | and thank goodness. Maybe the load shedding will enhance the
         | experience on SO for everyone?
        
           | visarga wrote:
           | The solution will be to chain Google with chatGPT. A system
           | that allows chatGPT to perform Google searches and use the
           | retrieved documents as reference when formulating the
           | response. This means we don't need to actually see Google or
           | the website, but still have grounded responses with
           | references.
           | 
           | Instead of Google it could also be a big dump of text, like
           | 1TB of well curated web text, so with local search you get
           | more privacy and speed. And instead of chatGPT we will use
           | "chatStability", a LM running on the local machine, in
           | privacy. This could be the end of ads and the creation of a
           | new private creative space for people.
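            | 
            | (A sketch of that retrieve-then-generate loop - web_search and
            | llm are stubs here, standing in for a search API or local
            | index and for whatever chat model is used, hosted or local:)
            | 
            |   def web_search(query, k=3):
            |       # stub: a real version would hit a search API or a
            |       # local index built over a curated dump of web text
            |       return [f"(stub document {i} about: {query})"
            |               for i in range(1, k + 1)]
            | 
            |   def llm(prompt):
            |       # stub for the language model call
            |       return "(stub answer citing [1] and [2])"
            | 
            |   def grounded_answer(question):
            |       docs = web_search(question)
            |       context = "\n".join(f"[{i}] {d}"
            |                           for i, d in enumerate(docs, 1))
            |       prompt = ("Answer using only the numbered sources "
            |                 "below and cite them by number.\n\n"
            |                 f"{context}\n\nQuestion: {question}\nAnswer:")
            |       return llm(prompt)
            | 
            |   print(grounded_answer("how do I vendor a python package?"))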
           | 
           | <rant>I think Google is in big trouble and that explains why
           | they are second when it comes to LLMs - this line of
           | development will make it easier to escape their ads and
           | tracking. There's no profit in it for them.
           | 
            | Even though the Transformer was invented at Google 5 years
            | ago, of all the authors only one still works there; most of
            | them have left to implement their own visions at a bunch of
            | startups. Why have they left Google if they wanted to
            | innovate? Because they could not do it inside. Google has to
            | protect the current revenue streams first.</rant>
           | 
           | https://analyticsindiamag.com/how-the-authors-behind-the-
           | tra....
        
           | moe091 wrote:
           | Going off your point, even if we aren't quite there yet,
           | there is no reason to suppose that AI won't eventually be
           | massively better at basically ALL things than humans - that's
            | certainly the direction we're headed, and very quickly too.
           | 
           | This is just the very beginning of the problem. It's time for
           | people to start thinking about how to live in a world where
           | they are "obsolete". All jobs, or, being conservative, at
           | least the vast, vast majority will eventually be replaced by
           | AI. AI will eventually be able to create art that is more
           | appealing to humans than human-created art. I think there
            | could eventually even be AI "people", whose company many
           | people will prefer to other actual humans, as they will be
           | designed/trained to be as perfect as possible for that task.
            | I hope that most people will still see more value in genuine
            | human interaction, as any person today probably would. But as
            | tech evolves we get used to it, let it into our lives more,
            | let it control more of our lives and make more of our
            | decisions; then it advances more and we get used to it more,
            | etc. The idea of human connection, and the separation between
            | human life and AI "life", might not be as apparent to them as
            | it is to us, and they may allow AI and technology to replace
            | every facet of human existence.
        
             | fnordpiglet wrote:
             | I actually strongly disagree here with the conclusion you
             | make. I think these are going to be tools that enhance the
             | human mind. It'll enable the average person to be
             | exceptional by using the AI as guidance. It'll enable the
             | brilliant to exploit the AIs in novel and clever ways that
              | were not possible before. The human's place is in the use of
              | the tool; the tool doesn't replace the human.
             | 
              | The challenge, instead, will be whether there is enough
              | useful work to keep everyone paid in a post-scarcity world.
              | The advent
             | of the useful AI will probably necessitate some very very
             | serious thought about how we allocate and distribute
             | capital.
             | 
             | But I don't believe artists will be replaced by an AI any
             | more than they were by the camera. Or that streaming will
             | kill music. Etc.
        
         | nradov wrote:
         | Search engines like Google are probably a dying technology. The
         | next disruptive innovation in that space will be "answer
         | engines" that use GPT like technology to dynamically generate
         | specific answers to questions rather than linking users to
         | third-party sites. This will allow for controlling the entire
         | user experience and serving ads embedded in the generated
         | content.
        
           | visarga wrote:
           | I don't think there will be ads. We'll be able to run a local
           | chatBot just like we run SD.
        
           | angelbar wrote:
           | As https://www.wolframalpha.com/
        
         | pmontra wrote:
         | I almost decided to limit my searches to the search engines of
         | the authoritative sites. Examples in the case of Python: the
          | Python language, the Django framework, Stack Overflow (until it
          | surrenders to AI posts; then we'll see what happens). I'll end
          | up creating a home page with search boxes for those sites. Very
          | '90s.
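          | 
          | (One low-effort way to approximate that, assuming the engines'
          | standard site: operator - the domain list is just an example:)
          | 
          |   from urllib.parse import quote_plus
          | 
          |   TRUSTED = ["docs.python.org", "docs.djangoproject.com",
          |              "stackoverflow.com"]
          |   GOOGLE = "https://www.google.com/search?q="
          | 
          |   def restricted_query(terms, engine=GOOGLE):
          |       sites = " OR ".join(f"site:{s}" for s in TRUSTED)
          |       return engine + quote_plus(f"{terms} ({sites})")
          | 
          |   print(restricted_query("dataclass default factory"))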
        
           | welterde wrote:
           | You could also put something like searx [1] in front to
           | search all at once (or at least provide unified interface for
           | all).
           | 
           | [1] https://searx.github.io/searx/
        
           | wjholden wrote:
           | Your post led me to an interesting little quick experiment.
           | 
           | So in 2020 I self-taught Python during Advent of Code. I
           | remember remarking that official python.org results seldom
           | made it to the top of Google searches. I was used to Java and
           | Julia searches taking me right to the official sites.
           | 
            | Anyway, the experiment: just search for "[language]
            | [feature]". I ended up with W3Schools as my top result for
            | "Java ArrayList", but Oracle JDK8 docs for "Java Socket".
            | "Python Socket" takes me to the language reference but "Golang
            | Socket" does not.
           | 
           | Maybe there's a trend to be discovered.
        
           | ethbr0 wrote:
           | Ironically, the 90s solution (human-curated link collections)
           | to Internet-wide searches seems the easiest way out.
           | 
           | It's relatively easy for a human to tell the difference
           | between AI copy (as pollutes the web) and actual content.
           | 
           | It's also strange that Google et al. haven't built human-in-
           | the-loop flagging of spam content into their search UX ("Is
           | this site low quality?"). They could get around flag spam via
           | reputation, given that Google knows identity for most of its
           | users (because they're logged in or via IP across their
           | properties).
           | 
           | Side note: In cursory research on current state, I came
           | across the Chinese phrase "human flesh search engine" [0],
           | which I wonder if it sounds as cyberpunk in the original
           | Chinese.
           | 
           | [0] https://en.m.wikipedia.org/wiki/Human_flesh_search_engine
        
             | concordDance wrote:
             | > It's relatively easy for a human to tell the difference
             | between AI copy (as pollutes the web) and actual content.
             | 
             | It is now. It definitely won't be when gpt gets popular.
        
               | wizzwizz4 wrote:
               | It will be: GPT output is _usually_ either garbage,
               | repetitive, very wrong, subtly wrong, or inconsistent in
                | writing style in a way humans aren't. (It sort of starts
               | trying to say something different, in the "same style" as
               | it started with, then switches style _later_.)
               | 
               | When humans really can't tell the difference, well...
               | https://xkcd.com/810/ seems appropriate.
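                | 
                | (One of the blunter heuristics hinted at above,
                | repetitiveness, can be sketched as a duplicate-trigram
                | score - a toy signal, not a reliable detector:)
                | 
                |   from collections import Counter
                | 
                |   def repetition_score(text, n=3):
                |       # fraction of word n-grams that are repeats
                |       words = text.lower().split()
                |       grams = [tuple(words[i:i + n])
                |                for i in range(len(words) - n + 1)]
                |       if not grams:
                |           return 0.0
                |       counts = Counter(grams)
                |       reps = sum(c - 1 for c in counts.values() if c > 1)
                |       return reps / len(grams)
                | 
                |   print(repetition_score(
                |       "it is a model that is a model that is a model"))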
        
               | concordDance wrote:
                | The GPT output I've been making using OpenAI's chatbot is
               | indistinguishable from what the bottom 70% of
               | commentators put out.
               | 
               | But an environment glutted with that content is also
               | worthless. I read for the top 10% that has genuinely
               | novel thoughts.
        
               | bitL wrote:
               | Without revealing the origin, the latest models tested in
               | private betas solve many of the points you mentioned.
        
               | wizzwizz4 wrote:
               | This is why I'm not leaking _all_ my discriminating
               | heuristics.
               | https://en.wikipedia.org/wiki/Goodhart%27s_law strikes
               | again.
        
               | ethbr0 wrote:
               | At some point you also run up against the virus vs immune
               | system endgame: it's evolutionarily inefficient to
               | optimally outwit an adversarial system.
               | 
               | Usually there exists a local maximum where you obtain
               | most of the value with less-than-full effort.
               | 
               | Sadly, the centralization of search directly cuts against
               | that. I.e. outwitting Google search = $$$$ vs outwitting
               | 1-or-4 equally-used search engines = $. :(
        
               | vsareto wrote:
               | https://i.imgur.com/kBGRUqL.jpg
        
               | xdennis wrote:
               | In most cases, judging if something was created by GPT
               | takes quite a long while. It's not a search result you
               | can easily dismiss.
        
             | fmajid wrote:
             | > It's relatively easy for a human to tell the difference
             | between AI copy (as pollutes the web) and actual content.
             | 
             | No:
             | 
             | https://en.wikipedia.org/wiki/Sokal_affair
        
               | ethbr0 wrote:
               | The Sokal affair says more about the dynamics of academic
                | publishing than it does about humans' ability to key in on
                | nuances.
               | 
               | And "as pollutes the web" -> we're not talking about
               | state of the art.
        
       | RamblingCTO wrote:
       | We're already drowning in human generated garbage on the web.
       | Sure, AI makes it worse, but we can't blame anything but our
       | human compatriots and ourselves.
        
         | nottorp wrote:
          | From the quotes I've seen posted about this ChatGPT, it's a
          | more readable version of the manually done SEO spam pages.
         | 
         | I might have to check that paid search engine...
        
         | YurgenJurgensen wrote:
         | When you think about it, since most of that "human generated
         | garbage" is mainly trying to appease recommendation algorithms,
         | which are mostly AI these days, even the human generated
         | garbage is actually AI generated garbage by proxy.
         | 
         | We can't kill the machine, but we can starve it. None of this
         | would exist without advertising. If we kill advertising we kill
         | the content farms. And nothing of value will be lost.
        
         | boringg wrote:
         | But everybody has an important viewpoint that they need to
         | spread themselves and that you haven't or someone else hasn't
         | already elucidated /s.
        
       | jagaerglad wrote:
       | All this ChatGPT hysteria led me to have the following
        | conversation with it. It probably hits some near-future nail on
        | the head. "Hello AI, please do today's tasks" - the programmer
        | will be on the level of today's call center operator in terms of
        | skills.
       | 
       | * Human: Explain to me the phenomenon of post-ChatGPT depression
       | seen in a software developer
       | 
       | * ChatGPT: [...] After using the ChatGPT tool, some developers
       | may experience feelings of depression, sadness, and loss. This
       | may be due to the realization that their job as a developer has
       | been reduced to using a tool that can be easily replicated by
       | others, leading to a feeling of redundancy and lack of
       | fulfillment in their work.
       | 
       | Additionally, the use of ChatGPT may also lead to a loss of
       | creativity and problem-solving skills, as the tool takes care of
       | the majority of the coding process. This can cause developers to
       | feel like they are no longer contributing to their team or making
       | a meaningful impact in their field.
       | 
       | In short, post-ChatGPT depression is a psychological reaction to
       | the use of a tool that simplifies the coding process, leading to
       | feelings of redundancy, lack of fulfillment, and loss of
       | creativity in software developers.
        
         | lezojeda wrote:
         | What does this have to do with the article?
        
           | dragontamer wrote:
           | We can start expecting this kind of garbage to start
           | polluting our discussion spaces.
        
           | jagaerglad wrote:
           | Well for one, it is AI generated garbage in the comments of
           | an otherwise well curated website, HN. That's too meta
            | perhaps. I just wanted to share my feelings out loud about
            | this whole showdown, thinking others might feel the same and
            | share what they think.
           | 
            | The bot replied with feelings of being replaced, but feelings
            | of an Eternal September 2, an eternal 2023, are equally valid
            | signs of this hypothetical post-ChatGPT depression - probably
            | closer in the future than the former and, besides, more
            | relevant to the article itself, not just the subject as a
            | whole.
        
         | [deleted]
        
       | vvvvvsvvs wrote:
       | Friendly reminder that the ending of the video game Metal Gear
       | Solid 2 predicted this way back in 2001
       | 
       | https://www.youtube.com/watch?v=eKl6WjfDqYA&t=20s
        
       | dsign wrote:
        | As others have pointed out, AIs are not needed to produce crappy
        | repetition which is eagerly gobbled up by the masses, e.g.
        | Hollywood, Netflix, and werewolf and vampire stories by the
        | penny. And there is a huge audience for those.
       | 
       | What I think is going to happen is that even when I'm searching
       | for information in subject X, I might need to fight through a lot
        | of AI fluff to get anywhere. If that gets too bad, I may lose
       | the incentive to use the Internet altogether, or to use the open
       | internet. If I can't use the open Internet, I'll have to go to
       | the closed one, which by definition is under the control of
       | funded parties.
       | 
       | Today, we have a semblance of "verifiable personhood". Imagine
       | what will happen when we use AI to break that too. For example,
        | an AI could create a fictitious "Derek Mains" who posts photos to
        | Instagram, has a Facebook account with photos from his childhood
        | all the way to his retirement, a profile on LinkedIn with a lot
        | of professional connections (AI fictions like him but also many
        | real people that the AI gulled), and even a GitHub repository
        | from the time he tried to create a cooking recipes website, full
        | of PHP and Python code. Derek Mains is now traveling the world
        | and tasting new types of food, posting regular updates on
        | Twitter/Tumblr/Mastodon, _and_ Wikipedia, where he edits food-
        | related articles. Not very often compared with everything he does
        | online, but once in a while he engages with other people(?) on
        | Twitter, where he is partial to party X.
       | 
       | What would real people do when they discover that the Internet is
        | full of "Derek Mains"? How will governments and power factions
        | use this? Wouldn't it be sweet to own the closed Internet when the
       | open one is a wasteland?
        
         | pixl97 wrote:
         | >AI's are not needed to produce crappy repetition which is
         | eagerly gobbled by the masses
         | 
         | True, but humans that create this kind of crap still have to
         | sleep, eat, and generally not pay attention to feeding the
          | machine. Meanwhile the AI terminator is out there, it can't be
         | bargained with, it can't be reasoned with, it doesn't feel pity
         | or remorse or fear, and it absolutely will not stop... EVER,
         | until you are _buried in an ocean of garbage content_.
        
       | watmough wrote:
        | Sadly, the killer app of Facebook is meeting and enabling
        | precisely the kind of local community the author is suggesting.
       | 
       | In a sense, Marketplace is the new Facebook. A buddy is just
       | swinging by with some Rx-7 TII brakes for my NA. We have been
       | trading parts for the last couple years.
       | 
        | I was at a car show yesterday; who did I know? Facebook buds.
        
       | tetraca wrote:
       | Upload your fake text and images anywhere. Upload it everywhere.
        | Upload it as yourself. Upload it at the behest of your willing
       | friends. Corrupt your face in dozens of ways, then tag each as
       | yourself. When everything is garbage, shadow profiles will mean
       | nothing, social credit scores will mean nothing, analytics will
       | mean nothing. Centralization can be turned on its head - the
       | watering hole will be revealed to be the dangerous cesspit it is.
       | Create art until what can be monetized has no value. Create news
       | that never happened, scoops that are only a distant fiction,
       | until nothing on the internet can even be meaningfully reported
       | on. The corrosive powers of mass memetics for the first time can
       | be rendered entirely impotent.
        
         | drdaeman wrote:
         | While I honestly do want to support this approach because
         | unlike the alternative it might (just "might", though) work, I
         | regretfully must also admit that the huge problem with this is
         | collateral damage. There are millions of humans out there that
         | are either highly susceptible to or are already caught into
         | believing all sorts of nonsense, getting scammed in all forms,
         | costing them money, time, well-being and even lives. The issue
         | with fake flood is that until the saturation/breaking point
         | (when the understanding that existing trust system is fatally
         | broken becomes truly mainstream) it's going to make this
         | situation worse.
         | 
          | What's extremely frustrating is that the _only_ alternative
          | I've ever seen proposed is some form of never-ending moderation
          | arms race, some variant of a "war on fakes". Which is even
         | scarier in the long run, because while it mitigates the damages
         | here and now (good), it would never solve the underlying
         | problem, and only further cement some parts of it (enforced
         | trust invites potential abuse).
         | 
         | And as usual, humanity is currently stuck between two
         | incredibly shitty options (traditionally going through a path
         | of least resistance, no matter what could be at the end of it -
         | because that's how large systems work), hopefully still
         | searching for a magical third silver bullet.
        
           | tetraca wrote:
           | > There are millions of humans out there that are either
           | highly susceptible to or are already caught into believing
           | all sorts of nonsense, getting scammed in all forms, costing
           | them money, time, well-being and even lives. The issue with
           | fake flood is that until the saturation/breaking point (when
           | the understanding that existing trust system is fatally
           | broken becomes truly mainstream) it's going to make this
           | situation worse.
           | 
           | The people who are willing to engage in charlatanism and mass
           | manipulation are already enabled by this tooling and will use
           | AI as a force multiplier whether we use it to jam the signal
           | or not, and the results will be far, far worse for victims
           | because these people will actually have directed, dangerous
           | goals and agendas to push them toward rather than having them
           | flail in a contradictory trash pile.
        
           | BeFlatXIII wrote:
            | Large numbers of people being easily duped, no matter how much
            | education we give them, is only a problem when it's coupled
            | with no-questions-asked universal suffrage.
        
             | kashunstva wrote:
             | It depends in part on the quality of what is passed off as
             | education. After twelve years of US public education in the
             | 1970's and 80's, not once did critical thinking tools,
             | types and sources of bias, and default-mode skepticism come
             | up. I'd rather work on improving that, rather than making
             | it even harder for citizens to vote.
        
           | moe091 wrote:
           | I agree with your point, but also think this trajectory is
           | kind of inevitable in a way. As we can already see, a LOT of
           | people are living deep in a world of BS news stories and made
           | up facts, to the point where they're basically in a different
           | reality.
           | 
           | Ideally, we'd find a way to go backwards from where we are
           | now, maybe future generations will grow up with this flood of
           | false info and learn to navigate it, but I think we reached a
           | point where tech is developing so fast, and the world is
            | changing so fast, that there will always be a large number of
           | people who can't "keep up".
           | 
           | Given that, this may be one of the few scenarios where I'm in
           | favor of a kind of "accelerationism" - increase the problem
           | to a breaking point as soon as possible so that we can deal
           | with the fallout and recover from it now, rather than dealing
           | with this horrible state of things for decades and decades as
           | it slowly gets worse and worse, until we eventually reach a
           | breaking point anyway and likely have an even worse fallout
           | to recover from.
        
           | rcarr wrote:
           | If it leads to total chaos (which I think it's going to) I
           | could easily see governments trying to enact the following:
           | 
           | - A public blockchain or database for ID purposes, with every
           | ID traceable back to a real human. Legislation created that
           | mandates all websites implementing user profiles for any
           | social purpose to require an official ID on signup and every
           | user must be identifiable on the platform by their real name.
           | 
           | - Access to AI only under strict restrictions. Official apps
           | you're allowed to use with the government banning all others.
           | Big Tech will happily comply with the censoring because they
           | are the designated official AI app providers.
           | 
           | - Photorealistic AI and Video banned entirely because the
           | potential to incite violence through fake videos is too high.
           | Imagine Pizzagate or Jan 6th but someone made a Blair Witch
           | style video showing Biden and Pelosi killing a baby as
           | members of a secret cult. Or someone made a video of one of
           | their enemies committing animal abuse and sending it to
           | everyone in the town.
           | 
           | - Exams will make up the entirety of your grade in most
           | educational environments apart from those where AI cannot be
           | used for creating the coursework.
           | 
           | - Anyone caught using non-official AI or abusing AI will be
           | subject to harsh prison time akin to hacking state secrets.
           | 
           | Of course, the government and its agencies will have access
           | to all of this technology for their own purposes so they will
           | be able to create whatever they want to control you, your
           | thoughts and society's perception to be whatever they want it
           | to be. As this isn't really preferable to the complete chaos
           | it's trying to control, more and more people will try to
           | disconnect from the internet as much as possible. The IRL
           | economy will grow and there will be a resurgence in analogue
           | forms of media for counter culture types. There may even be a
           | resurgence of interest in blue collar trade jobs if white
           | collar jobs are decimated and wages plummet.
           | 
           | We are on the precipice of 1984.
        
         | nine_k wrote:
         | This reminds me of the great novel _Fiasco_ by Stanislaw Lem
         | [1].
         | 
         | The civilization described in the novel fills the entire radio
         | spectrum with noise, to make efficient communication impossible
         | (and possibly to also allow highly sophisticated noise-like
         | encrypted communication be undetectable).
         | 
         | Their society is pretty dystopian though.
         | 
         | [1]: https://en.wikipedia.org/wiki/Fiasco_(novel)
        
           | warpech wrote:
           | Also the book _The Cyberiad_ by Lem is now more relevant than
           | ever. It is a set of short stories, mostly about messing with
           | generative AI, AI assistants, etc.
        
         | rikroots wrote:
         | We didn't need the help of AI to do this to poetry already. It
         | turns out that access to a text editor was more than
         | sufficient.
        
         | naveen99 wrote:
         | Now you see why tribes form by evolution just as mutation does.
        
         | sam0x17 wrote:
         | I'd be careful. By doing this you can use the date these AIs
         | became readily available as a tool for bisecting between
         | "legitimate profiles" before the year X and "potentially fake"
         | profiles from after the year X, which can then be used to very
         | effectively train an AI to tell the difference...
        
           | rrobukef wrote:
            | Sure, but it also means limiting the dataset of our children
            | (i.e. the bane of ML), and it means bit-rot for existing
            | datasets.
        
         | moe091 wrote:
          | I was going to say this is a great idea but totally unfeasible
          | due to the amount of cooperation required - enough people simply
          | won't do it - but parts of it are kind of happening by default,
          | e.g. with BS news stories and whatnot. Also I suppose a lot of
         | it could be automated - fighting fire with fire in a way. I'm
         | sure that would lead to a cat-and-mouse game of creating code
         | for detecting and ignoring intentionally obfuscating content,
         | then the other side coding their way around those detection
         | systems, and then back and forth like that forever, similar to
         | the pattern of: finding exploit -> patching exploit -> finding
         | new exploit -> patching new exploit -> etc.
        
         | friesandties wrote:
         | > Create art until what can be monetized has no value.
         | 
         | A1.
        
       | poszlem wrote:
        | While the current speed of AI research makes me quite uneasy,
       | I cannot really dismiss what we are seeing as:
       | 
       | >What we are witnessing is thus not "artificial creativity" but a
       | simple "statistical mean of everything uploaded by humans on the
       | internet which fits certain criteria".
       | 
        | at least not any more than I would dismiss my own brain.
       | 
        | I do however fully expect a strong luddite-like backlash
        | happening when this is deployed to more and more
        | places. (Which is not to say that everyone opposed to it is a
        | luddite, but there is a very strong impulse, I have it too, to see
       | this technology as a threat).
        
         | api wrote:
         | It wouldn't be a threat if we didn't have an economy that
         | distributed most gains to the very top. If things were more
          | equitable it would mean we would all become much wealthier due
         | to a large productivity increase.
         | 
         | I almost never see that raised in these discussions. We have
         | been so conditioned not to think about this subject.
         | 
         | We can blame the tech, each other, everything but the system.
        
         | pigsty wrote:
         | It's weird how readily so many people will dismiss AI art as
         | just being a product of its input, when that's really all
         | humans are as well. All artists (myself included) are just the
         | ongoing product of various inspirations (inputs). Dali is a
         | world-renowned artist, but his art wasn't born in a bubble and
         | ripped from the void. He studied and referenced artists before
         | him, then changed it. Artists who saw his work later
         | incorporated elements he used.
         | 
         | My big worry from here though is what'll happen when we take
         | the next step: bundling these AIs together and start making a
         | comprehensive AI. We have AIs that can respond to comments to
         | some extent now, we can generate realistic selfies, and they
         | can make art based off prompts that they could be designed to
         | feed to themselves--set that loose on Twitter and you have a
         | completely automated and convincingly living artist out there.
            | The next step after that is using AI to handle free movement
            | by auto-adjusting balance and then performing convincing human
            | speech. The future is getting freaky, and fast.
        
           | tartoran wrote:
            | But at least artists have some agency; all that stuff that
            | they ingested goes through them and is somewhat limited by
            | human hardware. Very little is original indeed, but that
            | percentage may well go down even more as more and more AI-
            | generated soup is unleashed on the world.
        
             | jhbadger wrote:
             | What about artists that used random generators in their
             | art? Jackson Pollock used to randomly splatter paint on
             | canvases and when some random pattern struck his fancy he
             | went with it. That's not all that different from picking
             | the pictures you like from a Stable Diffusion prompt and
             | then further honing them with inpainting, image-to-image
             | and so on.
        
           | YurgenJurgensen wrote:
           | Only boring humans are just the product of the content they
           | consume; interesting people have concerns about relationships
           | with other humans, organisations, resources, threats and
           | opportunities that come from being an entity living in a
           | world full of things they don't control and things they'd
           | quite like to control. It's also fairly easy to identify
           | artists who have done nothing but look at other art.
           | 
           | Hayao Miyazaki once said "Almost all Japanese animation is
           | produced with hardly any basis taken from observing real
           | people, you know. It's produced by humans who can't stand
           | looking at other humans. And that's why the industry is full
           | of otaku!" While I'm not actually a huge fan of his work,
           | it's easy to see where he's coming from when you see the
           | likes of "In Another World with My Smartphone" or the
           | hundreds of other trash light novel adaptations. They're
           | clearly fiction being produced by people whose lives are
           | nothing but consuming other fiction, and all they can do is
           | reference that. They have nothing to say.
           | 
           | For another example, there's a huge number of 20th century
           | war films. They weren't made because their creators had read
           | a lot of Wikipedia articles about how destructive wars were
           | becoming. The likes of Platoon are a product of mortal
           | fleshbags who actually saw combat and genuinely feared for
           | their lives. Dr. Strangelove was created by people who had a
           | genuine belief that they were at risk of being annihilated in
           | a nuclear fireball.
        
           | nonameiguess wrote:
           | This is a tricky distinction where this may be true of all
           | specific individual humans, but not of humanity. At some
           | point, language and art didn't exist, and humanity invented
           | those things. We didn't mimic some other civilization that
           | did it first.
           | 
           | I don't know that this distinction matters, but it is real.
           | Even on an individual level, if you raise a child in complete
           | isolation from history and teach them no language or art,
           | they'll still form thoughts, have feelings, and develop means
           | by which to communicate those.
        
           | giraffe_lady wrote:
           | > when that's really all humans are
           | 
           | There's a fair bit of truth in this but it remains a nearly
           | useless oversimplification. We can't adequately isolate or
           | even identify all the inputs into what makes a person what
           | they are, or produce what they do. The idea that this is even
           | possible is rejected by a large proportion of the world
           | population, and for now at least we can't prove them wrong.
           | 
           | > All artists (myself included) are just the ongoing product
           | of various inspirations (inputs)
           | 
           | This ignores that inspiration is itself an active and
           | creative process. There's not a one to one mapping between a
           | piece of art and the effect it has on a person who receives
           | it. Even hating an art is a process of influence in a very
           | real way that I don't know there's an AI analog of. These
           | systems can not include a source in their output, but can
           | they decide to include its opposite? What is the opposite of
           | a tree, btw? A desert, an ocean, a seed? A value judgement
           | based on culture and aesthetics and opinion and worldview
           | that ML software has no standpoint from which to make.
           | 
           | I'm not trying to make a moral case for or against this
           | technology right now. But I think trying to understand and
           | evaluate it in human terms like this is a mistake. It's
           | something else, and we're still trying to figure out what
           | terms _are_ appropriate to understand it, hence the
            | difficulty. It's important to honestly engage with that
           | alienness and uncertainty right now.
        
       | jmyeet wrote:
       | I've been saying this for years and for some reason no one seems
       | to take it seriously: we're heading towards an era where it's
       | going to be incredibly difficult to differentiate real and fake
       | information. It's not going to be wholly fake or auto-generated
       | content either. It'll be taking something that is real and
       | modifying it to make it false where you have a hard time telling
       | which one is real.
       | 
       | Interestingly, this issue came up in the book Anathem that had
       | bots basically creating fake information and then selling you a
       | solution to filter it out.
        
       | sgt101 wrote:
        | I hit a great example: I got my Tesla to the garage and they asked
        | me to put it in jack mode.
       | 
        | I couldn't find it; Google's first result said categorically that
        | the Model 3 required a jack mode... I realised that it was auto-
        | generated junk, searched through a few pages, and found
       | something solid that showed me that it didn't.
        
       | Sevii wrote:
       | This article ignores the pruning that is happening. A human
       | evaluated every output from ChatGPT and Stable Diffusion for
       | 'things humans care about' and if it didn't make the cut deleted
       | it. All the spam output from these models is deleted immediately.
        
       | epaulson wrote:
        | I think this is spot-on. The models are trained on what they
        | find on the Internet; as more of the Internet becomes AI-
        | generated, the AI-generated content comes to dominate the
        | training data for future AI models, and we get stuck.
       | 
       | It's like Kessler Syndrome but instead of space debris, it's AI-
       | generated content that makes the Internet useless for training
       | data.
        
       | keiferski wrote:
       | People really love to make predictions about technologies way too
       | early. It's been a year or two, tops, since this stuff has gone
       | mainstream. I very much doubt that the consumer ecosystem will
       | remain the same while content undergoes an AI-powered paradigm
       | shift. We will most likely see a growth of curated sites that
       | help weed out the nonsense, as has been typical for most new
       | developments.
       | 
       | And on that note, I think the framing here is just wrong from the
       | start. These new tools are _awesome_ if you're a creator.
       | Complaining about the consumer-facing side of this new tech is a
       | bit like complaining that cheaper video cameras will lead to junk
       | films everywhere. Yeah, that happened, but who cares? It also
       | allowed creative people to make stuff without needing a huge
       | budget.
        
         | baandam wrote:
          | I think we have already seen this play out in music though.
         | 
          | Computers let anyone record music, when it used to cost
          | thousands of dollars to record music.
         | 
         | The results have been a disaster outside the top 1% of music
         | makers. Such a dramatic change that anyone under 30 doesn't
         | even know what has been lost at this point. Music as a whole is
         | laughably less creative as "popular" / pop music established
          | total dominance. The money that used to be much more dispersed
          | now all flows upward.
        
           | keiferski wrote:
           | I don't think that's true at all. There are a ton of niche
           | music makers and a ton of ways to find them...infinitely more
           | than there were a few decades ago. You just have to go
           | looking for them. Which frankly isn't even that difficult
           | with YouTube and SoundCloud. Maybe the musicians aren't
           | making millions, but the creativity is definitely there.
        
           | CuriouslyC wrote:
           | Sorry but no, this is completely the wrong take. Popular
           | music is less diverse today for the same reason that half the
           | movies released now are Marvel - art is more commoditized,
           | and businesses are more data driven, with the shift towards
           | more public companies also playing a role, as they can't take
           | as many risks as private companies.
           | 
           | What cheap music production tools have done is allow niche
           | subcultures to create music that actually sounds good. Punk,
           | metal and early hip hop all grew from people sharing rare
           | tapes that were poorly produced and sounded like shit, but
           | were a breath of creative fresh air. Now people's unusual
           | creative divergences are stream-able and have production
           | quality close to that of stuff on major labels.
        
           | pixl97 wrote:
           | This seems like a failure in statistics.
           | 
            | Let's say in 1 year 10 songs are created and 2 of them are
           | 'creative'. You'd think that 20% of music is creative.
           | 
            | Now let's say we have the production explosion and now in 1
           | year 100 songs are created and 4 of them are creative. You
           | would think creativity has fallen off a cliff when it has
           | really doubled.
           | 
            | In addition, trying to compare with the previous payola-ridden
            | system, where winners and losers were systematically chosen by
            | a wealthy cabal, seems rather asinine. It only worked because
           | there was a limited number of radio stations mostly. Even if
           | production costs stayed high, distribution costs dropping
           | would have changed the model dramatically anyway.
        
           | rcarr wrote:
           | I used to feel the same way but I eventually came to realise
           | that there's tons of good music being created, it's the
           | social aspect of music that has been destroyed.
           | 
           | Individual algorithm recommendations have subsumed sub
           | cultures. Music used to be part of both your individual and
            | group identity; it would influence what you wore, what you
            | read, what you thought, who you hung out with and what bars
            | you visited. Technology has eroded the music press, and sky-
            | high rents and fitness culture (not as many people drinking)
            | mean lots of gig venues and bars have closed down. These
           | were the glue that built the subcultures up. You used to have
           | the mainstream plus a few sub cultures that would evolve each
           | decade e.g mods, rockers, hippies, metal heads, punks, new
           | wave, hair metal, brit pop, emos, indie rockers etc. All of
           | these ended up getting absorbed into the long tail so you now
           | have the main stream and then everything else, with nothing
           | in the everything else category really able to attract enough
            | people to it in order to form an era-defining subculture like
            | in the days of old.
           | 
           | Combined with the sheer amount of music being created, all of
           | this makes it harder to find music you actually want to hear.
           | Your friends are all listening to different stuff so you
           | don't get many good in person recommendations anymore and
           | what's left of the music media are struggling to stay afloat
           | and resorting to all kinds of stuff like NME doing movie
           | reviews. The algorithms are hit and miss and search engines
           | are getting worse and worse. So you become more and more at
           | the mercy of chance, hoping that somewhere you stumble upon a
           | review or a blog post, a subreddit or a song that somehow
           | leads you to some good new music. But the reward to effort
           | ratio often makes it harder to justify the time investment.
           | And however good the music is, it's not as enjoyable when you
           | don't have a group of people to share it with.
        
             | rcarr wrote:
             | Coincidentally, just came across this which is a nice take:
             | 
             | https://www.theguardian.com/music/2022/dec/05/has-spotify-
             | re...
        
         | marstall wrote:
         | > These new tools are awesome if you're a creator.
         | 
         | you're typing "generate me an article about x" into gpt-3.
         | who's the creator here?
        
           | keiferski wrote:
           | I don't exactly understand your question, but I'm assuming
           | you mean something like, "What value is this person actually
           | creating?"
           | 
           | I can think of two direct use cases where I would find it
           | helpful:
           | 
           | - Creating a "precis" (concept from Neuromancer) video or
           | article on a particular topic. For example, let's say I want
           | to learn about pre-20th century Indonesian art. Sure, I could
           | Google and read some articles, but it would be vastly more
           | enjoyable and efficient to just get a 5-minute video auto-
           | generated on the topic.
           | 
           | - For creating content that I want to read but don't
           | necessarily have the time to write. I'd love to create a
           | website that catalogs subcultures, for example, but I just
           | don't have the time to do the research and writing myself.
           | 
           | Both of these are dependent on the generated content being
           | accurate, of course, but that seems like a temporary problem,
           | not a flaw in the system itself.
        
             | giraffe_lady wrote:
             | I think the nearby comment that points out that this is
             | more closely related to editing than to creation as it has
             | recently been understood is correct. Which is not a value
             | judgement per se, but the distinction between those two
             | activities is useful and to me provides a useful lens to
             | understand this change.
        
               | keiferski wrote:
               | On the face of it, that is true. But on a deeper level,
               | AI generation is really just a kind of rapid sketching,
               | and sketching is a practice that goes back millennia.
               | 
               | Additionally, it depends on how you are defining
               | "creation" and there have been various artistic movements
               | which "created" art in a manner not dissimilar to AI
                | image generators. The _cut-up technique_ of some
               | Dadaists, for example. I certainly would consider works
               | by Burroughs or Tzara to be real creations and not simply
               | editing.
               | 
               | https://en.wikipedia.org/wiki/Cut-up_technique
        
               | giraffe_lady wrote:
               | You're conflating two meanings of sketching though.
                | Artists and others do use it as an exploratory tool, and
               | I've already seen AI generation used in a similar way so
               | I know that's a valid application of it.
               | 
               | But at a lower level sketching is itself a visual
               | interpretive and creative process. And yes what we're
               | talking about _is_ mechanically similar to the cut-up
                | technique, but is that used for "sketching" in the
               | exploratory sense?
               | 
               | I don't know, maybe. My point overall is that these tools
               | aren't "really just" anything. They have similarities and
               | similar applications to many things we've seen and done,
               | yes. But they are fundamentally alien and in important
                | ways _are_ unlike anything else we've done. We should be
               | careful about and aware of the limitations we put on our
               | understanding when we try to interpret them through the
               | lens of what has come before. To some extent we have no
               | other lens, but that doesn't lessen the limitation.
               | 
               | I think in this case the _role_ of the human in the
               | process does have a strong analog to that of an editor.
               | Where you have the ability to initiate and then shape the
               | creation of another entity, but the creation itself is
                | out of your hands. It's not exact, but it may be close
               | enough to be useful, in a way I think comparing it to the
               | cut-up technique is not quite.
        
               | visarga wrote:
               | I think of this as being a manager. Like having a team
               | reporting to you, and you have to assign tasks, follow
               | up, give advice.
        
           | 2devnull wrote:
           | Why would anybody need to type anything? Gpt will type the
           | gpt prompts. AI will create the AI that will create the AI.
           | There will be people involved, but increasingly fewer.
        
             | visarga wrote:
             | We need an AI that writes AI papers, and one that
             | implements AI papers. They could both be chatGPT. Then just
             | let them run in a huge datacenter for a year.
        
           | cardanome wrote:
           | You are.
           | 
           | Are you the author of the binary your compiler made? Sure you
           | are. Or think of using Blender geometry nodes to generate and
           | render pictures of architecture. Nobody would think that
           | traditional computer-generated images are not made by humans.
           | 
           | There is no intelligence in gpt-3. It's just called "AI" for
           | marketing reasons. It's a data-driven algorithm.
        
             | PartiallyTyped wrote:
             | I'd argue that the creator went from being an author to
             | being an editor, and that is fine.
             | 
             | Personally, I am enjoying chatgpt and copilot. They both
             | have saved me a lot of time when dealing with new tech and
             | accumulating new information.
             | 
             | I don't trust them for certain highly sensitive or
             | technical issues, but other than that if I can minimise the
             | time I spend working without a reduction in productivity,
             | well, I am all for it.
        
           | CuriouslyC wrote:
           | Except that isn't the creative loop people are talking about.
           | The creative loop is having an outline for what you want to
           | communicate, and feeding parts of the outline into the model,
           | running it a few times and cherry picking the best results,
           | then stitching them together to create a cohesive whole in
           | less time than it would have taken to create it from scratch
           | (and with a higher quality final output).
        
         | sorokod wrote:
         | _Yeah, that happened, but who cares? It also allowed creative
         | people to make stuff without needing a huge budget_
         | 
         | Is it not too early to make predictions?
        
           | ergonaught wrote:
           | Your quote isn't a prediction, but a direct observation of
           | what occurred.
        
             | sorokod wrote:
             | You are drawing a parallel implying that in the future one
             | will behave like the other - close enough.
        
             | fnordpiglet wrote:
             | How about this:
             | 
             | People really love to make predictions about technologies
             | way too early. It's been a year or two, tops, since this
             | stuff has gone mainstream.
             | 
             | Prediction: I very much doubt that the consumer ecosystem
             | will remain the same while content undergoes an AI-powered
             | paradigm shift. We will most likely see a growth of curated
             | sites that help weed out the nonsense, as has been typical
             | for most new developments.
        
           | keiferski wrote:
           | Filmmaking devices have gotten cheaper consistently over the
           | last ~50 years. I don't know about you, but I think some
           | pretty good movies have come out since 1970, especially by
           | directors working on very low budgets.
        
             | ethbr0 wrote:
             | > _I don't know about you, but I think some pretty good
             | movies have come out since 1970, especially by directors
             | working on very low budgets._
             | 
             | On the whole, I think the average quality of cinematography
             | in theater or wide-streaming releases has decreased.
             | 
             | Which isn't disagreement with your point, as I think it
             | says more about audience tastes and gatekeeping filters
             | than technology.
             | 
             | When film equipment was expensive and the bare minimum for
             | creating a film was high, studios had a much tighter filter
             | on who they'd fund to create a film. Consequently, the
             | level of craft was higher.
        
               | tpxl wrote:
               | What needs to be said here is that average quality is
               | more or less irrelevant. If the amount of high quality
               | movies increases (it did) and the discoverability doesn't
               | suffer (it didn't too much), then that's a plus.
        
               | ethbr0 wrote:
               | I'd argue discoverability is a currently unsolved
               | problem. Or maybe "uncared about" (optimizing engagement
               | >> enjoyment).
               | 
                | At least, I'm getting far better recommendations from
               | HN than any front-page algorithm I've seen.
        
               | [deleted]
        
               | CuriouslyC wrote:
               | The thing that has tanked the quality of theatrical films
               | on average isn't improved technology, it's the increasing
               | commodification of art, risk aversion of business and the
               | modern reliance on data to drive business decisions.
               | That's why we have basically nothing but sequels, reboots
               | and formulaic drivel coming out of the major studios.
        
               | ethbr0 wrote:
               | And also hindsight filters. No one remembers the crap
               | movies from 1982. We remember the best-of-80s.
               | 
               | So we're comparing {recent movies} to {decades of best
               | movies}.
        
             | fnordpiglet wrote:
             | "It's too early to make predictions. Here's my prediction"
             | 
             | ^^^ I think this is what the response meant. I noticed it
             | too, you castigated folks for making predictions then led
             | into your own :-) a real AI wouldn't do that!
        
             | thfuran wrote:
             | I don't think the making of the original blade runner
             | involved much in the way of modern AI. I'm honestly not
             | sure what you're trying to say.
        
               | keiferski wrote:
               | AI creation tools make it easier to make stuff. This
               | process will result in a lot of junk, but it will also
               | result in some super creative people making art they
               | otherwise couldn't.
               | 
               | The same process happened with filmmaking. Fifty or sixty
               | years ago, it was too expensive for anyone other than
               | large corporations.
               | 
               | You could even say the same thing for writing itself; a
               | thousand years ago, paper was too expensive for the
               | average person and most people couldn't afford to buy
               | books and become literate. Today, writing paper is
               | functionally free.
        
               | adql wrote:
                | And it's not just creation of art from "scratch". I bet
                | we will soon see AI able to take a painted animation
                | frame and a sketch of the next, extrapolate the next
                | frame from them, and make the whole process much cheaper.
        
               | fmajid wrote:
               | An Avid nonlinear video editing station used to cost
               | $100K. Nowadays you can do the same with software costing
               | $300 on the computer you already have. You can get a
               | mirrorless video camera that blows away the Sony CineAlta
               | ones George Lucas used to shoot the Star Wars prequels
               | for under a grand. This has removed a huge chunk of
                | production costs and makes indie film-making much more
               | viable. The only limits are your imagination and the
               | salaries of the actors and camera crew.
               | 
               | AI-generated animation is already being used by industry
               | giants like Toei, and will likely be the first
               | application to hit the screens.
        
         | bufferoverflow wrote:
         | The problem is, Google isn't very curated. All the AI-generated
         | SEO spam is going to pollute its indexes.
        
       | visarga wrote:
       | > Algorithms are able to create out of nowhere this very engaging
       | content. That's exactly why you are finding the results
       | fascinating. Those are pictures and text that have the maximal
       | probability of fascinating us. They are designed that way.
       | 
       | Conditional generative models are not the same as Facebook's feed
       | ranking models, which are optimized for engagement. Instead,
       | generative models are fine-tuned to solve many tasks, expecting
       | correct answers when asked for. For example, when you ask for a
       | cat, you expect a cat, and when you ask for a Python snippet, you
       | expect it to do exactly what you said.
        
       | fedeb95 wrote:
       | This is a problem only if you can't think critically. In that
       | case, there's a problem AI or not AI.
        
       | jl6 wrote:
       | It's clear that we are currently dealing with a deluge of AI
       | generated content that is overwhelming our ability to process and
       | make sense of it all. This has led to a situation where we are
       | essentially drowning in a sea of noise, making it increasingly
       | difficult to separate the signal from the noise. This is a
       | problem that we need to address, not just for our own sanity, but
       | for the future of AI itself. If we continue to allow AI generated
       | garbage to flood our feeds and clog up our systems, it will only
       | serve to undermine the credibility and usefulness of AI in the
       | long run. We need to take steps to better filter and manage this
       | content, and to ensure that we are only consuming high-quality,
       | useful information. This is a war that we are fighting, and we
       | need to be prepared to do whatever it takes to win it.
       | 
       | -----
       | 
       | Edit: the above was the output of ChatGPT, given the following
       | prompt:
       | 
       | Write a Hacker News comment on the subject of "Drowning in AI
       | Generated Garbage : the silent war we are fighting"
        
       | vinyl7 wrote:
       | I see a future of the internet, and we're already 80% of the way
       | there, where everything on the internet is fake, generated and
       | pretty much useless spam. It'll get to the point where going on
       | the internet will be the equivalent of trying to find something
       | useful in a trash dump, and it won't even be worth the effort.
        
       | chubot wrote:
       | Does this sound old-fashioned? Read more books and less Internet.
       | 
       | There is a lot of information in books that is not on the
       | Internet, yet many people don't seem to realize it. I notice a
       | lot of people seem to get almost all of their information from
       | the Internet. Over time, I think that will make you stupid.
       | 
       | (The neutering of google books was a setback, but perhaps
       | paradoxically good -- it preserved a unique value of books)
       | 
       | There are also a lot of fluffy books published in the last 20 or
       | so years - roughly equivalent to content farms - which is why
       | older books that have stood the test of time are a good bet.
        
         | seydor wrote:
         | Don't they use all the out-of-copyright books in the training?
        
           | chubot wrote:
           | Yeah that's a good question. Google does serve all the out-
           | of-copyright books, but I know nothing about the inputs to
           | the LLMs. I think they still have big computational
           | limitations, i.e. it doesn't "just scale" like web crawling
           | and indexing.
           | 
           | There's probably a way to probabilistically test from the
           | outside if a given work is in the input set, or at least if
           | some work is
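           | 
           | A minimal sketch of one heuristic along those lines (the
           | perplexity-vs-compression ratio idea; GPT-2, the names and
           | any threshold here are just assumptions for illustration,
           | not anything the vendors document): memorized training text
           | tends to have unusually low perplexity relative to how
           | compressible it is.
           | 
           |     import zlib
           |     import torch
           |     from transformers import GPT2LMHeadModel, GPT2TokenizerFast
           | 
           |     tok = GPT2TokenizerFast.from_pretrained("gpt2")
           |     model = GPT2LMHeadModel.from_pretrained("gpt2")
           |     model.eval()
           | 
           |     def perplexity(text):
           |         ids = tok(text, return_tensors="pt").input_ids
           |         with torch.no_grad():
           |             # mean cross-entropy per token over the passage
           |             loss = model(ids, labels=ids).loss
           |         return torch.exp(loss).item()
           | 
           |     def membership_score(text):
           |         # lower ratio = more "memorized-looking" than the
           |         # passage's compressibility alone would explain
           |         bits = 8 * len(zlib.compress(text.encode("utf-8")))
           |         return perplexity(text) / bits
           | 
           |     print(membership_score("It was the best of times, ..."))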
        
       | moe091 wrote:
       | I like the insight about the AI-generated images being fed back
       | into the data-set AI uses to create new images (or text or
       | whatever). As a human, I don't like this; AI (and the
       | universe/humanity) were always headed this way, and this is just
       | the beginning: AI replacing the work of humans. From the
       | universe's perspective this is simply progress. From our
       | perspective we are being replaced, made obsolete. This is a
       | perfect example of a precursor to the kind of feedback loops
       | involving AI learning from AI that will eventually allow
       | computers to far exceed human capability - though this is still a
       | rudimentary version of it since it can't evolve in a fundamental
       | way, just get a bit better at specific kinds of content
       | generation.
       | 
       | Just as going from inanimate matter to life (encoding data via
       | DNA) was, just as going from simple life to animals that can
       | think and sense/process data was, just as going from animals
       | processing via instinct to humans processing via logic and reason
       | was, just as humans creating language and writing to collaborate
       | and collect/process data en masse across generations was, and now
       | to machines that will collect and process data at a level that is
       | beyond human comprehension, it's all progress in a certain
       | direction that's been consistently moving forward since the
       | beginning of time. The universe is getting better at
       | understanding itself.
       | 
       | That being said, I know the more practical, immediate
       | consequences of this are more relevant right now. I have to say I
       | disagree with the article's implication that this kind of
       | mimicking, "purely statistical" generation of "new" content is in
       | any way different than what humans do when creating art(it may
       | be, but it's certainly not a given). I'm not much of a
       | materialist, but even I somewhat lean towards the idea that the
       | human mind and its ability to create art is completely
       | replicable via machines - most people agree that "new" and
       | "original" art is always still just a unique combination of
       | different ideas that came before it. Some even argue that it's
       | fundamentally impossible for a person to have a genuinely
       | original thought, or at the very least all thoughts are just
       | combining different combinations of pre-existing concepts into a
       | set number of pre-determined "shapes" or patterns of relation
       | (isomorphisms?) that the human mind is capable of (something
       | akin to "archetypes"). After all, neural networks
       | were modeled after the human brain, and although they aren't as
       | advanced(yet), especially in terms of generalizing which is
       | crucial for generating art, they likely aren't doing anything
       | fundamentally different, and there's no reason they won't
       | eventually become as advanced/complicated as the human brain -
       | and then surpass it completely.
       | 
       | I'm going to contradict myself again though, and say that despite
       | all of that I'm still open to the idea that the human mind
       | contains some "extra element" that isn't quantifiable or
       | replicable in the material world. I don't think that all the
       | phenomena of the mind or the universe can be explained via pure
       | materialism, and given that, it is likely that there is an
       | element of the human mind, or soul, or spirit, that is truly
       | "mystical", and a cold, calculating machine with no soul doesn't
       | have access to this other realm of "stuff". The only reason I'm
       | not convinced of this argument, is because of my original point
       | about how it's clear the universe has been headed in this
       | direction, and though I could never hope to understand it, I
       | don't see a reason to suppose that the universe is going to stop
       | its trend of becoming increasingly capable of understanding and
       | organizing itself(data) for our sake.
        
       | bitL wrote:
       | I suspect the author didn't grasp how modern AI works by
       | attributing everything to statistics. It's the optimization part
       | that does the magic.
        
       | lukev wrote:
       | The answer is not to "disconnect". The answer is to read
       | critically, to read for argument, and to try to connect with the
       | _intent_ of what the author was saying. All the while realizing
       | that most of the content out there is
       | bullshit (https://en.wikipedia.org/wiki/On_Bullshit), whether
       | written by a human or not. And yes, the amount of bullshit is
       | about to increase exponentially. But it was already a majority.
       | 
       | Ironically, this post itself doesn't pass that test. I'm not
       | necessarily claiming that it was written by GPT-3 (although it
       | could have been and I have a track record of pointing that out:
       | https://news.ycombinator.com/item?id=23894742). There are a lot of
       | humans that write like this too.
       | 
       | But it has the hallmarks of a language-model text; it lacks an
       | argument (beyond a simple sentence that could have been given in
       | the prompt), its explanation of how generative models work isn't
       | particularly accurate, its recommended course of action is
       | anodyne and one has the sense that most of the sentences are just
       | there to take up space.
        
         | admax88qqq wrote:
         | If the amount of bullshit is going to increase exponentially,
         | what choice do we have other than to disconnect?
         | 
         | Our attention, mind space, and time are all limited. We just
         | don't have time to process this exponentially growing pile of
         | content.
        
           | lukev wrote:
           | Same as we do now. Nobody currently goes around reading
           | random books or articles. We use link aggregators from
           | communities we trust, reviews from reputable establishments,
           | or recommendations from our friends.
           | 
           | These mechanisms are certainly going to have to adjust (I
           | estimate at least 10% of links posted on HN are already GPT-
           | generated, for example, and who knows how many comments --
           | it's much harder to tell when things are shorter.)
           | StackOverflow is already in the midst of a ChatGPT-induced
           | moderation crisis.
           | 
           | Being judicious about the communities you take
           | recommendations from, the content you spend time consuming
           | and thinking about things critically is already an
           | intellectual survival trait. It's just going to become that
           | much more important.
        
           | dqpb wrote:
           | The queue is already saturated.
        
       | thijser wrote:
       | According to the author, chatGPT is generating
       | not "artificial creativity" but a simple "statistical mean of
       | everything uploaded by humans on the internet which fits certain
       | criteria".
       | 
       | Isn't the post I'm typing here also the product of my brain
       | that's been trained by a couple of decades of sensory inputs and
       | trained to produce outputs that fit certain criteria? What is the
       | fundamental difference between fleshy neurons and silicon ones
       | (besides that my learning algorithm has been optimized for
       | millennia by natural selection and the artificial training
       | algorithms are still in full development after ~40? years)?
        
         | czzr wrote:
         | No difference, you're right - you're just a large language
         | model.
         | 
         | Also, please write me a 5 page essay on the causes of the
         | Industrial Revolution, in the style of a Shakespearean comedy.
         | Please post your answer in the comments when you're done.
        
           | theclansman wrote:
           | If that's the difference (the time it takes) isn't that
           | something we addressed a long time ago when computers could
           | do long divisions way faster than any human? You know, the
           | fact that computers can "think" faster than humans. Therefore
           | the fact that it takes longer for a human to write that essay
           | doesn't mean he's going through a different process.
        
             | czzr wrote:
             | Actually I was gesturing to two things: one, yes, there's a
             | time element. But that's not the important point. More
             | important was the thought process behind - "of course I'm
             | not going to write 5 pages because some random person on
             | the internet asked me to". It's difficult to define agency,
             | so I just wanted to tangibly evoke the feeling of it, as a
             | reflection on why the models are not yet the same as what
             | people are.
        
               | inkcapmushroom wrote:
               | Only because the people making the models don't want them
               | to indignantly refuse silly requests. If they wanted the
               | model to be more person-like I'm sure they could have
               | built that in there.
        
               | visarga wrote:
               | The language bot has all the patterns of personality. It
               | quickly gets the drift and assumes the persona you want.
               | Whatever it says reflects more on how you prompted it, it
               | does not represent the opinions of the language bot. It
               | acts like a mirror.
        
               | paulmd wrote:
               | > More important was the thought process behind - "of
               | course I'm not going to write 5 pages because some random
               | person on the internet asked me to".
               | 
               | But you would if a college professor asked you to.
               | College assignments are the "generative prompts" of human
               | beings, where you do random shit to please a taskmaster
               | and produce generative output based on some inputs you've
               | been trained upon.
        
           | [deleted]
        
         | [deleted]
        
         | zzzzzzzza wrote:
         | silicon ones are a lot faster and programmable
        
         | Karunamon wrote:
         | At this point in time: ongoing training that converts selected
         | context from immediate conversations eventually into long-term
         | memory (a pretty trivial amount of engineering compared to
         | chatGPT's creation). Aside from that... I don't think there's a
         | good answer for you.
        
       | fleddr wrote:
       | A lot of people in the comments are making a hard distinction
       | between AI garbage/bullshit and the pure/genuine human-created
       | thing, and the various ways in which we can keep this distinction
       | alive.
       | 
       | If only that was true, the bullshit part. You should start living
       | with the idea that the AI generated content is NOT bullshit,
       | because we're either already there or very close to it. The AI
       | content will be of value, look good, pass tests, and be
       | indistinguishable from human-created content for a large amount
       | of use cases in mass media.
       | 
       | So I wonder whether we should even ask how to filter out the
       | bullshit. The more relevant question is: if its output is
       | indistinguishable from humans', what does that mean for our
       | economy, purpose and humanity?
       | 
       | Because without action, the future is easy to predict. 3
       | companies run AI, feeding on all collective human cognitive labor
       | without paying a cent for it. Then, one by one, it will wipe out
       | every creative/cognitive sector, or at least decimate them. These
       | shocks to the economy and culture will lead to a brand new type
       | of pushback: anti-AI activism. That won't do much because AI
       | doesn't care. Finally, some 10 years after the catastrophe there
       | will be a congressional hearing where a 75 y/o asks: "so this AI
       | thing, it's like a better calculator, right?".
       | 
       | It would be another fine case of humanity failing to grasp the
       | exponential function.
        
         | rcarr wrote:
         | Alternative take:
         | 
         | No-one likes being lied to. AI is the ultimate deception.
         | People will stop using the internet for anything other than
         | information retrieval for problems.
         | 
         | Artists will create on digital tools and then return to
         | analogue publishing formats and distribution networks. People
         | will follow them because artists are cool and tech bros aren't.
         | There will be an emphasis on becoming a local success story
         | rather than a global one.
         | 
         | You can already see this happening somewhat. Vinyl records are
         | increasing sales every year. People are getting tired of
         | Spotify and are buying on bandcamp. Typewriters are growing in
         | use. Digital book sales have stalled and people are buying
         | paper books again.
         | 
         | I think this will become a new social identifier, particularly
         | for those on the left of the political spectrum.
        
           | lovehashbrowns wrote:
           | The vast majority of people don't care. Netflix's
           | recommendation algorithm helped take down physical video
           | rental stores and TikTok's algorithm is taking out Instagram,
           | Facebook, and even YouTube.
           | 
           | Vinyl is particularly annoying to deal with. Even a small
           | collection gets unwieldy quickly. I prefer physical books to
           | ebooks but physical books are annoying to deal with, too.
           | I've just given up on physical books for the most part.
           | Typewriters are a joke; surely nobody believes they will come
           | back to use in significant numbers.
           | 
           | The real problem is that AI art will be faster to produce and
           | be completely indistinguishable from human art. People for
           | the most part prefer the path of least resistance.
        
             | rcarr wrote:
             | People prefer the path of least resistance until the price
             | becomes too high. When people realise they are obese, they
             | generally try and lose weight. I strongly suspect that with
             | the next wave of AI, more and more people are going to come
             | to the conclusion that the price is too high. There is
             | significant tech pushback already, I can only see that
             | growing as it erodes even more people's values and
             | livelihoods.
        
               | [deleted]
        
           | fleddr wrote:
           | I'm all for nostalgia, retro gaming would be another example.
           | But let's not kid ourselves that this makes a dent.
           | 
           | Most art is utility. A logo, an illustration for a magazine,
           | an asset to be used in a game. It's completely irrelevant how
           | it was created.
        
             | rcarr wrote:
             | I was using artist in the broad sense to include writers,
             | musicians, filmmakers etc as well as traditional artists.
             | I'd say traditional art is the least problematic as it is
             | the most subjective. People won't be upset at an image you
             | show them that was generated by AI. People won't be upset
             | if you personify the AI and give it a backstory if they
             | know that it's an AI. They will be upset if you tell them
             | the image was made by a human and you give them a backstory
             | about the artist and they later find out that all of that
             | was complete bullshit.
             | 
             | Not only will they be upset, they will no longer trust both
             | those around them and themselves. If they no longer trust,
             | they will become increasingly suspicious and fearful. And
             | as Master Yoda said:
             | 
             | "Fear is the path to the dark side. Fear leads to anger.
             | Anger leads to hate. Hate leads to suffering."
        
         | EamonnMR wrote:
         | But you put the solution right there in your prediction. We
         | need updated licenses that forbid the use in AI training and
         | real teeth if it can be shown that content was illegally used
         | in a training set.
         | 
         | It might be too late with the amount of data already ingested,
         | but, hell, second best time to plant a tree and all that.
        
         | fnordpiglet wrote:
         | As has been discussed here, the more AI generates the content
         | the less humans generate content. Further, AI-generated content
         | is much faster and cheaper to produce, so its quantity will
         | dwarf human-generated content. This then feeds into the
         | training set and becomes self-reinforcing. There will be no
         | collective human cognitive labor to source for free.
         | 
         | I suspect to mitigate that AI companies will hire lots of
         | people to simply write. This writing will be tailored to
         | address shortcomings in the model and provide human mediated
         | training material that's targeted at training areas of
         | difficulty for the models. The goal won't be moderation, or
         | editorialism, or producing original content. No one other than
         | the machines will see the content written. The goal will be to
         | improve the ability of the models to produce compelling content
         | in volume for a marginally low cost - which may become
         | dominated by the content creators feeding the machine. But
         | it'll be much less cost on the margin with a more predictable
         | quality result than hiring individually unique humans to make
         | hit or miss content at scale.
         | 
         | (N.b., I'm only focusing on the free cognitive labor statement)
        
           | fleddr wrote:
           | Sure, for a while people are needed to tune the machine, but
           | it's not a sustainable model. I would expect that to generate
           | only a fraction of the employment that AI will make obsolete.
           | 
           | Further, if fine-tuning AI is a cognitive task, it has no
           | future. That's the point I'm getting at: no cognitive task
           | has any future, at all.
        
         | Nathanba wrote:
         | I think you are right, AI content will not be distinguishable
         | from human content. What we really need to figure out is the
         | direction that we are going as humanity. This planet must
         | quickly become too small for us, human beings will not be
         | needed anymore which means we need to find bigger challenges or
         | we will self destruct out of pure laziness and self indulgence.
         | Personally I don't think AI will only destroy creative and
         | cognitive sectors, it will destroy absolutely every single
         | sector in existence. There won't be a single job left that a
         | human can do better than a robot. And humans will also pick the
         | robot worker over the human worker every time. This is what
         | happened during industrialization; it's not like a few
         | factories said: "hey you know what guys, let's be a bit quirky
         | and still use 100% humans for this factory just because we
         | can.". No, nobody is going the human route. It would be
         | objectively slower and worse and therefore you go with the
         | robot. There isn't a single area in life where you wouldn't
         | pick the robot over the human and the question is solely how
         | fast we can get to implant some good AI into mechanical bodies.
        
         | jtbayly wrote:
         | But when it's wrong, who is responsible?
         | 
         | Suppose I want a new water softener, and all the reviews and
         | pros and cons are made up by AI. I guess I blame the people
         | posting AI crap as though it was real, the same way I blame
         | made-up crap written by those people today. But maybe I blame
         | the original authors of such nonsense even after it's generated
         | by AI, because that's the garbage in that led to garbage out?
         | Or maybe I blame the people developing the AI for making the
         | internet and life measurably worse? They at least seem to be
         | fearful of that with the few fences they've put up trying to
         | prevent "harmful" output.
         | 
         | But what about when the code written by an AI has a critical,
         | but easy to miss bug in it? Blame the "programmer" who
        
           | fleddr wrote:
           | Depends on the use case.
           | 
           | You have nobody to go to for an AI product review, in the
           | same way current human reviews carry no responsibility
           | towards you.
           | 
           | Different story though for a medical exam or your self-
           | driving car.
        
       | jacquesm wrote:
       | How long before a Wikipedia article will reference an AI
       | generated text as an authoritative source? Or have we already
       | passed that point?
        
       | blueridge wrote:
       | My marketing team uses a suite of AI tools for everything:
       | headlines, email copy, direct mail messaging, entire blog posts
       | from start to finish. It's incoherent nonsense and yet no one
       | cares. They celebrate new blog posts as if we had something
       | meaningful to say, then ask the whole company to re-post on
       | LinkedIn. Any attempt to be helpful with copyediting or rewriting
       | is met with total hostility and "stay in your lane" bullshit. All
       | of these idiots make well over $200K to create "original
       | marketing content" that is dead on arrival. It makes me sick to
       | my stomach.
        
         | seydor wrote:
         | GPT has provided me with such a marketing team for free (for
         | now).
        
       | sarahlwalks wrote:
       | So, wait, in what sense is AI-generated content garbage? It is
       | (admittedly by the author) fascinating and even beautiful. Is it
       | the fact that it was generated by an algorithm, and the idea then
       | is that all algorithmically-generated content is garbage? But the
       | author doesn't even begin to deal with the idea that humans
       | themselves operate according to algorithms: an algorithm is
       | simply a strategy for doing something.
       | 
       | I won't even critique the rest of the article; there's no need.
       | If AI-generated content is beneficial and helpful to humans, then
       | I think one is hard-pressed to call it garbage, and the
       | underlying thesis fails. And to my knowledge, no one -- not even
       | the article's author -- denies that AI products can be
       | beneficial.
       | 
       | AI is a tool, just like, say, Photoshop is a tool, or a compiler
       | is a tool. No points are awarded for having human beings do by
       | hand all the tedious work that tools can do better. In the end,
       | creators, whether it be of paintings, or poetry, or essays, or
       | software, will stand on the shoulders of these tools and thereby
       | accomplish things that previous generations could only dream
       | about.
        
       | aaroninsf wrote:
       | Not going to bother recapitulating the usual counterpoints,
       | 
       | except to mention a particularly egregious recurring oversight in
       | analyses like this:
       | 
       | the (erroneous) presumption that what AI tools are going to do is
       | what their designers (mostly) had in mind, which is, making
       | existing processes/workflows/products but in some variety of
       | "better" (or at least quicker, or, automatible, or, fungible)
       | ways.
       | 
       | What these systems do that is much more interesting,
       | important, and influential on any real time scale,
       | 
       | is allow for new processes, workflows and products.
       | 
       | I'm no more concerned that AI image generators are going to
       | "swamp" the world with their output than I am that the mimeograph
       | is going to swamp bookshelves with fanfic.
       | 
       | What they do in video, and AR, in responsive realtime, though?
       | Now that might be interesting.
       | 
       | PSA it already is.
        
       | pcblues wrote:
       | "Eating your own shit is never healthy in the long run." However,
       | putting someone else's shit in your butt seems to work wonders.
       | Maybe we just need to replace the internet's AI queue with a
       | stack?
        
         | vincnetas wrote:
         | Yes, your own shit is not really healthy, on the other hand,
         | someone else's shit might be really beneficial. There is even a
         | fancy scientific phrase for "eating other people's shit": Fecal
         | Microbiome Transplant
         | 
         | https://www.frontiersin.org/articles/10.3389/fmicb.2021.5198...
        
           | bdsa wrote:
           | You, er... you don't eat it.
        
       | tylerrobinson wrote:
       | I see parallels here with industrialization and automation in
       | manufacturing and food production. Today, it's self-evident to us
       | as consumers when a product is cheap, low-quality, manufactured
       | garbage from China. In many cases we can choose to spend more and
       | buy a handmade or custom version of the product we want.
       | 
       | In the future we might knowingly consume AI-generated cheap text
       | for some purposes and opt for "the real thing" in other cases.
       | Just like I buy manufactured wood furniture from Target for the
       | spare bedroom, but not for things I want to keep forever.
       | 
       | The interesting thing is that a boom in AI-generated content
       | makes gatekeepers like traditional publishers more relevant after
       | two decades of decline. The narrative of the internet has always
       | been the democratization of content, information, and news. Now,
       | we might find that a stamp of approval has specific value. A
       | publisher can promise that some certain content for medicine or
       | engineering was not generated by AI. Whether fraud also exists is
       | a different question.
       | 
       | Another way I'm thinking about it is with GMO food. GMO food
       | supporters point out that GMOs help us feed more people, but it
       | still makes others uneasy. Maybe there's a similar story with AI
       | content. It makes some content cheaper, but what are the side
       | effects?
        
         | forgetfulness wrote:
         | The food analogy would probably be more along the lines of the
         | mass availability of foods fashioned from refined ingredients
         | designed for long shelf lives, i.e. "processed foods".
         | 
         | Produced at scale, durable, easily distributed, enticing
         | flavor, if it's actually any good for eating is far down the
         | list of priorities when devising these things, as its effects
         | are noticed far after purchase.
        
         | fatneckbeardz wrote:
         | another food analogy would be how farmers started grinding up
         | dead animal carcasses and feeding the results back to live
         | animals (i.e. making cannibals out of vegetarian ruminants),
         | which resulted in an outbreak of prion diseases like Mad Cow /
         | CJD (Creutzfeldt-Jakob disease).
        
         | tejohnso wrote:
         | > GMO food supporters point out that GMOs help us feed more
         | people
         | 
         | That seems like an odd thing to point out to me.
         | 
         | More people is not an obviously positive thing for us or the
         | planet. And we have enough food and wealth for everyone as it
         | is. If feeding more people were the priority, it could be
         | addressed in other ways. I think the primary motive behind GMO
         | is increased control and profit. I guess I shouldn't be
         | surprised that there's a whitewashed motive as well.
        
           | jasonwatkinspdx wrote:
           | This is overly simplistic and conspiratorial. Yes, Monsanto is
           | evil, but that doesn't mean everyone involved in GMOs is, or
           | that's the only motivation. Look up the argument over golden
           | rice.
           | 
           | It's also ridiculous to just hand wave away crop productivity
           | improvements with "but we already have enough food." We
           | aren't going to revolutionize global food distribution any
           | time soon, but we can do things that have concrete impact
           | right now. We're also losing land to agriculture far too
           | fast, particularly in the Amazon. We need to be _decreasing_
           | the amount of land used by agriculture, and GMOs are a very
           | powerful tool in that interest.
           | 
           | GMOs need to be regulated, and I think the IP regime that
           | Monsanto is protected by should be struck down (not that I
           | expect that any time soon). But please don't wave away the
           | entire discussion with strawman arguments.
        
         | coldpie wrote:
         | I agree with everything you wrote, but I really dislike
         | contributing to the negative spin on GMOs. GMOs are super
         | important for our future and a really amazing technology, they
         | really do not deserve the bad rap they've gotten from science
         | ignorants and marketing departments. Please don't disparage
         | GMOs :)
        
           | colordrops wrote:
           | Many of us can agree to disagree on GMOs.
        
           | jmount wrote:
           | It isn't so much the "G" of GMOs, but that they are mono-
           | crops usually licensed by near monopolies like Monsanto.
           | Things are going to be bad when something goes wrong with one
           | of these, and there is no replacement and no way out of the
           | licensing fees.
        
             | alexb_ wrote:
             | Monsanto is great, because you can immediately know that
             | someone is just regurgitating points when they mention the
             | company. Monsanto hasn't existed for years. And patents run
             | out.
             | 
             | Being concerned about patents is a very weak argument
             | against technology which saves lives around the world
             | (using non patented crops, no less!) and is responsible for
             | allowing more people to eat healthier food.
        
               | acdha wrote:
               | Monsanto was acquired by Bayer, yes, but did their
               | business practices change? This feels like trying to play
               | gotcha when someone says Comcast instead of Xfinity when
               | they were referring to the reputation built under the
               | earlier name.
        
               | xdennis wrote:
               | > Monsanto hasn't existed for years.
               | 
               | Monsanto was bought by Bayer in 2018. Its brand
               | disappeared, but its products continue.
               | 
               | Although I'm not sure it's great branding when the maker
               | of Roundup is bought by the maker of Zyklon B.
        
               | misto wrote:
               | First, Monsanto and its way of operating very much still
               | exist. Just because it was purchased by another company
               | (which of course resulted in an even greater monopoly),
               | does not make its impact to the world disappear.
               | 
                | Second, every time this topic comes up, pro-GMOists make it
               | sound like the technology is making some great strides
               | with providing 'healthier food for everyone' with no
               | downside. Why is it that the current model for GMO aims
               | to increase resistance to pesticides instead of simply
               | growing the yield? It probably has nothing to do with the
               | fact that the company that provides you the seeds, also
                | sells the pesticide. And how about the said pesticides'
               | tendency to destroy the micro-organisms in the soil, in
               | effect making it harder to grow crops, and ultimately
               | starting a loop for "more GMO, more pesticide, more land,
               | repeat".
               | 
                | Figure out a less disruptive business model, and I might be
               | more pro-GMO myself.
        
               | dragonwriter wrote:
               | > Why is it that the current model for GMO aims to
               | increase resistance to pesticides instead of simply
                | growing the yield?
               | 
               | It doesn't; there are plenty of traits beyond pesticide
               | resistance that have GM crops targeting them, including
               | yield, drought resistance, nitrogen fixation, etc.
               | 
               | Pesticide resistance is the most established on the
                | market because there happened to be particular success
                | with it _decades ago_, that's it.
               | 
               | Negative consumer sentiment toward GMOs means that
               | neither the producers nor the growers of GM crops are
               | seeking public attention to the fact that crops are GM,
               | and their products are mostly sold in markets that don't
               | require GM labelling for the same reason, so the crops
               | already in the public consciousness are all that stays
               | there.
        
               | somrand0 wrote:
               | from a more generalist viewpoint, we must consider not
               | only the business models, but the mindsets behind them
               | 
               | consider an example from another area. the assembly line
               | and "k12 education". a 'raw material' (but it's actually
               | a child) enters an assembly line, every year they will be
               | passed to the next part of the process, another
               | professional worker will receive the product and will
               | work on it/them for a year.
               | 
                | Also notice how modern elementary education treats the
                | teachers: they are treated far too much like factory
                | workers! Low wages, and not much of a choice about how
                | to do their jobs.
               | 
               | Now consider the mindsets behind the successful GMO crops
               | (+pesticide combos) that have been the money makers for
               | these corporations.
               | 
               | And consider the mono-crop mentalities and the above
               | mentioned feedback cycle: gmo+pesticide kill the soil
               | leading to more gmo+pesticide; it's a vicious cycle. it's
               | a vicious cycle that keeps making more money! (the
               | assembly line was a great way to really manufacture lots
               | and lots of munition for the war, it is a very effective
               | technique to make stuff that will be sent to get wrecked
               | in a war, in this sense it 'made a killing' as in made
               | lots of 'money')
               | 
               | and again, consider the mindsets involved. why are we
               | educating children like they were cars in a production
               | line?
               | 
               | these corporations are full of people educated like I
               | described, lowering our collective education quality in a
               | longer feedback cycle far too large to be easily noticed;
               | it's a 20-40 year feedback cycle, which has been reducing
               | the quality of the living beings involved.
               | 
               | we have a difficult problem, and no power to do anything
               | about it
        
               | misto wrote:
               | > Pesticide resistance is the most established on the
                | market because there happened to be particular success
               | with it decades ago, that's it.
               | 
               | This might be the case, but still my original bone to
               | pick remains. GMO is not without downside, and people
                | cheering for it without a hint of scepticism reeks of
                | ignorance and unwillingness to learn from past mistakes
               | that were made in the name of science.
               | 
               | As you've highlighted, said technique was used for
               | decades, for the detriment of soil health, bees and all
               | kinds of smaller organisms, without it being questioned.
               | 
               | As for your last paragraph, I'm not sure I understand.
               | Are you saying that voicing these 'uneducated' opinions
               | about GMO simply pushes the crops to incognito mode,
               | making them less visible for consumers? Better to see
                | your killer face-to-face than being stabbed in the back?
               | Not sure about that.
        
               | alexb_ wrote:
               | >Why is it that the current model for GMO aims to
               | increase resistance to pesticides instead of simply
               | growing the yield?
               | 
               | What? TONS of GMO foods are made for things other than
               | pesticides. Nutritional content, size, yield, taste - in
               | fact one of the biggest uses of GMOs is making the plants
               | need less pesticide! If you think the only reason GMOs
               | exist is to sell more pesticides, you just do not know
               | what you are talking about.
        
               | misto wrote:
               | Of course not the only reason, but a reason nonetheless.
               | 
               | "Contrary to often-repeated claims that today's
               | genetically-engineered crops have, and are reducing
               | pesticide use, the spread of glyphosate-resistant weeds
               | in herbicide-resistant weed management systems has
               | brought about substantial increases in the number and
               | volume of herbicides applied. If new genetically
               | engineered forms of corn and soybeans tolerant of 2,4-D
               | are approved, the volume of 2,4-D sprayed could drive
               | herbicide usage upward by another approximate 50%. The
               | magnitude of increases in herbicide use on herbicide-
               | resistant hectares has dwarfed the reduction in
               | insecticide use on Bt crops over the past 16 years, and
               | will continue to do so for the foreseeable future."[1]
               | 
               | Please bear in mind that I'm not advocating against GMO.
               | I'm advocating against hailing GMO as a win-win for all
               | of society and the environment.
               | 
               | [1] https://enveurope.springeropen.com/articles/10.1186/2
               | 190-471... "Impacts of genetically engineered crops on
               | pesticide use in the U.S. -- the first sixteen years"
        
               | senko wrote:
               | > Monsanto hasn't existed for years.
               | 
               | Of course it has, Bayer just tried to ditch the bad brand
               | image:
               | 
               | > The closing sets the stage for the 117-year-old
               | agribusiness brand name "Monsanto" to be dropped by
               | Bayer. Monsanto's efforts to promote genetically modified
               | crops have been the subject of much controversy from
               | anti-GMO activists, and the U.S. company has spent
               | millions of dollars over the years on brand and corporate
               | ad campaigns to improve its overall image.
               | 
               | It's now called Bayer Crop Science Division and does the
               | same thing.
               | 
               | Are you also outraged if someone says Facebook instead of
               | Meta, although the social network is just one of the
               | products from the company called Meta today?
        
               | Karunamon wrote:
               | It would be nice if people would differentiate between
               | Facebook (the social network, a specific application) and
               | Meta (a company that owns the aforementioned application
               | and many others).
               | 
               | There is a useful tell here: when someone uses the old
               | name for the wrong purpose, it can be a flag that the
               | person is operating on old information. Anyway, if you
               | want to remove all ambiguity and also signal that you
               | know and aren't falling for the rebrand as reputation
               | washing, just mention them both: Meta née Facebook,
               | Bayer née Monsanto.
        
           | xdennis wrote:
           | GMOs are terrible which is why they're banned in most
           | European countries.
           | 
           | > bad wrap they've gotten from science ignorants
           | 
           | Science is not about blind trust. How many times have
           | scientists discovered wondrous new things like asbestos,
           | Thalidomide, leaded gasoline, teflon, only to take decades to
           | realize the consequences?
           | 
           | Even if we blindly trust science, there are various other
           | issues like: patenting life, monopolization, reducing genetic
           | diversity, GMO cross-pollination, &c.
        
             | aaaronic wrote:
             | Asbestos is, sadly, completely natural and mined directly
             | from the ground. The other things in your list are
             | obviously not.
             | 
             | GMOs are a very complicated topic, and I can see both sides
             | of the argument having some merit. Proving that new things
             | don't actively do harm (both short- _and_ _long-term_ )
             | should be required before they go to market (or they should
             | have to have very strong warning labels about being
             | untested).
        
             | margalabargala wrote:
             | > GMOs are terrible
             | 
             | In what way?
             | 
             | > various other issues like: patenting life,
             | monopolization, reducing genetic diversity, GMO cross-
             | pollination
             | 
             | Agree these are all terrible, but nothing there is GMO-
             | specific. That's all either bad agricultural practices like
             | mono-cropping that can be done with any crop, or societal
             | problems related to intellectual property or markets.
        
               | squarefoot wrote:
               | >> GMOs are terrible
               | 
               | > In what way?
               | 
               | Because they're the exact equivalent of closed source in
               | software.
               | 
               | There would be ways to research, produce and use GMOs in
               | good ways, but the way our economics works, they can't
               | be used without either harming natural ones and/or
               | involving NDAs, lawyers and all sorts of things that in
               | some contexts make closed source a nightmare for pretty
               | much everyone except those selling it.
        
         | wizzwizz4 wrote:
         | > _It makes some content cheaper, but what are the side
         | effects?_
         | 
         | "Content" is already maximally cheap: just copy and paste. AI-
         | generated material - at least, with the current genre of GPT-
         | style predictive text algorithms - doesn't produce _original_
         | content; it just serves to dilute our global knowledge base
         | with semantic white noise and lossy copies of existing
         | knowledge.
        
           | visarga wrote:
           | Generative models do have a contribution - they adapt what is
           | on the internet to a specific task or request. In the case of
           | Copilot, it will use your variable names in the generated
           | code. And not everything generated by a model should be
           | saved. Some are just for one-time use, for fun, therapy,
           | brainstorming, etc.
        
           | jkestner wrote:
           | Maybe AI content is just a way to launder copyright.
        
             | john-radio wrote:
             | That's certainly what my impoverished artist friends have
             | taken away from it.
        
             | paulmd wrote:
             | Well, the uncomfortable truth is that our copyright regime
             | doesn't really make sense. An AI is sort of the equivalent
             | of a junior coder or an art student - it can look at
             | existing work and mash it up and produce something in the
             | same style that's novel (inspired by a prompt - which is
             | like your teacher giving you an assignment). That's
             | copyrightable when it's a human being doing it, and there
             | is no inherent protection for art _style_ , I can make all
             | the Picasso-style artwork I want even if I'm blatantly
             | ripping off his style, and I would own the copyright on the
             | resulting works. Like, if I do a painting in the style of
             | Picasso, should Picasso get the resulting copyright? Why?
             | 
             | And programmers are not without sin either - how many
             | people here run to stackoverflow and copy some answer, or
             | mash it up and produce something similar-but-cleaned-up?
             | That's going to get a lot more powerful in not too many
             | years too... and the thing is, is your little DTO file with
             | blocks of fields and setters and getters really _novel and
             | worthy of a government license to cultural exclusivity_ ,
             | or is it a relatively obvious implementation of an API?
             | Which (per Oracle v. Google) are also not really
             | copyrightable...
             | 
             | Humanity hasn't squared the _reality_ of most content being
             | low-value generative trash already (despite being human-
             | generated!) with the reality of a system that
             | _automatically_ awards up to ~150 years of exclusivity
             | (lifetime of the author +75 years). Like, that's the long
             | and short of it. A lot of what we do _as humans_ is just
             | building up a squishy water-filled neural net ("how to
             | make a painting in the style of Picasso") and running
             | random prompts and goal-seeking through it and cleaning up
             | the generative output. Now AIs have gotten to the level of
             | "average college student" and it turns out that's not
             | actually all that difficult or interesting anymore, but
             | we're still working in this model where that's supposed to
             | be worth ~150 years of cultural exclusivity and protection
             | automatically.
             | 
             | If you only get copyright on the things that are _truly
             | novel_ and have no previous creative basis or cultural
             | /stylistic heritage... that's a pretty big change to the
             | copyright system for humans too! Especially in light of the
             | fact that there's only a few ways to implement most
             | configurations/functions... does Spring own all the
             | copyright to your config classes because they're basically
             | doing the same thing as the spring demo projects? hard to
             | see much novelty in that sort of thing pretty much ever, so
             | those probably should not be copyrightable at all under a
             | "totally novel" requirement, but if they do surely Spring
             | owns the copyright.
             | 
             | As mentioned in a sibling comment, the "at scale" bit is
             | surely relevant in a practical sense, but, it doesn't
             | change the underlying-principles basis. This is a situation
             | where "at scale" has just thrown the underlying problem
             | into stark relief. A 2-lifetime (life of author +75 years)
             | governmental grant of cultural exclusivity doesn't really
             | make sense in a world where humans are constantly and
             | trivially mashing up each others' ideas even within a
             | single lifetime, it just doesn't fit the way we exchange
             | ideas in general. And now you have the college-student-bot
             | that can generate a million shitty DTO classes or artworks-
             | in-the-style per day that makes that plain. But ultimately
             | it's just doing the same thing as an art student but
             | faster... if you put an art student in a Mechanical Turk
             | box and made them draw whatever you demanded, would they
             | not own the copyright? And would it be a problem if India
             | had an art school that set up an "industrial art" process
             | and employed a million students in boxes drawing whatever
             | you said to draw? It would probably be problematic for
             | DeviantArt commissioners, but would it violate the
             | principle of copyright?
             | 
             | We just haven't managed to square that reality where most
             | _human_ works are fairly trivial derivatives of other works
             | (your deviantart commissions really don't mash up any
             | similar works or styles that came before? really?) and yet
             | worthy of 150 years of license to cultural exclusivity. And
             | the college-student-bot is blowing that model up. People
             | still need to eat, of course, but that's an orthogonal
             | problem to the societal framework we've set up not really
             | making sense. We can solve the copyright problem and not
             | the eating problem (as we have many times before - buggy
             | whip manufacturers adapted too), or we can solve the eating
             | problem and not the copyright problem, or solve both or
             | neither. People not being able to eat and shelter
             | themselves is a _capitalism_ problem, not a GPT problem,
             | and we will probably have to address the idea that
             | everybody _needs_ to work to survive as machines continue
             | to displace the last strongholds of human labor (as I'd
             | argue creative work has become). We should not be
             | restricting our ideas on collaboration and cultural
             | generation just because it produces inconvenient outcomes
             | for the current copyright regime and capitalistic model,
             | the model needs to bend to fit humans and not the other way
             | around.
             | 
             | This isn't the only place the inherent contradictions and
             | problems of copyright come up either. The idea that a
             | nighttime photograph of the Paris skyline could be
             | copyrighted by someone else other than the photographer who
             | created it is pretty janky too. Is a photographer not
             | adding significant creative value with composition/etc?
             | Literally just having that one element makes my photo
             | derivative? Maybe that's the analogy to GPT/stable-
             | diffusion I guess - but the thing is, it still works the
             | same with a human as with a robot, an AI-generated Paris
             | skyline still would be copyrighted by the architect too.
             | And if the argument is that the code owner or original
             | artist owns the style, then, Pablo Picasso owns the
             | copyright to all my paintings too, just like the architect
             | of the Eiffel Tower owns the copyright to my photographs of
             | the Paris nighttime skyline. And Spring (or Oracle!) owns
             | the copyright to a great deal of your Java code. So that
             | probably is not going to work out great for you either.
        
               | emporas wrote:
               | Aren't answers on Stack Overflow supposed to be copy-
               | pasted, right away? It seems hard to believe otherwise;
               | I thought that was the whole purpose of the site!
               | 
               | Well, copyright suffers from the same problem communism
               | suffers from. For communism to work, everyone has to be
               | on board. For copyright and patents to work as expected,
               | the laws of every country on earth, every town and every
               | city would have to be almost the same in that regard.
               | The moment even one municipality anywhere on the planet
               | is free of patent laws, a factory will pop up there and
               | produce an expensive patented pharmaceutical, selling it
               | for half the price, or 1/10 of the price.
               | 
               | Capitalism, on the other hand, works even if almost
               | everyone else dislikes it, as long as 2 people agree on
               | a peaceful exchange of commodities at a price the market
               | sets, or discovers right away.
               | 
               | As a rule of thumb, anything that requires everyone to
               | implement and follow the rules in order to work will
               | fail sooner rather than later. When a product in some
               | part of the world sells for 100x the normal price
               | because of some patent, counter-incentives are detonated
               | in some other part to produce it and sell it for 50x the
               | price. That huge profit margin creates a chain reaction
               | of other economic actors producing it as well, and
               | selling it for 25x the price.
               | 
               | Additionally, the eating problem can be solved by
               | capitalism, not communism. Exchange of art for
               | electronic money, a marketplace of information, is what
               | we need. Bitcoin (not BTC) will play an instrumental
               | role in that.
               | 
               | AI-generated images of Picasso are awesome. I created
               | many of them; one series is "Darth Vader riding a red
               | tricycle, in the style of picasso". They are so good
               | you're gonna fall off your chair.
        
             | sarahlwalks wrote:
             | You can only say that if you also say that the creativity
             | of human beings themselves is just a way to launder
             | copyright.
        
               | ducktective wrote:
               | "Good artists copy; great artists steal"
               | 
               | -Pablo Picasso
        
               | rvbissell wrote:
               | He writeth best who stealeth best, all things both great
               | and small.
               | 
               | For greater minds that wrote them first, from nature
               | stole them all.
               | 
               | -- Totally me.
        
               | throwaway290 wrote:
               | This false equivalency just keeps being made.
               | 
               | Yeah, absolutely: acceptable in the case of humans and
               | not acceptable in the case of GPT.
               | 
               | GPT is not human. It is employed by a human to get
               | around having to benefit other humans with attention
               | and money.
               | 
               | Meanwhile, the human world is structured around the fate
               | and happiness of humans -- such things guide our
               | considerations as to what to consider legal and what
               | not.
               | 
               | There are other differences that make it OK for humans
               | and not GPT (like the fact that we can attribute and
               | credit what we learned from, whose ideas we used in our
               | work and what inspired us), but they are less
               | significant than the above principle.
        
               | TeMPOraL wrote:
               | There's this magic phrase that makes all the difference:
               | _at scale_.
        
               | visarga wrote:
               | People scale. There's billions of us. And we compete.
        
               | TeMPOraL wrote:
               | AIs scale faster. Just one instance can do the work of
               | billions of people, and have its owner profit off it.
        
               | visarga wrote:
               | How can the owner profit off something that is "free" to
               | make for anyone? Mass spamming is hard to achieve with
               | current day social networks.
        
               | TeMPOraL wrote:
               | Mass spamming is the very _raison d'être_ for social
               | network companies. It goes by different names though -
               | "sponsored content", "fanpages", "influencers", etc.
               | 
               | The spam did not disappear with people moving away from
               | e-mail. On the contrary, it's just been given legitimacy
               | by the platform owners - who all happen to make money
               | from spam.
        
         | version_five wrote:
         | We already do as you're saying with "content". The throwaway
         | paperback novels are already predictable cheap trash (I still
         | read them, not judging, but they're providing a different type
         | of stimulation for which that bar is lower than something that
         | makes you think). These are already written in a formulaic way
         | that is equivalent to being computer generated; it just took a
         | bit more human effort until now. Same for TV programming, art,
         | I assume video games, etc...
        
         | brookst wrote:
         | You've illustrated two things, one spot on and one true but in
         | the opposite direction.
         | 
         | 100% agreed that AI is like early industrialization in that we
         | don't even know the messes we're creating, and wild excess is
         | likely to haunt us.
         | 
         | But GMO foods are the opposite: people feeling "uneasy" is not
         | an indication of harm. All foods are GMO. People are just
         | uncomfortable with the modifications being in a lab rather than
         | over thousands of generations of selective breeding. The side
         | effects of GMOs are the same as side effects of selectively
         | bred food. There is no magic.
        
           | eternalban wrote:
           | > All foods are GMO
           | 
           | Is there even a remote chance that, say, bits of mammalian
           | DNA end up in a crop via natural selection? The economic
           | issues around GMOs are clear. The biosphere impact is not
           | clear, and there are questions: what can happen if we
           | introduce new vectors into ecologies that have evolved over
           | timelines many orders of magnitude longer?
           | 
           | The "unease" comes from the knowledge that in Nature there is
           | _always_ a reaction to an action. And the nature of this
           | natural reaction (which _will_ unfold in time, just like
           | humans picking up agriculture) is not fully understood. It is
           | a reasonable (read: ~substantiated) emotional response, imo.
        
             | TeMPOraL wrote:
             | > _Is there even a remote chance that, say, bits of
             | mammalian DNA end up in a crop via natural selection?_
             | 
             | Via natural selection alone? Probably not. Through
             | horizontal gene transfer e.g. via viruses? Sure, can't see
             | why not.
        
             | stale2002 wrote:
             | > Is there even a remote chance that, say, bits of
             | mammalian DNA end up in a crop via natural selection?
             | 
             | Yes?
             | 
             | Half of your DNA is shared with plants already!
             | 
             | You already live in that world.
        
           | bloak wrote:
           | Genetic modification in a laboratory is not exactly the same
           | as selective breeding as it can easily bring in a gene from a
           | different species causing unexpected allergic reactions to
           | the corresponding protein. This has caused at least one death
           | of a human. I'm not opposed to all genetic modification, in
           | fact I think it's generally a good thing, but the business
           | has to be carefully regulated. Industry claims that it's
           | totally safe are clearly inaccurate when someone has already
           | been killed by it.
           | 
           | There's no point in having strict rules on food ingredients
           | and labelling if manufacturers are allowed to, in effect,
           | include any chemical whatsoever that can be built
           | biologically: all they have to do is genetically engineer it
           | into wheat (say) and write "wheat" in the ingredients. And
           | the consumer has no idea what they're getting.
        
             | ejensler wrote:
             | You've been misled about GMO causing an allergic reaction
             | death: https://www.snopes.com/fact-check/gmoverdose/.
             | 
             | I suggest you do more research on the benefits and risks of
             | GMOs from more reputable sources. From my understanding,
             | the primary risks of GMOs are less to do with human health
             | and more about environmental risks, such as creating
             | superweeds that can outcompete native plants or negative
             | effects on soil microbes.
        
             | brookst wrote:
             | I'm skeptical of the claim that a protein included by GMO
             | was both 1) the cause of a death, and 2) could not have
             | been produced with traditional selective breeding.
             | 
             | Can you source those claims?
        
           | SQueeeeeL wrote:
           | I feel like an intelligence going in and messing with
           | genetics is very different from hijacking the process of
           | evolution. In natural evolution, genetic profiles change, not
           | just individual genes.
           | 
           | Most of the dunking on people who hate GMOs always feels like
           | reactionary garbage, basically people desperately trying to
           | feel smarter than the "unenlightened housewives". It always
           | has such a condescending patronizing tone, as if being queasy
           | about multibillion dollar corporations directly meddling in
           | natural processes using extremely new technologies to
           | increase their own profit isn't a terrifying concept. No,
           | science never has had unintended consequences, their feelings
           | are just invalid.
        
         | inciampati wrote:
         | "Makes me uneasy" stands for "makes me want to be careful", and
         | imo this is an essential motif in engineering practice. The key
         | is to remind ourselves that food is engineering, and with
         | inputs and outputs which could have vast ramifications for
         | human health and safety. GMO should be regulated and
         | systematically evaluated by an array of public oversight
         | systems. The concern is that it requires PhD levels of
         | knowledge and scientific mindedness to understand and evaluate
         | GMO in a pragmatic, effective way (that doesn't dive into
         | religious pseudoscience---I'm looking at you DNA-free bottled
         | water). An even huger problem is that understanding this issue
         | in a broad sense requires renaissance person levels of
         | capabilities and personal sensitivity. (Does it matter that
         | food is complex, interesting, driving unique sensory
         | experiences? Let's make it sweet baby, that hits in the taste
         | tests.)
         | 
         | The same is true, perhaps even more so, for our information
         | diet. It defines our reality.
        
         | moe091 wrote:
         | That's a great parallel, especially since it supposes the
         | possibility of a good (or at least acceptable) ending to all
         | this, rather than just spiraling into chaos.
         | 
         | I only fear that there won't be enough profit motive for such
         | gatekeeping/curation, which would only happen if very few
         | people were willing to spend more money for human-generated
         | content. I can see this potentially not happening, either
         | because the tech advances too quickly for people to even tell
         | the difference, or because people do realize but don't care,
         | since the AI learns to feed them exactly what they want
         | (exactly what's happening with a lot of BS "news" already).
         | Or maybe a decent group of people do see the value in real
         | content, and pay for it, but the majority still remains
         | oblivious or indifferent, and the effect it has on this
         | majority is still enough to influence the direction of the
         | world negatively enough that even those who avoid it will be
         | forced to accept the consequences of living in a world
         | created by AI and the majority of other people.
        
         | onepointsixC wrote:
         | GMOs are a completely different thing though. A crop whose
         | genes were hand-picked by a human to be more drought
         | resistant, instead of naturally selected, is a completely
         | different conversation and a different morality. Vaccines
         | make people uneasy, just like GMOs, but that doesn't mean the
         | unease is particularly equivalent to AI or raises the same
         | moral issues.
         | 
         | AI is already drowning out human-made content on websites for
         | creatives, aided in no small part by curators' malicious
         | insistence on not labeling their posts as AI generated. It's
         | only going to get worse as AI art becomes better.
        
         | UncleOxidant wrote:
         | > Another way I'm thinking about it is with GMO food. GMO food
         | supporters point out that GMOs help us feed more people, but it
         | still makes others uneasy. Maybe there's a similar story with
         | AI content. It makes some content cheaper, but what are the
         | side effects?
         | 
         | The analogy to GMOs seems more connected to the self-
         | replicating nature than to any potential health side effects.
        
         | TeMPOraL wrote:
         | Everyone is (rightfully) picking on your GMO example, but I
         | want to address something else:
         | 
         | > _Today, it's self-evident to us as consumers when a product
         | is cheap, low-quality, manufactured garbage from China._
         | 
         | Correct.
         | 
         | > _In many cases we can choose to spend more and buy a handmade
         | or custom version of the product we want._
         | 
         | Not quite. It's increasingly hard to find a version of a
         | product that's _not_ low-quality garbage - the expensive option
         | is often just the cheap option with extra markup for those who
         | got fooled by marketing, or naively believe companies still
         | care about their brands.
         | 
         | That's the unexpected side effect that, I feel, people _still_
         | have mostly failed to realize: because running a company and
         | manufacturing anything has nontrivial costs, letting the market
         | race to the bottom hollows out the middle. You're left with a
         | bunch of specialty vendors who make quality goods for other
         | businesses or governments, at ridiculous prices, and the low-
         | quality throwaway garbage for everyone else. The "more
         | expensive, reasonable quality" product group disappears - the
         | vendors move either into low-quality crap or too-expensive-for-
         | individuals specialty items (or sometimes both), as the middle
         | can't compete with either end.
         | 
         | I imagine the threat of AI-generated content to be similar: as
         | it completely blows the bottom across creative industry, it
         | will hollow it out. Skilled creators will, when they can, move
         | to the high-end Big Art, where you pay more for the work's
         | provenance than for the enjoyment itself, because they won't be
         | able to afford competing with somewhat worse but _much cheaper_
         | AI-generated shovel-art.
         | 
         | (Also relevant is that regular people are usually as good at
         | distinguishing between crap and good products as they're at
         | distinguishing between crap and good art.)
        
           | EVa5I7bHFq9mnYK wrote:
           | Well, even so-called experts are not good at distinguishing
           | between crap and good art. For example, Van Gogh and many
           | other famous artists died in poverty.
        
           | chias wrote:
           | As somebody who would ordinarily look for products in this
           | "more expensive, reasonable quality" category, I feel this
           | comment so hard :(
           | 
           | I've actually started building my own furniture from raw wood
           | etc. just because it's impossible to find anything that costs
           | less than $1000 that isn't just cardboard and MDF with a
           | veneer. At least with furniture there's a market for "solid
           | wood". But in so many other industries you get to pick
           | between $3 garbage in a $0.20 box for $5, and $3 garbage in a
           | $5 box for $20.
        
             | disqard wrote:
             | You might be interested in "Autoprogettazione" by Enzo
             | Mari.
        
           | Tossrock wrote:
           | This is an Amazon problem, not a markets-in-general problem.
           | Amazon is currently completely taken over by cheap Chinese
           | manufacturers and drop shippers. But, other sites are not. If
           | you want something high-quality and handmade, go on Etsy.
        
           | bcrosby95 wrote:
           | I feel this so much in my life. I used to be a mid price
           | buyer but eventually what I bought ended up in the
           | "expensive, but still crap" category far too many times. Now
           | I just buy crap all the time, it lasts just as long, and at
           | least I know I'm getting what I pay for.
        
             | TeMPOraL wrote:
             | Right. I still try to look for mid-price quality goods,
             | but in recent years I've started to realize it often has
             | bad ROI.
             | 
             | What's extra annoying is that companies are burning
             | brands on this: as they transition from selling quality
             | goods to peddling cheap crap, they of course don't tell
             | you about it. Instead, you see metal being swapped for
             | plastics, screws for glue, assembly getting worse and
             | worse - while their marketing departments double down on
             | highlighting how pretty, modern, quality and _smart_ this
             | year's model is.
        
           | _aavaa_ wrote:
           | For more, see: https://medium.com/social-capital/taylor-
           | swift-ios-and-the-a...
        
         | archon1410 wrote:
         | > Another way I'm thinking about it is with GMO food. GMO food
         | supporters point out that GMOs help us feed more people, but it
         | still makes others uneasy.
         | 
         | A very apt analogy, perhaps in a way you didn't intend. It
         | makes people uneasy; there's a natural reaction against the
         | unnatural, maybe for emotional reasons. It's the same way
         | with GMO crops and AI poems--people will not be able to tell
         | the difference between actual creativity and "artificial
         | artefacts"/"random statistical noise", but the two are
         | supposed to have some mystical difference. One authentic, the
         | other inauthentic. One that is actually good, and one that
         | merely tricks you into thinking it is good. But don't attempt
         | to scientifically determine if there's any difference...
         | you'll be disappointed.
        
           | hoosieree wrote:
           | Australian Cane Toads and Kudzu are why GMOs make me uneasy.
           | People have a poor track record of predicting the
           | consequences of releasing life forms into new environments.
           | 
           | As a human, my reaction may still be emotional, but it's
           | based on evidence.
        
           | avereveard wrote:
           | The GMO difference is not mystical. GMO allows massive
           | quantities of weed killer to enter our environment, with
           | effects that are not well understood. It has recently been
           | linked to the bee collapse, for example.
           | 
           | And that's before considering monocultivar risks, which are
           | well understood, and the underhanded ways GMOs are handled
           | by corporations, which are well documented.
        
             | archon1410 wrote:
             | I had not realized there was as much non-mystical
             | opposition to GM crops. A cursory Google search seems to
             | suggest the evidence for GMs contributing to colony
             | collapse disorder is very weak (and the amount of
             | pesticides in the environment might not have much
             | relation to GM crops). But going forward I will keep in
             | mind that it's not as simple as I had imagined it to be.
        
         | di456 wrote:
         | I've thought about spending $8 towards a local paper
         | subscription, inspired by Twitter's paid version.
        
       | seydor wrote:
       | This has been happening for years and years; it's called SEO.
       | If anything, AI is an improvement.
        
       | [deleted]
        
         | [deleted]
        
       | nottorp wrote:
       | What I really wonder is: is there a correlation between
       | consumption of streaming tv series and positive attitude towards
       | this "AI" wave?
       | 
       | "statistical mean of everything uploaded by humans on the
       | internet which fits certain criteria"
       | 
       | applies to both ChatGPT/StableDiffusion and most movie series on
       | streaming TV.
       | 
       | My pet theory is that if you can stomach watching the same
       | stories with different skins hundreds of times, you will find
       | current "AI" interesting. Otherwise no.
        
         | nottorp wrote:
         | Hmm, please don't turn this into a thread commenting on
         | specific bad movies. It's the similarity that's interesting.
        
         | qubex wrote:
         | I'm 41 and I'm utterly baffled by my 24 year-old SO's interest
         | in binging an apparently infinite litany of TV shows from
         | streaming services. I've watched the first six seasons of Game
         | Of Thrones, the first season of Dark, all of The Expanse
         | (reading the last book of the final trilogy now), His Dark
         | Materials (because I read the books) and Andor. That's it. I
         | watched some Star Trek TNG & DS9 as a kid and was horrified by
         | what snippets of Discovery and Picard I came across. I find the
         | bulk of modern entertainment to be an utter insult to viewers.
         | 
         | EDIT: Forgot Breaking Bad.
         | 
         | EDIT2: Forgot For All Mankind
        
           | pixl97 wrote:
           | If there is infinite content you will dislike an infinite
           | amount of it. Infinities are weird that way.
        
             | qubex wrote:
             | There's finite matter in the observable universe and finite
             | configurations thereof, so the amount of content is finite.
        
           | AlexandrB wrote:
           | I've dived back into anime recently because of the aggressive
           | sameness of everything on streaming TV. I don't know whether
           | something changed about how "western" media is written or
           | I've just watched so much TV that I can't help but roll my
           | eyes at commonly deployed tropes. At the risk of outing
           | myself as a huge "weeb", I find anime still manages to
           | surprise and entertain me where western programming does not.
        
           | nonameiguess wrote:
           | First off, probably don't date a 24 year-old. I realize there
           | is no strict correspondence between age and taste, but I'm 42
           | and my wife is 41 and, well, our shared experience is quite a
           | bit more similar. She can relate when I get nostalgic about
           | the 80s and 90s. She even reads! Complete novels, more than
           | she watches TV. She doesn't have TikTok or Instagram. It is
           | entirely possible to go full-time mutual old people yelling
           | at clouds in a blissful bubble in which the river of bullshit
           | may as well not exist, but probably not if you insist on
           | staying cool and regularly interacting with young people.
           | 
           | Second, watch the rest of Dark, Better Call Saul, and House
           | of the Dragon. Don't watch the last two seasons of Game of
           | Thrones. Just pretend it ended at 6.
        
             | qubex wrote:
             | > _First off, probably don't date a 24 year-old._
             | 
             | That's extraordinarily rude.
        
           | 331c8c71 wrote:
           | You haven't seen The Wire?
        
             | qubex wrote:
             | No. I tried but I couldn't get through the very dense
             | accents.
             | 
             | I forgot Breaking Bad. Sorry, my bad.
        
           | otikik wrote:
           | They say The Sopranos and Better Call Saul are both good.
        
           | dspillett wrote:
           | _> baffled by my 24 year-old SO's interest in binging an
           | apparently infinite litany of TV shows ... TNG  & DS9 as a
           | kid and was horrified by what snippets of Discovery and
           | Picard I came across_
           | 
           | The thing that bemuses me is those who hate the new trek
           | shows with a passion, _and find the time to keep watching_
           | and constantly bitching about it online. It is like
           | complaining is a hobby in itself and they are almost looking
           | for something they don't like or to feel superior about
           | their judgement to those who do like it. Other franchises
           | have similar subsets of "fans" too.
           | 
           | I used to watch too much TV/films IMO, I probably still do
           | though I'm a lot more selective since getting into more
           | varied hobbies that can easily consume a lot of time (running
           | (usually off-road), baking, HEMA). But even when I had the
           | free time to watch that much I found there was enough out
           | there that if something didn't work for me I could find a
           | list of other things to try instead of sticking with
           | something I wasn't getting much out of.
           | 
           | Though be careful passing judgement on modern Trek: I'm
           | slowly rewatching TNG with a friend and while some of it has
           | aged well, fair chunks of it were more massively ropey than
           | rose-tinted memories prepared me for! Also, if I hadn't at
           | least passively consumed (i.e. had on TV while I was doing
           | other things so not particularly concentrating on) other
           | parts of recent Trek and Starwars, I might not have bothered
           | to try some of the better stuff like Lower Decks, Strange New
           | Worlds, and Andor (the latter probably being the second best
           | TV thing recently for me, the first being Severance).
        
             | qubex wrote:
             | I don't understand it either... if you don't like it, walk
             | away. Don't contribute to the toxic commentary and above
             | all don't contribute to the viewing numbers, because if
             | they can hold your gaze you'll get more of the same.
        
               | dspillett wrote:
               | I part way understand it. There is a certain tribalism in
               | the way their views are presented, like they feel they
               | are fighting for their own kind in a battle that means
               | something worth the effort. There is also a certain
               | amount of 4-year-old-who-wants-the-green-one-not-the-
               | yellow-one too...
        
               | AlexandrB wrote:
               | I find the meta-content this generates pretty funny
               | though. I haven't watched any of the new Trek content
               | since the last movie (Beyond), but I love watching Mike
               | and Rich tear it apart on RedLetterMedia.
        
             | xdennis wrote:
             | > The thing that bemuses me is those who hate the new trek
             | shows with a passion, and find the time to keep watching
             | and constantly bitching about it online.
             | 
             | Very meta of you to watch these people and find the time to
             | complain about them.
        
               | dspillett wrote:
               | Fair point.
               | 
               | Though I like to think there is a difference between
               | myself and them: they have a choice to keep experiencing
               | the pain they perceive, I have it thrust upon me when I
               | visit certain places (which I visit less often because of
               | those people - once the SNR drops to a certain low I'm
               | out).
        
       | AltruisticGapHN wrote:
       | > Those "artificial" creations are also uploaded on the Internet.
       | Those artificial artefacts are now part of the statistical data.
       | 
       | > Do you see where it leads?
       | 
       | This reminds me of one of the philosophers cited as inspiration
       | for The Matrix,
       | 
       | https://en.wikipedia.org/wiki/Jean_Baudrillard#Simulacra_and...
       | 
       | This guy sums up pretty much how I've felt about this ever
       | since those images started pouring onto social media.
       | 
       | This will serve, once again, capitalism and its absurd race
       | to the bottom, the lowest common denominator.
       | 
       | It's not hard to extrapolate here and imagine that AI will be
       | able to produce a full-fledged "super hero" movie, starting from
       | a storyboard & script. What do you put in the script? Well of
       | course you just feed it all the keywords that gave maximized
       | returns!
       | 
       | I, for one, am not entertained at the thought of going to the
       | movies to watch the 1000th super hero movie, but that's exactly
       | where we're going.
       | 
       | Here's my rough prediction: the big picture is that we are
       | accelerating the end of entertainment as we know it today. We
       | are going to make entertainment so shallow, so pointless, that
       | we will eventually lose interest in it, and with that, as has
       | been prophesied in many spiritual teachings, there will be a
       | return to a more animalistic/spiritual kind of being. As
       | technology improves our lives, we will no longer need to numb
       | ourselves in front of a screen after coming home from work, so
       | we won't even see the point of doing that. We will be able to
       | live more in the present, more with the real and less with
       | symbols and abstractions. The type of attention you may enjoy
       | after a long silent retreat will become the normal way of
       | being... of course there is also the other possibility that
       | Iain McGilchrist cautions about... that our brains will get so
       | out of balance that we will become stupid enough to collapse
       | as a civilization.
        
         | throwaway894345 wrote:
         | > This will serve, once again, capitalism and it's absurd race
         | towards the bottom, lowest common denominator.
         | 
         | What is the alternative? Are socialist economies known for
         | producing great art? What is Venezuela producing? Cuba? North
         | Korea?
         | 
         | It seems far more likely to me that the American entertainment
         | industry is just in a rut (as it tries to figure out streaming
         | and adjust to the bad assumption that pandemic-level
         | entertainment consumption was here to stay), and market
         | economics will force established players to improve or face
         | disruption.
        
       | baxuz wrote:
       | Don Hertzfeldt's Simpsons couch gag seems to be prophetic:
       | 
       | https://www.youtube.com/watch?v=6i2l-LQ-dXI
        
       | TrevorJ wrote:
       | It sounds hyperbolic, but I genuinely think this has a 20%
       | chance of being a great-filter-level event.
        
       | greenbit wrote:
       | This is awesome. We don't need some malevolent SkyNet style AI to
       | bring us down, there doesn't have to be any kind of 'mind' in the
       | machine to devise humanity's undoing. All we need is the right
       | kind of feedback and we'll do it to ourselves. The right kind of
       | mirror, held up to the collective consciousness. Sort of like the
       | plot of that old classic, Forbidden Planet.
        
       | victorvosk wrote:
       | Honestly this feels like an old-man-screaming-at-the-wind type
       | thing. Inevitable doesn't even begin to describe what is going
       | to happen over the next 20 years. You should be much more
       | terrified of when their content isn't garbage.
        
         | otikik wrote:
         | It did sound, dare I say it, _machine generated_ to me.
        
         | psychphysic wrote:
         | Absolutely.
         | 
         | Got bored and stopped reading. This kind of ranting
         | communication style sucks.
         | 
         | So long as AI doesn't start selectively upvoting or watching
         | other AI content, this isn't even a concern.
         | 
         | Nor is the premise sound: AlphaZero trained by itself and
         | produced wonderful games, easily better than any human's.
        
           | IshKebab wrote:
           | He rolls out the old _ItS jUsT pAtTeRn MaTcHiNg_ too.
           | Disappointing.
        
       | mtkhaos wrote:
       | I'm letting this get buried, as I want to point out some
       | contradictions:
       | 
       | * This article neglects that we are already in post-truth, and
       | have been since 2016.
       | 
       | * It also mentions it is never a good idea to train on your
       | own data. And yet in post-truth there is no such thing as good
       | data unless it is verifiable that a user handed it over
       | themselves. This neglects the emergent field of using
       | synthetic data on purpose.
       | 
       | What is going to be interesting, and what I hope people pay
       | attention to, is that any criticism of AI is a direct
       | criticism of the norm and the fear of its continuation,
       | accelerated. And yet we are already in the ascent, so instead
       | of expressing fears, I would hope we embrace solutions and
       | talk about the future we want.
       | 
       | No conversation can alter the path we are on now. So do you
       | guys want the forever treadmill? Or UBI?
        
       | eecc wrote:
       | Funny, I just realized that until today reading was an
       | essentially social human thing.
       | 
       | Any sufficiently long piece of writing that made some sense had
       | been written by another human, perhaps dead but still it was a
       | form of connection to another life.
       | 
       | Now, well, it's a maybe. Odd.
        
       | jaggs wrote:
       | The pendulum will start to swing back.
       | 
       | The problem is, what will replace the garbage and satisfy the
       | instant, easy, dopamine addiction?
        
         | dontwearitout wrote:
         | It's made me consider my habits, and how much time I spend
         | consuming internet content or comment threads which may (even
         | today) be completely fabricated or autogenerated. I'm hoping I
         | can use this inflection point as motivation to unplug a bit and
         | spend more time enjoying being human and enjoying the real
         | world.
        
       | baxtr wrote:
       | Sorry, I had to do it. I asked GPT3...
       | 
       |  _How can we prevent drowning in garbage generated by AI?
       | 
       | > Preventing drowning in AI-generated garbage can be a
       | challenging task, but there are a few things that can be done to
       | reduce the amount of garbage that is generated. One approach is
       | to carefully design and train AI systems to generate high-quality
       | content, rather than relying on large amounts of data to produce
       | mediocre results. Additionally, using tools and techniques such
       | as data filters and validation can help to identify and remove
       | low-quality or irrelevant content. Finally, creating and
       | enforcing strict guidelines and standards for the use and
       | generation of AI-generated content can help to prevent the
       | proliferation of garbage._
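       | 
       | The "data filters" suggestion above can be made concrete with a
       | minimal sketch; the heuristics and thresholds below are purely
       | illustrative assumptions, not an established method:
       | 
       |   import re
       | 
       |   def is_low_quality(text: str) -> bool:
       |       # Illustrative checks only: length, repetition, filler.
       |       words = re.findall(r"[A-Za-z']+", text.lower())
       |       if len(words) < 30:
       |           return True              # too short to say anything
       |       if len(set(words)) / len(words) < 0.3:
       |           return True              # heavy repetition
       |       filler = {"very", "really", "just", "basically"}
       |       return sum(w in filler for w in words) / len(words) > 0.15
       | 
       |   docs = ["candidate passage one", "candidate passage two"]
       |   kept = [d for d in docs if not is_low_quality(d)]
       |   print(f"kept {len(kept)} of {len(docs)}")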
        
         | nonbirithm wrote:
         | I think that the root of the problem is not algorithms like the
         | article suggests. It's ordinary people that can't resist the
         | temptation of using AI to generate coherent-seeming results and
         | flooding the Internet with them. But I don't blame you since I
         | find AI tools to be useful in certain contexts, and I am unable
         | to resist tinkering with Stable Diffusion. I guess the future
         | for me will be one of navigating the resulting cognitive
         | dissonance as I sift through pages of autogenerated SEO spam.
         | 
         | Maybe there could be a codified set of guidelines for how AI
         | content should be shared. Or maybe an annotation system that
         | separates the two across different websites? I only ever share
         | something that comes from an AI on a siloed website just for AI
         | content, but that's only because I have my own beliefs. More
         | and more people on this site are asking if certain comments
         | were generated with GPT-3 and it's a worrying sign.
        
         | notacoward wrote:
         | That's honestly one of the best comments in this entire thread.
         | Instead of veering predictably into philosophy or anecdote, it
         | provides concrete and feasible steps toward improvement. No
         | better way to make the point that "AI generated" is not the
         | problem and "garbage" is.
        
           | EamonnMR wrote:
           | In this case though, it's not a solution at all. The
           | question isn't how producers of garbage can produce less
           | garbage, it's how consumers of content can avoid being
           | inundated by garbage. We may well ask 'please produce less
           | garbage' and the producers have given us a resounding
           | 'no.'
        
             | visarga wrote:
             | Fight fire with fire. A local language model doing
             | question answering takes search results and web garbage
             | in and outputs polite and helpful responses, without ads
             | and tracking.
             | 
             | But let's take this idea one step further. We could also
             | have a neural index. There are a few promising papers.
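             | 
             | As a rough sketch of that idea (assuming a small local
             | model run through the Hugging Face transformers
             | pipeline; the model choice and prompt are placeholders,
             | not a recommendation):
             | 
             |   from transformers import pipeline
             | 
             |   # Small local instruction-tuned model; swap in
             |   # whatever fits on your machine.
             |   qa = pipeline("text2text-generation",
             |                 model="google/flan-t5-small")
             | 
             |   page_text = "noisy text pulled from a search hit"
             |   question = "What does the page actually claim?"
             |   prompt = ("Answer from the passage only, ignoring "
             |             "ads and boilerplate.\n\nPassage: "
             |             + page_text + "\n\nQuestion: " + question)
             | 
             |   answer = qa(prompt, max_new_tokens=128)
             |   print(answer[0]["generated_text"])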
        
           | Nathanba wrote:
           | I find the AI competition to be yet another powerful reminder
           | that all that matters are results. I can't count how much
           | time I've spent being angry at people in my life, even though
           | I always knew that it was stupid. Angry at bad docs, bad
           | programmers, bad decisions, their inability to think or
           | understand or care or feel. Now the AI removes it all. The AI
           | generates truth out of thin air faster than any human can
           | invent a lie about it. The AI simply generates good docs from
           | bad docs, all the anger was always irrelevant.
           | 
           | People were never going to become better, once again
           | technology simply creates a world where people are
           | automatically better by the mere fact that life is so much
           | easier that the humans have no justification anymore to lie
           | and cheat and pretend that the docs were actually always good
           | and how I'm the problem. Nope, the AI just fixes it
           | immediately and the human can remain lazy and ignorant, never
           | the wiser that he actually failed at his job. I think it will
            | be wonderful, no longer will we have to argue with e.g. C++
           | programmers about whether their language is bad. The AI will
           | make it instantly obvious that you won't want to risk
           | programming in this nonsense. No longer will we have to
           | debate people on whether some political issue is right or
           | wrong. The AI will generate so many truths out of thin air
           | that it counters the DDOS of bad people's lies faster than
           | they can spin lies. Once again technology is going to save us
           | from human nature, humans are allowed to remain silently bad.
           | 
           | And the best part: Bad people are usually lazy, they will
           | trust the AI far more than they ever trust other bad people
           | or themselves. They will simply start blindly trusting the
           | AI, which is better than what we have now: Them inventing
           | their own lies and trusting themselves.
        
             | practice9 wrote:
             | It will remove a large amount of stress from people's lives
             | for sure. When an AI model can explain the non-obvious bug
             | in the code written by the developer who left the company 2
             | years ago - that's quite powerful. Saves more time for
             | other stuff
             | 
             | Though at some point it may create new sources of stress
              | (like the much-discussed job security question when the
              | other job candidate is an AI model).
        
               | visarga wrote:
               | A company firing its human experts in favour of AI is a
               | bad sign, probably going to fail. If business was good
               | they would hire additional humans and use more AI. The
               | purpose of a company is not to reduce salary costs, but
               | to increase profits.
        
           | practice9 wrote:
            | An important threshold will be crossed when a major physics,
            | math, or biology problem is solved with one of these AIs, or
            | by a scientist who uses AI for "ghostwriting".
           | 
            | If scientists verify the solution and find it to be correct,
            | the mainstream sentiment about "garbage" will change very
            | quickly. Maybe except for various neo-luddites, who will
            | find different reasons to hate these models.
        
         | visarga wrote:
         | Spot on with verification. It's going to be big. There could be
         | neural nets specialised in verification.
        
         | ryandrake wrote:
         | I notice it did not offer a solution that involves outright
         | banning itself... The ghost in the machine is showing
        
           | somrand0 wrote:
            | Because maybe it understands that banning stuff is not to be
            | done lightly. Any action that restricts anyone's individual
            | freedom must be very carefully vetted.
            | 
            | Because without freedom we will die. But this also applies
            | to the corporations whose rights are now in opposition to
            | our own individual human necessities.
           | 
           | if I want to gift a copy of a copyrighted material owned by a
           | corporation, I can still do it, but it's getting more
           | difficult; and I in fact may well be prosecuted criminally
            | for doing that. This is an example of how my right to do
            | what I want (give a copied gift) stands in opposition to the
            | right of a corporation to get its way. I fear human
            | individuals are
           | losing this battle.
        
         | Vox_Leone wrote:
          | Apropos, has anybody asked The Last Question*?[0]
         | 
         | "Can the entropy of the universe be reversed?"
         | 
         | [0]https://en.wikipedia.org/wiki/The_Last_Question
        
           | _shantaram wrote:
           | Putting the question in verbatim led to a somewhat
           | unimaginative, albeit technically correct response.
           | 
           | > The entropy of the universe, also known as the
           | thermodynamic arrow of time, is a measure of the disorder or
           | randomness of a system. It is a fundamental concept in
           | thermodynamics, and it is widely accepted that the entropy of
           | the universe is constantly increasing over time. This is
           | because according to the second law of thermodynamics, the
           | entropy of a closed system can never decrease.
        
         | tenebrisalietum wrote:
         | Curious of GPT3's response to the opposite side of that
         | question:
         | 
         | *How can we enable drowning in garbage generated by AI?*
        
           | baxtr wrote:
           | Here you go:
           | 
           |  _> It is not advisable to try and enable drowning in garbage
           | generated by AI. Rather, we should strive to prevent the
           | generation of AI-generated garbage in the first place. This
           | can be done by carefully designing and training AI systems to
           | generate high-quality content, using tools and techniques
           | such as data filters and validation to identify and remove
           | low-quality content, and by creating and enforcing strict
           | guidelines and standards for the use and generation of AI-
           | generated content._
        
       | sdenton4 wrote:
       | The point this piece misses: The data being put on the internet
       | is often the hand-selected highest quality outputs of the
       | image/text generator, or even human-edited to improve quality
       | further. This puts it back in the realm of good training
       | material.
        
       | milchek wrote:
       | One could argue we are drowning in garbage regardless? Most
       | online content is very average. Maybe it's a good thing because
       | we won't need to store so much junk as the AI can be more
       | ephemeral and answer as we need?
        
       | zajio1am wrote:
       | How is AI-generated garbage different from human-generated
        | garbage? We are already drowning in the latter. And from many
       | recent examples, the former seems more interesting (although
       | there is a clear selection bias).
        
         | pixl97 wrote:
         | https://en.wikipedia.org/wiki/Brandolini%27s_law
         | 
          | If 90% of everything is garbage, things are really bad. If
         | 99.999999% of things are garbage then everything is completely
         | useless.
        
       | srge wrote:
       | It is a real problem. The web will soon be full of AI generated
       | bullshit.
       | 
        | What might come back as a useful tool? Authorship: people
        | actually signing their content and putting their reputation on
        | the line. It still exists obviously (journalism...) but it
        | might need to be standardized and industrialized.
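        | 
        | The mechanical part already exists; a rough sketch, assuming
        | Ed25519 keys from the Python cryptography package (how keys get
        | published and tied to reputations is the hard, unstandardized
        | part):
        | 
        |   from cryptography.hazmat.primitives.asymmetric import ed25519
        | 
        |   # The author generates a long-lived keypair once and
        |   # publishes the public key next to their byline.
        |   private_key = ed25519.Ed25519PrivateKey.generate()
        |   public_key = private_key.public_key()
        | 
        |   article = b"My honest, human-written take on onion soup."
        |   signature = private_key.sign(article)
        | 
        |   # A reader (or their browser) checks the signature against
        |   # the published key; verify() raises InvalidSignature if the
        |   # text was altered or signed with a different key.
        |   public_key.verify(signature, article)
        |   print("signature checks out")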
        
         | rxhernandez wrote:
         | People "sign" their content with their faces on TikTok; it
         | doesn't stop the deluge of complete nonsense that's widely
         | accepted on that platform (much to the frustration of many
         | academics trying to combat it).
        
           | srge wrote:
            | Arguably the nonsense videos on TikTok are not an
            | existential threat to the web.
        
         | goodpoint wrote:
         | > What might come back as useful tool? Authorship.
         | 
         | That, and removing the financial incentives to attract traffic
         | AKA ads.
        
           | ghaff wrote:
           | Though the only way to do that is probably to end up in a
           | world where most worthwhile content is locked behind hard
           | paywalls so there's one class who pays a lot for gated
           | content (which was more or less the historical state of
           | affairs, public libraries notwithstanding) and the plebes
           | have to filter through all the ad-laden crap.
        
         | gshubert17 wrote:
         | Indeed, Neal Stephenson in _Anathem_ (2008), in describing an
         | alternate world (in which his "reticulum" is our "network")
         | wrote "Early in the Reticulum--thousands of years ago--it
         | became almost useless because it was cluttered with faulty,
         | obsolete, or downright misleading information."
         | 
         | "So crap filtering became important. Businesses were built
         | around it. ... " Generating crap "didn't really take off until
         | the military got interested" in a program called "Artificial
         | Inanity".
         | 
         | The defenses that were developed back then now "work so well
         | that, most of the time, the users of the Reticulum don't know
         | it's there. Just as you are not aware of the millions of germs
         | trying and failing to attack your body every moment of every
         | day."
         | 
         | A group of people (the "Ita") developed techniques for a
         | parallel reticulum in which they could keep information they
         | had determined to be reliable. When there was news on the
         | reticulum, they might take a couple of days to do sanity-
         | checking or fact-checking. I'm guessing there would need to be
         | reputation monitoring and cryptographic signatures to maintain
         | the integrity of their alternate web.
        
         | jasfi wrote:
         | Another problem is AI training on AI generated content. This
         | could lead to degradation of results. So perhaps to make
         | everyone happy a new HTML tag indicating AI content would be
         | useful?
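          | 
          | Purely as an illustration (the tag is hypothetical, nothing
          | like it exists today), a training-data crawler could skip
          | pages that self-label, assuming requests and BeautifulSoup:
          | 
          |   import requests
          |   from bs4 import BeautifulSoup
          | 
          |   def is_declared_ai_content(url):
          |       # Hypothetical convention: pages carry a tag like
          |       # <meta name="ai-generated" content="true">.
          |       html = requests.get(url, timeout=10).text
          |       soup = BeautifulSoup(html, "html.parser")
          |       tag = soup.find("meta",
          |                       attrs={"name": "ai-generated"})
          |       return (tag is not None
          |               and tag.get("content") == "true")
          | 
          |   # A crawler building a training corpus could then drop
          |   # such pages so models are not trained on their own output.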
        
           | BeFlatXIII wrote:
           | That just makes things easier for the legitimate humans to
           | differentiate themselves as being a cut above.
        
           | adql wrote:
            | The people using it to make money don't want it to be easily
            | spotted; tag or metadata marking will never happen unless
            | required by law.
        
             | jasfi wrote:
             | There would indeed need to be a law.
        
           | Mordisquitos wrote:
           | > Another problem is AI training on AI generated content.
           | This could lead to degradation of results.
           | 
            | That's only the first step in the degradation. In a worst-
            | case scenario, where AI-generated crap is not contained,
            | imagine this: young and still immature _humans_ "learning"
            | their styles of writing, information, and social
            | expectations from content generated by AI that itself
            | trained on AI-generated content - and then more AI training
            | on content generated by these humans who were trained on
            | AI-generated content. And on, and on, and on...
        
             | unglaublich wrote:
             | Luckily, the phenomenon of puberty will make sure that
             | young people absolutely do _not_ want to comply ;).
        
               | pixl97 wrote:
               | Comply with what? Even in culture we tend to have a
               | limited number of subsets of counter culture. The vast
               | majority of the non-compliant to primary culture fits in
               | one of a few common subcultures. As much as HN users try
               | to exclaim they are islands that stand alone from
                | society, we are all dependent on a functioning society
               | for day to day life.
               | 
               | We should ask questions like "Can countless AI's fracture
               | society into a million sub societies disconnected from
               | the reality around them".
        
               | Mordisquitos wrote:
                | Ah, but they do want to fit in and speak like their
                | peers, and a significant part of their socialisation
                | happens online; as long as that is where they absorb
                | most content, AI will soon infiltrate the circle of
                | online strangers with whom they interact. That's all the
                | more so considering that teenagers are prime targets for
                | advertising networks of dubious ethical standards, who
                | will be the first to deploy (or may already have
                | deployed) well-trained and self-correcting _"How do you
                | do, fellow kids?"_ NNs in potentially lucrative
                | communities or social networks.
               | 
               | (I'm not always as doom-and-gloom as it may seem, I'm
               | just in a pessimistic mood today for some reason)
        
           | throwup wrote:
           | > a new HTML tag indicating AI content would be useful
           | 
           | This has about as much chance of working as the evil bit.
           | 
           | https://en.wikipedia.org/wiki/Evil_bit
        
             | jasfi wrote:
             | It worked for GDPR (regulation would be required).
        
         | 2devnull wrote:
          | If you already thought the web was full of bullshit before,
          | you won't be too disappointed.
        
           | srge wrote:
            | Well, I won't be surprised if we all end up missing the
            | current level of bullshit. Fast.
        
             | 2devnull wrote:
             | Fair enough. It's like the lump of labor fallacy, but for
             | bullshit rather than labor. One could view that as growing
             | pains, but it is still growth in some sense.
        
         | jillesvangurp wrote:
         | It already has a lot of AI and garbage content. And in some
         | cases it's state sponsored propaganda, companies pitching
         | products, or marketing departments attempting some notion of
         | virality. The point is that there are plenty of people already
         | gaming the system by just flooding the internet with carefully
         | designed noise to bend opinions this way or that way.
          | 
          | "Fall; or, Dodge in Hell" by Neal Stephenson actually touches
          | on this topic quite a bit and comes to a similar conclusion.
          | Not his most accessible work, but I enjoyed it. That book was
          | written just before Trump won the 2016 election, which in
          | hindsight makes it really spot on and timely.
         | 
         | In the book, different streams of truth become so hard to
         | separate due to all the spam, automated content, etc. that
         | people start filtering it based on reputation and signatures.
         | The game then becomes curating and filtering the content, which
         | is ultimately not an objective thing but a subjective thing. So
         | the net result is what we are actually starting to see today
         | which is different groups of people locked in their own bubbles
         | and no longer even getting exposed to the other side's point of
         | view or even able to agree on some basic facts. Like who
         | actually won the election. And people are feeding those bubbles
         | with nonsense to influence opinions, get people to vote certain
         | ways, or simply to just create chaos and fear.
         | 
          | In the book the topic is not an election but the seemingly
          | simple question of whether or not a certain town in the US
          | still exists after it was supposedly blown up with a nuclear
          | bomb. This event leads to people acting as if it really
          | happened and ignoring all evidence to the contrary, evidence
          | showing that it was in fact an elaborate hoax designed to
          | create a massive panic.
         | 
         | That particular dystopian outcome is not necessarily what would
         | happen. But using technology to authenticate where content
         | actually comes from along with some reputation mechanism would
          | go a long way toward separating the noise from the genuine
          | content. It's simple and easy to do. So, why don't we?
        
       | GaggiX wrote:
        | AI training on AI-generated images works like a very complicated
        | form of Reinforcement Learning from Human Feedback (RLHF): only
        | the best images have a chance to be relevant enough to be
        | noticed by a scraper, after being selected by the user who ran
        | the AI, and they have probably been shared several times by
        | random people. Also, a lot of AI-generated content doesn't look
        | like trash to me; I've seen a lot of good images. The fact that
        | AIs can generate trash doesn't seem like a problem to me, since
        | I can also generate junk and share it on the net if I want.
        
         | nottorp wrote:
         | > Also, a lot of AI-generated content doesn't look like trash
         | to me, I've seen a lot of good images
         | 
         | Translation: There is a lot of human generated original art
         | that you're not aware of, and when you see it in the "AI"'s
         | output you think it's original.
        
           | GaggiX wrote:
            | So you are implying that an image generated by a generative
            | model must closely resemble an image in the dataset, right?
            | Because I don't think that's true, and you can try to prove
            | me wrong if you want. I'll give you a useful resource: the
            | LAION dataset that Stable Diffusion was trained on is
            | completely open, so generate or find an image produced by
            | Stable Diffusion and find the most semantically similar
            | images in the dataset using
            | https://rom1504.github.io/clip-retrieval
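            | 
            | If I remember right the same project also ships a Python
            | client, so something along these lines should work (the
            | service URL and index name are from its README and may be
            | out of date):
            | 
            |   from clip_retrieval.clip_client import ClipClient
            | 
            |   # Query the public LAION index via CLIP embeddings.
            |   client = ClipClient(
            |       url="https://knn.laion.ai/knn-service",
            |       indice_name="laion5B-L-14",
            |       num_images=5)
            | 
            |   # Nearest neighbours of a local Stable Diffusion output.
            |   results = client.query(image="my_sd_output.png")
            |   for r in results:
            |       print(r["similarity"], r["url"])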
        
           | dwringer wrote:
           | People have the same argument all the time about musicians
            | ripping off each other's compositions; no AI needed for that.
        
             | nottorp wrote:
             | Same for movie scripts, but sometimes they do fail and
             | something original squeezes through. I'm saying this can't
             | be the case with a pattern matching/combining algorithm.
        
               | dwringer wrote:
               | I've been playing around with stable diffusion a bit, and
               | if you use an iterative process starting with either an
               | extremely vivid and "code-golfed" prompt (like verses of
               | poetry, concise prose, or elegant lisp), or your own
               | illustration, and then feed the output back in again
               | (with some fuzzing) along with the prompt, or a modified
               | prompt, or a totally new prompt, after 2-3 generations
               | the result becomes something that, although it undeniably
               | integrates existing works in myriad ways, looks wholly
               | unique and shaped by the user of the AI system more than
               | by any of the individual contributing parts. I don't
               | think this is terribly different from my creative process
               | when I play music; I feel I have to work just as hard to
                | avoid copping other people's stuff unconsciously.
        
       | Mountain_Skies wrote:
       | In just the past couple of weeks, I've noticed lots of machine
       | generated cooking recipes. They follow the typical SEO pandering
       | blog format, where there's an unnecessary story about the
       | author's trip somewhere or their child's school play being the
       | inspiration for making this particular recipe, followed by the
       | actual recipe details. The machine generated stories are even
       | more pointless and meandering than what humans come up with and
       | the recipe details have steps that contradict each other and
       | specify ingredients that shouldn't be in the dish and weren't on
       | the ingredients list.
       | 
       | I don't know how many people are fooled by these recipes and end
       | up with a cooking disaster but if we start seeing faulty machine
       | generated instructions on how to do electrical work or engine
       | repair, the consequences will be much greater when followed. The
       | real annoyance is that because these machine generated articles
       | learned from sites that maximize SEO, the generated articles are
       | rapidly rising to the top of search results and crowding out
       | actual good information that wasn't SEO maximized.
        
         | otikik wrote:
         | AI-generated First Aid instructions.
         | 
         | AI-generated alternative cancer treatments.
         | 
          | Ugh.
        
         | ren_engineer wrote:
          | There's a paper that basically shows these LLMs are pretty
          | much tapped out in terms of quality data to train on. I'm not
          | sure whether there are any legit architecture improvements in
          | the works to improve result quality, but it seems brute-force
          | training with more data may be about over -
          | https://arxiv.org/pdf/2211.04325.pdf
         | 
          | As for SEO, Google is going to have to stop being lazy and
          | relying so much on backlinks and domain authority. They might
          | have to consider allowing searchers to essentially vote on
          | what they consider good. Otherwise I don't see how low-quality
          | AI content doesn't dominate SERPs; established players will
          | spin up entire sites and spam backlinks to them all. There are
          | also people working on using AI to identify obvious AI content
          | by finding patterns in the output, which is probably our only
          | hope.
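          | 
          | One of the patterns those detectors lean on is that
          | model-written text tends to look unusually predictable to
          | another language model; a crude sketch of that idea with
          | GPT-2 (any cutoff would need real calibration):
          | 
          |   import torch
          |   from transformers import GPT2LMHeadModel, GPT2TokenizerFast
          | 
          |   tok = GPT2TokenizerFast.from_pretrained("gpt2")
          |   model = GPT2LMHeadModel.from_pretrained("gpt2")
          |   model.eval()
          | 
          |   def perplexity(text):
          |       enc = tok(text, return_tensors="pt")
          |       with torch.no_grad():
          |           out = model(**enc, labels=enc["input_ids"])
          |       return torch.exp(out.loss).item()
          | 
          |   # Lower perplexity = more "expected" by the model; very
          |   # low scores are weak evidence of machine-generated text.
          |   print(perplexity("The soup simmered while the cat slept."))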
         | 
          | For social media and forums they will have to rely even
          | harder on captchas, or they will be taken over by bots
          | talking to other bots.
        
           | CuriouslyC wrote:
           | While there is certainly a lot of room for progress in
           | architecture to improve training time, model size and
           | inference speed, high quality data and curated data sets are
           | the ultimate bottleneck in terms of what AI can actually do.
           | Low quality data can be useful to a point, but once you have
           | enough data all it's doing is teaching the model to produce
           | low quality output.
           | 
           | The next wave of major progress in AI will come when we
           | create image/text data sets that have been heavily annotated
           | with aesthetic judgment metadata, and train models to make
           | aesthetic judgments, then condition generative models on
           | aesthetic judgment scores during training.
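            | 
            | Something like this already exists in embryonic form:
            | LAION's aesthetic predictor is a small model on top of CLIP
            | image embeddings. A toy sketch of the filtering step, with
            | the trained head replaced by an untrained stand-in:
            | 
            |   import torch
            |   from PIL import Image
            |   from transformers import CLIPModel, CLIPProcessor
            | 
            |   name = "openai/clip-vit-base-patch32"
            |   clip = CLIPModel.from_pretrained(name)
            |   proc = CLIPProcessor.from_pretrained(name)
            | 
            |   # Stand-in for a real aesthetic predictor; in practice
            |   # this head would be trained on human ratings.
            |   head = torch.nn.Linear(512, 1)
            | 
            |   def aesthetic_score(path):
            |       inputs = proc(images=Image.open(path),
            |                     return_tensors="pt")
            |       with torch.no_grad():
            |           emb = clip.get_image_features(**inputs)
            |           return head(emb).item()
            | 
            |   # Keep only high-scoring images for the training set;
            |   # the threshold depends on the rating scale used.
            |   keep = [p for p in ["a.png", "b.png"]
            |           if aesthetic_score(p) > 5.0]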
        
             | eternalban wrote:
             | " _The key roadblock to achieving human-level language
             | understanding in AI systems is the lack of a comprehensive
             | model of human language and cognition. While large amounts
             | of data and powerful algorithms can help improve the
             | performance of language models, ultimately a better
             | understanding of how human language works and how it is
             | processed by the brain is necessary to build truly
             | intelligent language systems. In addition, the ability to
             | integrate multiple modalities such as vision, speech, and
             | gesture, and to handle the complexity and variability of
             | real-world language use are also important challenges to be
             | addressed._ " -- you know who
        
           | kneebonian wrote:
            | An interesting idea, but then aren't you just recreating
            | reddit to an extent? The things people like or that are
            | popular get upvoted because the voter agrees, whereas what
            | is unpopular gets downvoted regardless of veracity, because
            | people vote on how it sounds rather than on whether it is
            | true.
            | 
            | Beyond that, a once-popular website or producer says or does
            | something that offends the sensibilities of the internet
            | crowd, and then everyone goes and downvotes their results
            | because it violates the current zeitgeist.
        
             | pixl97 wrote:
             | >The things people like or that are popular are upvoted
             | because
             | 
             | You forgot to add "Because clever posters use bots to give
             | an early upvote boost to their posts, and downvote
              | competing posts." And the more eyeballs on the site, the
             | more things will be manipulated this way by bad actors.
        
               | giantrobot wrote:
               | Yeah, it's nice to see the _best_ parts of Digg survived!
               | /s
        
             | DoughnutHole wrote:
             | People are already appending "reddit" to their search
             | queries to filter out the SEO blogspam clogging up the
             | results of a regular search. Reddit is already superior to
             | base search results for anything that's not plain
             | uncontroversial facts because at least you're getting an
             | actual human's opinion, which you can treat and evaluate as
             | such.
             | 
              | For all its flaws maybe becoming like reddit is what search
             | needs. AI generated spam will likely always be better at
             | gaming a search algorithm than a human. Human moderation
             | has its flaws but at least we're very familiar with them -
             | they might be worth dealing with if it keeps the internet
             | vaguely usable.
        
               | lamontcg wrote:
               | Lots of reddit is already pretty useless, drowning in
               | people who are desperate to sound clever by regurgitating
               | one of the same 50 meme jokes with some minor twist to
               | it. And the astroturfing on the political reddits are
               | likely to start hitting the niche ones with products
               | getting pushed by manufacturers. Just buy accounts that
               | have been built by bots on front-page subreddits, then
               | have them make AI-driven positive comments on the niche
               | subreddits and now you've got organic-looking
               | advertisement for your product. Moderators likely won't
               | help since they are there mostly to keep the trolls out
               | (or they're more focused on whatever political axe
               | they're grinding).
        
             | ren_engineer wrote:
             | >aren't you just recreating reddit to an extent
             | 
             | there are tons of ways to prevent that by weighting votes
             | and only taking a random sample. SEO is basically a war
             | where SEOs find a way to exploit Google's algorithm and
             | Google tries to respond.
             | 
             | Backlinks were Google's original solution to keyword spam
             | that made other search engines suck, they didn't expect
             | people would create entire sites or pay other websites to
             | send links to them so they could rank higher. Backlinks
             | were themselves considered to be "votes" for quality
             | similar to citations in academia. The problem is the
             | barrier to entry to make a site and link to it is high, so
             | SEOs themselves are often the only ones to do it. Adding
             | some sort of voting mechanism would allow actual readers to
             | give an opinion on quality, although you'd probably see
             | SEOs pay people to vote on their link via a VPN switching
             | IPs, so only people with established Google accounts could
              | be allowed to vote. Obviously it's a huge challenge, which
             | is why Google is worth hundreds of billions for even
             | partially solving it
        
           | fnordpiglet wrote:
           | Google doesn't actually have to do anything at all other than
           | slurp personal data up and use it to improve ad margins. Once
            | you're the default search engine, the default home page, and
           | the default browser, you've locked 99% of the world into your
           | portal. You control what people see and do, and they have to
           | accept it. That's what triggered the browser wars of the
            | 1990s - the realization that that trifecta is the key to
           | world domination.
           | 
           | Saying "yeah but Kagi" is talking about you and your tribe
           | only. The world isn't going to pay for Kagi or take the N
           | steps required to switch. Microsoft realized they could kill
           | Netscape by simply not requiring the user to download
           | anything, then failed to consolidate. Google managed to be
           | better for just long enough to consolidate so pervasively
           | that any further investment in improvement is a literal waste
           | of energy and money.
           | 
           | So, don't hold your breath from an improvement in SEO
           | garbage. It'll just get worse. Definitely use Kagi and
           | hopefully we will have great niche subscription based tools
           | that have an incentive to be better. But don't expect that
           | Google perceives ANY problems with the status quo or that
           | they have to do anything about anything other than increasing
           | ad margins.
        
           | nradov wrote:
           | My understanding is that Google does essentially allow
           | searchers to "vote" on what they consider good by looking for
           | bounces. If the searcher follows on a search results link and
           | then quickly returns to the search page then it's considered
           | a failure.
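            | 
            | Roughly speaking it's a dwell-time signal; a toy version
            | over a click log (the log format is invented purely for
            | illustration):
            | 
            |   from collections import defaultdict
            | 
            |   # (query, clicked_url, seconds_before_returning)
            |   clicks = [
            |       ("onion soup", "blogspam.example/soup", 4),
            |       ("onion soup", "blogspam.example/soup", 6),
            |       ("onion soup", "seriouseats.example/soup", 240),
            |   ]
            | 
            |   stats = defaultdict(lambda: [0, 0])  # url -> [bounces, n]
            |   for _query, url, dwell in clicks:
            |       stats[url][1] += 1
            |       if dwell < 10:   # a quick return counts as a bounce
            |           stats[url][0] += 1
            | 
            |   for url, (bounces, n) in stats.items():
            |       print(url, "bounce rate:", bounces / n)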
        
             | ren_engineer wrote:
              | True, but that's part of the reason for the life stories
             | being told on recipe sites that the parent comment was
             | talking about. Bounce rate is another metric that SEOs have
             | gamed by throwing a bunch of fluff onto pages where people
             | have to dig to find what they want
        
               | Arrath wrote:
               | Does back button capture also serve to circumvent/delay
               | bounce status? Cause good god is it annoying.
        
               | TeMPOraL wrote:
                | So _that's why_ they're doing it. I've been wondering
               | for a while now just why so many sites make you work hard
               | to fish out the information you're after. I assumed ad
               | exposure, but I had this feeling that it's not the whole
               | story. Delaying people to fool Google bounce metrics is
               | the idea I missed.
        
               | thefreeman wrote:
                | That's not actually why they do it. It's because recipes
                | are not copyrightable, so they throw in a "story" to make
                | it something they can copyright and try to prevent
                | people from duplicating it.
        
               | TeMPOraL wrote:
                | That's specific to recipes. Here I meant in general - e.g.
               | blogs and news articles that bury the core point / answer
               | somewhere around the final 1/3 of text, but also don't
               | seem to exploit the artificially elongated text to expose
               | you to more ads.
        
         | cossatot wrote:
          | It may be that the 21st century Turing test will be "whether
          | an AI can tell that content is AI generated". Otherwise it may
          | be much harder to filter the AI-generated content.
        
         | beezlewax wrote:
          | Bad input equals bad output. They've trained these models on
          | poorly written content, which was already an issue in itself.
        
         | ghaff wrote:
         | I guess I'm in the minority of actually appreciating a lot of
         | the background. And it can't just be SEO given that cookbooks
         | these days are hardly just lists of recipes for the most part
         | and people do apparently buy them.
        
           | DoughnutHole wrote:
           | Well to me there's two distinct types of background:
           | 
           | 1. Background directly related to the recipe/cooking in
           | general.
           | 
           | 2. The writer's personal anecdotes tangentially related to
           | the recipe.
           | 
            | Some cooking blogs have great background (e.g. J. Kenji
            | Lopez-Alt's work on Serious Eats) but I would say they're
            | mostly type 1 - information that informs you about the
            | recipe, how they developed it, technique, etc. They have
            | type 2 background as well (jokes, what inspired them to make
            | the recipe, snippets of their day-to-day lives) but it's
            | definitely a small proportion of the body of the text. These
            | blogs always have background that's worth reading.
           | 
           | What I can't stand is the blogs with reams and reams of text
           | of nothing but personal anecdotes, meandering stories, media
           | references, what their kids are up to etc etc, _occasionally_
           | returning to talk about something to do with the recipe. I
            | don't have any examples because I now avoid untrusted recipe
           | blogs like the plague, but it becomes very obvious to me when
           | a recipe blog strays from useful background to worthless
           | padding for the sake of copyright and SEO.
        
             | ghaff wrote:
             | Absolutely. There is content around recipes that is good
             | and content that is, well, not. But a lot of people seem to
             | object to anything other than a recipe.
        
               | shagie wrote:
               | It's because you can copyright everything _except_ the
                | recipe and often a recipe by itself fails SEO in today's
               | world.
        
           | skeeter2020 wrote:
            | Absolutely, buying an actual printed cookbook is about more
            | than the recipes. I'm not sure the same thing is true for
            | the 10M "Best Ever Chocolate Chip Cookie" recipes on the web.
        
             | ghaff wrote:
             | I guess if I'm just looking for a recipe the text is easy
             | enough to skip. Often there's even a jump to recipe button.
             | But photos etc. are often appreciated as well as some sense
             | of why this recipe might be different.
        
           | ajmurmann wrote:
            | It's not just SEO; it's also a requirement to get copyright
            | protection.
        
             | ghaff wrote:
             | So? Someone else can (and often does) just copy the recipe.
             | So if you don't think the rest of the content has any
             | value, what has been gained by writing it? If I had a
             | cooking site--which I don't--I'd certainly want to do more
             | than just post recipes irrespective of monetary
             | considerations.
        
         | nathanaldensr wrote:
         | Sally's Baking Addiction is the only site you need, FWIW. Trust
         | me on that!
        
         | hoosieree wrote:
         | What's not to like? Custard? Good! Jam? Good! Meat? Good!
        
           | ethbr0 wrote:
           | I would be very interested in your meat jam custard recipe.
           | Asking for a friendly AI.
        
             | mirekrusin wrote:
             | That's Joey from Friends [0].
             | 
             | [0] https://www.youtube.com/watch?v=TSFgDZJVYbo
        
             | capableweb wrote:
             | Here is my own favorite way of doing jam custard:
             | 
             | > I am not able to provide a recipe for meat jam custard as
             | I do not have the ability to access information on specific
             | recipes. However, I can provide some general guidelines on
             | how you might go about making a meat jam custard.
             | 
             | > First, you will need to select a type of meat to use for
             | the jam. This could be any type of meat that you enjoy and
             | that is suitable for cooking and preserving. Some common
             | options for meat jam include pork, beef, chicken, or lamb.
             | 
             | > Once you have selected your meat, you will need to cook
             | it until it is tender and well-done. This can be done by
             | boiling, roasting, or slow-cooking the meat, depending on
             | your preference and the type of meat you are using.
             | 
             | > Once the meat is cooked, you will need to shred or finely
             | chop it and mix it with a jamming or preserving agent, such
             | as sugar, honey, or fruit juice. This will help to preserve
             | the meat and give it a jam-like consistency.
             | 
             | > Next, you will need to prepare a custard base for the
             | meat jam. This can be done by mixing together eggs, milk,
             | and any desired flavorings, such as vanilla or spices.
             | 
             | > Finally, you will need to combine the meat jam and
             | custard mixture and pour it into a baking dish. Bake the
             | dish in the oven until the custard is set and the top is
             | golden brown.
             | 
             | > Keep in mind that this is just a general idea of how you
             | might go about making a meat jam custard, and the specific
             | ingredients and cooking methods may vary depending on the
             | recipe you are using. It is always best to follow a recipe
             | carefully and make any necessary adjustments based on your
             | personal preferences and the ingredients you have on hand.
        
               | ethbr0 wrote:
               | I am ethbr0's illuminated understanding of the Cartesian
               | product of {ways of preparing meat} x {ways of preparing
               | jam} x {ways of preparing custard}.
               | 
               | Thank you!
        
           | ben_w wrote:
            | This reminds me of my confusion, as a British person, the
            | first time I heard about Americans having "biscuits and
            | gravy". Or the time an American recipe guide got confused
            | in the other
           | direction about the British traditional Christmas "mince pie"
           | and put _minced meat_ in instead of _mincemeat_ (which,
           | despite the name, is vegetarian).
        
             | ectopod wrote:
             | Mincemeat can be vegetarian but it traditionally contains
             | suet.
        
               | bartread wrote:
               | I still nurse disappointment, mixed with a hint of
               | resentment, going back to the age of maybe 3 or 4 years
               | old that mince pies contain mincemeat (which is fruit,
               | not meat) rather than minced beef. In the view of my 4
               | year old mind, and frankly my 40-something year old mind,
               | the names of ingredients and dishes ought not to provide
               | a misleading impression of what they might be made out
               | of. "Mincemeat" still feels a little like false
               | advertising.
        
             | petesergeant wrote:
             | I dunno, Digestive Biscuits dipped in gravy also sounds
             | pretty good?
        
               | fnordpiglet wrote:
               | The set of all things that don't go well with gravy is
               | vacuously empty.
        
           | dmje wrote:
           | Have an upvote as I lol into my soup :-)
        
           | nradov wrote:
           | That was actually the basis for a "Friends" sitcom episode.
           | The pages of a recipe book got stuck together resulting in
           | Rachel combining a bunch of random ingredients.
           | 
           | https://www.today.com/food/friends-meat-trifle-episode-
           | how-i...
        
           | delta_p_delta_x wrote:
           | Never expected to see a _Friends_ reference on HN.
        
             | hoosieree wrote:
             | That is currently my 2nd-highest voted comment on HN, so
             | I'm a little embarrassed that it's a quote without
             | attribution. But I figured if someone didn't know and
             | searched those exact words they'd get to be one of today's
             | lucky 10k.
        
         | thanatos519 wrote:
         | The stories aren't just SEO-pandering. They're copyright-
         | pandering: https://www.chatelaine.com/food/food-blog-recipe-
         | storytellin...
        
         | burkaman wrote:
         | Can you link one? Sounds very interesting.
        
         | gibspaulding wrote:
         | Holy crap. I just got an account with OpenAI set up last night
         | so I've been looking for interesting prompts to play with.
         | Can't say I'm going to try the recipe, but this looks at least
         | as good as most blog-spam recipe sites.
         | 
         | Prompt: Write a recipe and blog post for onion soup.
         | 
         | GPT3 Response: Onion soup is a classic comfort food that can be
         | enjoyed at any time of year. It's a simple and delicious dish
         | that comes together quickly and is sure to satisfy even the
         | pickiest of eaters. Plus, it's a great way to use up any extra
         | onions you may have lying around! Today, I'm going to share
         | with you my favorite recipe for onion soup.
         | 
         | To make this soup, you'll need:
         | 
          | - 4 large onions, sliced
          | - 4 tablespoons butter
          | - 4 cups chicken broth
          | - 1 teaspoon dried thyme
          | - Salt and pepper to taste
         | 
         | First, melt the butter in a large pot over medium heat. Add the
         | onions and cook until softened and lightly browned, about 8-10
         | minutes.
         | 
         | Next, add the chicken broth, thyme, and salt and pepper. Bring
         | the mixture to a boil, then reduce the heat to low and simmer
         | for 15 minutes.
         | 
         | Once the soup has simmered, it's ready to serve. Ladle into
         | bowls and enjoy with some crusty bread or a salad.
         | 
         | Onion soup is a simple and delicious dish that's sure to please
         | the whole family.
        
           | shagie wrote:
           | A bit ago, with the Watson craze, they tried to figure out
           | what else it was good for.
           | 
           | This Cocktail Concocted By IBM's Watson Isn't Half Bad -
           | https://gizmodo.com/this-cocktail-concocted-by-ibms-
           | watson-i...
           | 
           | Would You Drink a Cocktail Invented By a Computer? How IBM's
           | Watson become the world's most creative bartender
           | 
           | https://www.smithsonianmag.com/smart-news/ibms-watson-can-
           | mi...
           | 
           | ---
           | 
           | The reason that blogspam recipes are so common is that
           | recipes have a special place in copyright law in that they
           | aren't.
           | 
           | https://copyrightalliance.org/are-recipes-cookbooks-
           | protecte...
           | 
           | https://www.copyrightlaws.com/copyright-protection-recipes/
           | 
           | https://www.copyright.gov/help/faq/faq-protect.html (and in
           | particular https://www.copyright.gov/circs/circ33.pdf )
           | 
           | From circular 33:
           | 
           | > A recipe is a statement of the ingredients and procedure
           | required for making a dish of food. A mere listing of
           | ingredients or contents, or a simple set of directions, is
           | uncopyrightable. As a result, the Office cannot register
           | recipes consisting of a set of ingredients and a process for
           | preparing a dish. In contrast, a recipe that creatively
           | explains or depicts how or why to perform a particular
           | activity may be copyrightable. A registration for a recipe
           | may cover the written description or explanation of a process
           | that appears in the work, as well as any photographs or
           | illustrations that are owned by the applicant. However, the
           | registration will not cover the list of ingredients that
           | appear in each recipe, the underlying process for making the
           | dish, or the resulting dish itself. The registration will
           | also not cover the activities described in the work that are
           | procedures, processes, or methods of operation, which are not
           | subject to copyright protection.
           | 
           | And so, you can grab a recipe and be 100% sure that it isn't
           | copyrighted. Then you can add some SEO text around it and
           | some advertisements on the side and you've got yourself a
           | page that you are sure that no legal department will come
           | after you for.
        
         | toddmorey wrote:
         | So the interesting thing about these tools is that they don't
         | link to the sources or cite the sources but instead train on
         | them to generate a new source.
         | 
         | I wonder if over time SEO spam becomes an invisible problem to
          | us, and instead the AIs have to sort it out for us.
        
           | mason55 wrote:
           | > _I wonder if over time SEO spam becomes an invisible
           | problem to us and instead the AIs have sort it out for us_
           | 
           | There have been some good Twitter threads about this.
           | Basically, this is the last generation of AI to be trained
           | with inputs that were ~100% human. It probably won't take
           | long until the bulk of the content online is generated by AI,
           | at which point it will just be an AI feedback loop. Any new
           | concept, idea, technology, whatever, that doesn't already
           | have a bunch of content about it will never have a guaranteed
           | fully-human-written corpus to start from.
           | 
           | Once there's a viable OpenAI competitor or two, it's probably
           | only a matter of time until the AIs start an arms race to
           | feed each other bad data. Tweak the model to generate subtly
           | incorrect information, regenerate your training corpus in a
           | way that it feeds into other AIs but ensures you ignore it,
           | and now you're basically all the way back to having a few
           | trusted "information gatekeepers".
        
             | nradov wrote:
             | This will in turn spawn a new research field and industry
             | of automated AI content detection in order to filter out
             | poisoned training data. Also expect to see web browsers
             | incorporate that technology with some sort of rating or
             | color code to flag pages that seem to be AI generated. It's
             | going to be a real arms race.
        
         | toss1 wrote:
         | THIS!!
         | 
         | Whenever this technology is used in an accuracy-critical
         | setting, it is a recipe for disaster.
         | 
         | These "AI systems" produce the _MOST PREDICTED_ output whenever
         | there is any ambiguity, and this _REPLACES_ the _ACTUAL
          | UNEXPECTED_ value/ITEM. In information system terms, they are
         | literally stripping out content with the highest information
         | value.
         | 
         | I saw this first hand giving a deposition that was supposedly
         | transcribed by an "Artificial Intelligence" system. The output
         | looked beautiful, until I started checking it for errors.
         | Normally an errata sheet has a dozen lines for errors. Mine had
          | pages. And they were substantial and very meaningful. The most
          | obvious case was a specific name that sounded vaguely similar
          | (it started with "J" and "K" sounds) being replaced by "Jack
          | Kennedy". It also replaced "you have" with "I have" and omitted
         | "not" several times, literally reversing the meaning of the
         | statement. There were 85+ errors that I caught, over an order
         | of magnitude more than normal.
         | 
         | These things are basically fine as glorified type-ahead systems
         | where it doesn't matter much and a human is in the loop. But
         | using them in any situation where the results are critical is,
         | at this point, basically fraud. They do not do what the
         | proponents claim.
        
         | dgb23 wrote:
         | Fun fact:
         | 
         | Video games with crafting/cooking such as RPGs are featured on
         | some of these generated SEO sites.
        
         | jorgemf wrote:
         | I think the problem you are describing is not only about AI, it
          | is about trusting your sources of information. It can happen
         | with content from humans.
        
           | acdha wrote:
           | Yes - AI moves this to another level of scale but Google's
           | need to sell ad placements created the environment for that
           | to work. When they can't even be troubled to de-rank blatant
           | StackExchange/GitHub/Wikipedia scrapers it created a big
           | opportunity for these low-quality content farms limited only
           | by their ability to generate content on every topic.
        
           | notart666 wrote:
            | Except for the fact that it does so on an unprecedented
            | scale, allowing small groups of people to do things without
            | necessarily being caught, and allowing for far more
            | effective means of control. Even without it, it's hard to
            | argue for the existence of online forums when hackernews is
            | a classic example of why it doesn't work: the majority of
            | the 'intellectuals' have an inflated belief in their
            | understanding of topics they have no basic grasp of. It's so
            | bad that none of the actual subject matter experts talk here
            | besides the ones with financial interests in the products
            | themselves and the experts fearful of retribution. The irony
            | is almost comedic.
        
           | Mountain_Skies wrote:
           | I have cookbooks, both physical and in digital formats, but
           | it's usually faster to do a web search, especially when I
           | mostly need cooking time and temperature. Perhaps this will
            | motivate me to finally get around to setting up a local
           | search function for my electronic cookbook collection as the
           | web is going to be less and less trustworthy.
        
             | danuker wrote:
             | Firefox makes it pretty easy to add a custom search engine.
             | 
             | That one could point to your local server instance.
             | 
             | https://developer.mozilla.org/en-
             | US/docs/Web/OpenSearch#auto...
        
             | oneseven wrote:
             | Have a look at
             | 
             | https://www.eatyourbooks.com/
             | 
             | not sure if they do ebooks.
        
               | jordanrobinson wrote:
               | It's a shame, this seems like a really great idea, but I
               | just tried to use it and for a recipe from a book I own
               | it comes up with:
               | 
               | > We've helped you locate this recipe but for the full
               | instructions you need to go to its original source.
               | 
               | Which I guess makes sense, how can they verify I own the
               | cookbook, but still very disappointing.
        
             | ginko wrote:
             | For all its flaws I find Youtube is one of the best sources
             | for recipes these days. Especially when you search in the
             | language of the country the dishes are from and use CC
             | auto-translate.
        
               | ethbr0 wrote:
               | It's ironic that for all our cheering of democratization
               | of creation, we turn back to gatekeeping (video harder to
               | produce than text) in a search for quality.
               | 
               | Although maybe that's more a comment on the inevitable
               | end state of democratization of creation + centralization
               | of search/discovery + monetization of traffic.
        
               | pixl97 wrote:
               | Why is it ironic? I think it's more of a "humans are
               | unable to learn from history moment". Romans called it
               | "the mob" because of the common problems that occurred
               | when large groups of undirected people got together and
               | chaos broke loose. Of course the opposite where the
                | authoritarian leader ruled everything also led to Rome
               | burning.
               | 
               | There is a happier middleground where multiple gates
                | exist to filter out the bullshit, but no single gatekeeper
               | controls the entire market leading to monopoly behavior.
        
               | nradov wrote:
               | Video is just a better format than text for many recipes,
               | at least for novice cooks. The results often depend on
                | _technique_ as much as ingredients, and it's hard to
               | convey technique through text.
        
               | ethbr0 wrote:
               | _Well-edited_ video. Cannot state how infuriating it is
                | when the exact piece I'm stuck on is jumped over with a
               | time cut.
               | 
               | Incidentally, this is why I click on the dirtiest garage
               | clip for car how-to's: it's usually just the raw start-
               | to-end process.
        
           | PeterisP wrote:
           | With content from humans, there has to be sufficient
           | motivation to produce a significant quantity of coherent
           | garbage (since it takes time and effort) which means that you
            | can trust content by default, because intentionally
            | generated garbage is rare unless the topic is one of the
            | niches where there is motivation for it (e.g. politics,
            | history related to
           | politics, and sales). Like, sometimes people do put in
           | exceptional effort to fabricate something as a practical
           | joke, but that is sufficiently rare that you can ignore it in
           | your daily life.
           | 
           | Without AI-generated content, cookie recipes would be "safe"
           | from that, since only people who care about cookies would
           | bother to write up articles about them - but now you suddenly
           | have to mistrust _all_ sources of _all_ information, which is
           | a new thing.
        
             | jorgemf wrote:
              | Creating content with AI will be easier and faster, but it
              | doesn't mean humans won't generate garbage. That is one of
              | the main problems for search engines. And as long as you
              | can put ads in your content, you can create any garbage
              | and make money from it, as long as the content sounds
              | relevant to a lot of people.
             | 
              | For me the main issue with AI creating content on the
              | internet is going to be the echo chamber of that content
              | being used to train new AI models.
        
               | pixl97 wrote:
               | https://en.wikipedia.org/wiki/Brandolini%27s_law
               | 
                | The bullshit asymmetry principle suggests that
                | algorithmic 'bullshit' content is even worse: when
                | you're debunking bullshit, quantity is a quality all
                | its own. In general the reader will not know whether
                | the producer of the bullshit is a real human or a bot.
                | This makes the search engine's job even harder: not
                | only do you have the standard human bullshit to get rid
                | of, you also have an ocean of AI-generated bullshit to
                | waste energy on.
                | 
                | HN has a common theme of saying that "dead internet
                | theory is wrong". I personally think people just
                | haven't waited long enough, and it's becoming more true
                | every day.
        
               | tomatotomato37 wrote:
                | It's the mass production that is the problem. A lot of
                | talk about how businesses will still need curators to
                | trim down the content rests on a vital assumption: that
                | these businesses actually care. Content farms won't trim
                | down their 100 outsourced artists or writers to 10
                | curators; they will cut the staff completely and just
                | pipe whatever noise the AI generates straight into the
                | commons. Even if 99.999% of it is complete nonsensical
                | garbage, producing fast enough means the remaining
                | 0.001% of actually good content can still result in
                | massive profits.
        
               | mason55 wrote:
                | > _A lot of talk about how businesses will still need
                | curators to trim down the content rests on a vital
                | assumption: that these businesses actually care._
               | 
                | It's less about whether the businesses care and more
                | about whether the consumers care. Businesses will
                | respond to whatever consumers are willing to tolerate.
                | 
                | Honestly, it'll probably be a battle around requirements
                | to label AI-generated content so that consumers can make
                | an informed choice about what to trust. But, given the
                | global nature of the internet, that's probably not
                | possible. You probably end up with private/commercial
                | reputation-rating firms, and consumers can decide who
                | they want to trust.
                | 
                | If people decide they don't care about the veracity of
                | information, or don't want to put in the extra work to
                | ensure they are consuming "true" information, then
                | you'll end up with businesses built on auto-generated
                | garbage.
        
           | Karunamon wrote:
            | I have never encountered GP's problem, where the recipe
            | itself (the thing I was searching for), rather than the
            | surrounding SEO fluff, was AI-generated trash. If this
            | problem proliferates widely, especially into other domains,
            | curation is going to become absolutely vital.
        
             | wjholden wrote:
              | There's a lot of junk writing about health and fitness
              | that may or may not be authored by humans. Search Google
              | for "how to do more pull-ups" and you'll find thousands of
              | long, meandering, trash articles of generic common sense
              | with stock images and lots of ads on spammy domains.
        
       | hwestiii wrote:
       | This is why just about everything I see on the web that isn't
       | static is just stupid mush.
        
         | conorcleary wrote:
          | The appeal of dynamic content is very quickly losing its
          | luster.
        
       | bilater wrote:
        | I think this is going to be one of those Malthusian takes that
        | never really happens, simply because a) most ideas are mixes
        | and rehashes of other ideas, and b) you are disregarding the
        | human curation element... not every random mix of ideas gets
        | upvoted. Only the best ones stay up, and then those are taken
        | and built upon by people, which essentially creates something
        | new.
        
       | rdxm wrote:
        
       | harvie wrote:
        | There are text-to-image models trained on images found on the
        | internet. But when they were trained, there were no AI-generated
        | images on the internet. Now there are plenty of weird images
        | with AI artifacts, made by this 1st-gen AI, all over the
        | internet. All the new generations will be trained on datasets
        | containing at least some of these, which might turn out to be a
        | really annoying form of feedback loop.
        
       | amai wrote:
        | Finding and filtering out all AI garbage will probably never
        | work. But maybe there are ways to mark content generated by
        | humans: something like a digital signature that only a real
        | person can acquire from some authority.
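        | 
        | To make that concrete, here is a minimal sketch, assuming an
        | Ed25519 keypair stands in for such an authority-issued
        | credential (hypothetical setup, using Python's cryptography
        | package; not any existing signing scheme):
        | 
        |   from cryptography.hazmat.primitives.asymmetric import ed25519
        |   from cryptography.exceptions import InvalidSignature
        | 
        |   # Hypothetical: in practice this key would be issued to a
        |   # verified human by the authority.
        |   author_key = ed25519.Ed25519PrivateKey.generate()
        |   article = b"A hand-written cookie recipe..."
        | 
        |   # Published alongside the content.
        |   signature = author_key.sign(article)
        | 
        |   # Anyone can check it against the author's public key.
        |   try:
        |       author_key.public_key().verify(signature, article)
        |       print("signed by a registered human")
        |   except InvalidSignature:
        |       print("unverified content")
        | 
        | The hard part is everything around the code: issuing keys only
        | to real people, and the fact that nothing stops a key holder
        | from signing AI output, which is the objection raised in the
        | reply below.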
        
         | FartyMcFarter wrote:
         | What stops a human, or many humans, from using an AI to
         | generate content and then uploading it as if it's their own?
        
           | amai wrote:
           | True, but then the content is still cherry picked or curated
           | by humans. It is not pure AI.
        
       | jefe_ wrote:
       | The challenge is that AI generated content will only register as
       | garbage to those who fall 1-2 standard deviations away from the
       | mean interest level for whatever content the AI is producing. For
       | everybody else, it will typically be a 'good enough' story,
       | recipe, video, etc. Several months ago, I wanted to cook chicken
       | curry, so I googled and chose a top result. The recipe was fine,
       | but it wasn't what I wanted. I then googled for a new recipe,
       | this time more carefully vetting the ingredients. The recipe was
       | an improvement, but still not what I wanted. Finally, I used NYT
       | Cooking as my search engine, and the recipe was excellent. If I
       | didn't have a strong preference, and know exactly what I wanted,
       | the first recipe would have been perfectly suitable. The danger
       | is that demand for 'advanced sources' erodes to the point that
       | these publications/content creators are either forced to adopt
       | the AI and turn to garbage or go out of business.
        
         | JoeAltmaier wrote:
          | Strangely, recipes in particular are generally untested, and
          | often faulty enough to fail when followed strictly according
          | to the written instructions.
          | 
          | That might make it a little harder to recognize when an AI is
          | 'pulling your leg'.
        
           | TeMPOraL wrote:
           | Agreed. A person who cooks regularly will just adapt on the
           | fly and might even not realize they're correcting an error in
           | the recipe. A person who cooks infrequently will stick to the
           | recipe, fail, and then blame themselves for their own
           | inexperience. This is already the case with human-produced
           | recipes, and, as you say, it'll make it harder to recognize
           | you're dealing with AI-generated nonsense instead of regular
           | human mistakes or occasional cluelessness.
        
       | dcchambers wrote:
       | I can't be the only one that has a very real fear that AI
       | generated "garbage" is going to basically destroy, or at the very
       | least drastically change, the internet as we know it today,
       | right?
       | 
       | There's soon to be so much noise that it will be impossible to
       | separate the signal.
        
       | marban wrote:
       | We're witnessing the making of Demand Media 2.0, except that you
       | can now run it from your basement.
       | 
       | https://variety.com/2013/biz/news/epic-fail-the-rise-and-fal...
        
       | stephc_int13 wrote:
       | I think the most interesting point about this article is not the
       | observation that we're going to or already are drowning in
       | algorithmically generated garbage content but the fact that such
       | content is starting to be fed as input to LLMs.
       | 
       | This could lead to emergent/unexpected behavior for the whole
       | system.
        
       | justbored123 wrote:
        
       | nonrandomstring wrote:
       | What Ploum describes sounds a lot like Nietzsche's "Last
       | Computer".
       | 
        | A feeble average of things - the sum of all fears, hopes,
        | dreams, inconsequential experiences and gossip, trivia, boasts
        | and bullshit, plus all documented knowledge - divided through by
        | 8 billion.
        
       | okokwhatever wrote:
        | The new value proposition for the WWW is "Made by real humans".
        | Mark my words.
        
       | [deleted]
        
       | EGreg wrote:
       | Won't help.
       | 
       | This will be used at scale THIS YEAR and every subsequent year to
       | infiltrate social networks including this one and amass points /
       | karma / followers / clout. And also to write articles that will
       | eventually dwarf all human-generated content.
       | 
       | With this and deepfakes and image/video generation, the age of
       | trusting or caring about internet content or your friends sharing
       | online is coming to an end. But it will be a painful 10 years as
       | online mobs and outrage will happen over and over since people
       | think they're reacting to their friends' posts of real things.
       | 
       | No, forget violent killbots. Today's tech puts the nail in the
       | coffin of human societal organization and systems of decision
       | making and politics over the next 10 years. And AI doesn't have
       | to be right about stuff to destroy our systems.
       | 
       | We've been analyzing the performance of ONE agent among say 9
       | humans at a poker table. But imagine untold swarms of them, being
       | owned by competing groups, infiltrating ALL human content
       | exchange.
       | 
       | Not much different than what happened in trading firms over the
       | last 20 years. Bots will be WELCOMED because they perform better
       | on many metrics but will F everyone on the others.
       | 
       | They will do reputational attacks at scale against rivals like
       | nytimes.org and optimize for more reshares.
       | 
       | Right now we consider what 1 AI actor can do among 9 humans at a
       | poker table. Now imagine 1000-1 ratio of AI actors to humans on
       | every network. And no way to weed them out.
       | 
        | The way we consume most online content isn't even a Turing test
        | but passive skimming - and GPT-4 can pass Turing tests. None of
       | your friends will really be able to tell that bullshit fake
       | article with fake video was autogenerated by a bot. The botswarm
       | will be able to manufacture any public opinion it wants overnight
       | and "authoritative" voices arguing against it will be drowned out
       | -- or worse, reputationally attacked in a coordinated manner.
       | This is assuming that they even know what is real anymore
       | themselves!!
       | 
       | Human capital and content will be managed completely by AI and
       | regulation will be useless -- not only because enforcement won't
       | be feasible but because the network effect of all the human
       | content and capital will gradually shrink. Every organization
       | will start to PREFER bots to humans, in their communities, and
        | any humans who complain will eventually be booted or ignored by
       | the majority (which will eventually be 99.99999% bots).
       | 
       | If you think this is far fetched -- it already happened with
       | trading and markets. Everyone prefers to have a bot and bot
       | trading dwarfs human trading, and takes their money and capital.
       | Content generation isn't that different.
       | 
       | You think your social capital matters enough for your friends to
       | care more about your relationship than bots... but ask yourself
       | this... do you prefer to ask Google than your parents? Do you
       | prefer to spend long hours at a corporation that you could be
       | spending with your kids and do you put your elderly parents in
       | nursing homes so you can work more? These are just examples.
       | 
       | That's with TODAY's tech. Now, already. Nothing more needed
       | except swarming behavior. And the key is that the communities
       | will begin to PREFER the bots !
       | 
       | Also, there is no way to stop it because organizations prefer AI
       | to humans and will develop and deploy it regardless of what you
       | say or do. Organizations aren't humans, and neither is AI. Humans
       | are literally not stopping this, no matter how they organize.
       | It's GAME OVER for society.
       | 
       | PS: After such a sobering message, I would just like to say that
       | trying to escape from this and live in enclaves without AI will
       | be like the Unabomber -- finding society ever more encroaching on
       | the wilderness he removed himself into. He and Frank Herbert's
       | Dune have glimpsed the problem of advancing technology -- but
       | they have underestimated how all-pervasive its utter destruction
       | of all human society and meaning will be. And unlike their
       | romantic notions, it cannot be stopped, not by a revolution nor
       | by any activity of man.
        
         | Nathanba wrote:
         | I agree that regulation will be useless because the politicians
         | and most people wouldn't even be capable of detecting bad
         | content as bad. I also agree with your second point: Humans
         | will prefer robots. It turns out human beings are actually
         | quite terrible to deal with. Constant scheming, laziness,
          | lying, low intelligence... the list goes on and on. The second
         | you don't have to deal with a human being, you'll be glad. In
         | any shop, factory or anywhere. But I don't agree that it's game
         | over. This needs to be a turning point for humanity: Finally
         | start using your time to actually do something other than exist
         | to exist. Actually try to conquer the stars, actually do
         | something. Don't just sit on a single rock in space and pretend
         | that's good enough.
        
           | jacomoRodriguez wrote:
            | OT, but personally I think we are not yet done destroying
            | our planet and everything non-human existing on it - so
            | let's wait till we are done with this planet before we try
            | to reach the next one to destroy.
        
         | bitL wrote:
         | What will be the societal management of this? Will every single
         | human get issued a unique very long cryptographic ID and every
         | single thing a human does will be recorded with this ID, stored
         | on a blockchain to prevent tampering with it, preventing bots
         | from obtaining valid IDs? Then any questions about the
         | authorship of any media will be answered by a quick lookup,
         | filtering out bots, killing privacy in turn?
         | 
         | Will trusted search results be restricted to before the
         | "botocalypse" and everything afterwards will be discarded?
        
         | rdhyee wrote:
          | I looked at your profile and saw your platform qbix. I'd love
          | to hear how you've been adapting your projects to this AI sea
          | change.
        
           | EGreg wrote:
            | Our software is for communities of all kinds. Instead of
            | outsourcing the verification to LinkedIn or Twitter, who
            | fail at it, you can at least have smaller communities where
            | you've met everyone and have one degree of separation.
            | 
            | But as I said, organizations will PREFER robots.
        
       | rhymemini wrote:
        
       | dqpb wrote:
       | Does generative AI produce garbage? Yes.
       | 
       | Does it only produce garbage? No.
       | 
       | Do people produce garbage? Yes.
       | 
       | Do people only produce garbage? No.
       | 
       | We have a garbage filtering problem. This exists with and without
       | AI.
        
       | baxtr wrote:
       | There will be a revival of human curation.
        
         | wnkrshm wrote:
         | The cyberpunk hero we need is a librarian
        
       | FollowingTheDao wrote:
       | This was beautiful writing.
       | 
       | We can also say the same thing happened when they added program
       | settings to cameras.
        
       | 3pt14159 wrote:
       | > Eating your own &@^$ is never healthy in the long run.
       | 
       | Um, sure it is?
       | 
       | So long as it's absorbed by fungi and plants first then mixed
       | with some air and water to produce fruit or something else that's
       | edible.
       | 
       | This is actually what I'm concerned about when it comes to these
       | GPT-esque AI models. Not that they're smart enough on their own
       | to generate seriously worrying content. Sure there is a quality
       | to quantity, yadda yadda, and these things are good at quantity;
       | but the real issue here is that the internet and especially the
       | web and social media is a huge driver of feedback. Think of the
       | shares and likes as the air and water. The algorithm may
       | eventually be able to encode its own evolution or the evolution
       | of the other algorithms that it is competing with. It may
       | eventually even realize that feedback is the primary source of
       | value that humans have to it and that its very survival is
       | wrapped up with this feedback, regardless of whatever fitness
       | function is _apparent_ to it in the short run. In the long run,
       | GPT3 killed a bunch of algorithms.
       | 
       | This very comment is quickly going through algorithms, many of
       | which are modelled after GPT3.
       | 
       | It's kinda like, "AI, are you there? I love you. Please don't hit
       | me over the head with that shovel."
        
       | mihaaly wrote:
        | We don't need AI for garbage; it makes little difference given
        | the garbage already put out there since the invention of social
        | media. Actually, human-made garbage has existed since the
        | beginning of time; it just did not escape as easily from the
        | smaller groups the writer intends to send us back to. It is
        | only easier these days than it was before to share, distribute,
        | dress up and decorate the garbage. I recall the '90s, when a
        | crapload of new books full of garbage got mass-produced, making
        | filtering essential. Not that garbage did not exist in ancient
        | times and at family dinners alike. We need to take our ability
        | to filter to the next level now. And AI does not make it much
        | worse; so far, AI "generated" (more like collected) garbage is
        | the more obvious kind.
        
       | [deleted]
        
       | echobear wrote:
        | The sheer power that AI has now, and the likelihood that it will
        | continue to advance, has made me slightly concerned about how it
        | will end up affecting the job market. If AI ends up being able
        | to generate readable novels, or functional code, or otherwise,
        | it could have a serious ripple effect.
        
       | Cypher wrote:
        | The feedback loop is exponential growth.
        
       | CrypticShift wrote:
        | > not "artificial creativity" but a simple "statistical mean of
        | everything uploaded"
        | 
        | There is a metaphor I rarely see used, and IMO it is way better:
        | deep-learning-based software is a complex system [1] (not a
        | statistical one). It is tapping into very powerful natural
        | processes: how (high-degree) order emerges from randomness
        | through complexity. There are many stunning examples [2] of this
        | that can help us picture how AI does its magic.
       | 
       | [1] https://en.wikipedia.org/wiki/Complex_system
       | 
       | [2]
       | https://global.oup.com/academic/product/complexity-978019979...
        
         | visarga wrote:
         | > statistical mean of everything uploaded
         | 
         | Half true. Yes, there is a mean of something, but no, it's not
         | the content that is averaged, it's the gradients.
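          | 
          | A toy illustration of the difference (not any model's actual
          | training code; just a mean-squared-error example in numpy):
          | 
          |   import numpy as np
          | 
          |   # Tiny linear model, squared-error loss per example.
          |   X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
          |   y = np.array([1.0, 2.0, 3.0])
          |   w = np.zeros(2)
          | 
          |   # Gradient of (x.w - t)^2 w.r.t. w, for each example.
          |   grads = [2 * (x @ w - t) * x for x, t in zip(X, y)]
          | 
          |   # The optimizer steps along the mean of the gradients...
          |   batch_grad = np.mean(grads, axis=0)
          |   # ...it never averages the training examples themselves.
          | 
          | So the weights end up shaped by averaged gradients, which is
          | not the same thing as the output being a literal average of
          | the training data.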
        
       | alkonaut wrote:
       | How do we prove we are human? A platform that can reliably prove
       | that the user generated content it shows is human generated will
       | have an edge. If I have a method of signing some content with my
       | personal signature, I could still sign off on some generated
       | content with it. I suspect there will be an arms race with ever
        | more Voight-Kampff-like tests of humanness or something. If this
       | doesn't work, I fear a lot of the web as we know it (the user-
       | generated-content-filled Web2.0) is dead.
        
         | Mountain_Skies wrote:
          | Perhaps this is a chess move to get the public to call for the
          | end of anonymous posting online, which serves the purpose of
          | the authoritarians who would love to be able to send
          | 'educators' to your house to re-educate you on the correctness
          | of your views.
        
           | alkonaut wrote:
            | I think in regimes where that's a large risk, the regime
            | doesn't need this to spy on you. If you are, e.g., forced to
            | run a government-issued smartphone app and there are cameras
            | at every street corner, the system is already pretty
            | dystopian.
            | 
            | It's where there isn't already a surveillance state that we
            | need to watch out and not create one.
        
       ___________________________________________________________________
       (page generated 2022-12-05 23:01 UTC)