Post AbPNBeYSTSBoimSWOG by peter@social.linss.com
 (DIR) Post #AbOWjz53N4h4ejBrXs by cstross@wandering.shop
       2023-11-02T11:26:51Z
       
       0 likes, 6 repeats
       
       REMINDER: ChatGPT, Stable Diffusion, and other large trained neural models are NOT "artificial intelligence", they're just stochastic parrots, remixing and regurgitating what they've been fed. There's no theory-of-mind involved, so no understanding: there's no "there" there. (A real live parrot exhibits more intelligence than this.)
       
       Don't call it AI; call it parrot-tech. That way you'll have a better perspective on what it can (and can't) do.
       
 (DIR) Post #AbOZJjb4cF5ey4LHTU by wyrmworksdale@dice.camp
       2023-11-02T11:41:00Z
       
       0 likes, 1 repeats
       
       @cstross I call it Complex Autocorrect. "It's like hitting the predictive text button on your phone keyboard, but a paragraph or page at a time instead of a single word." Same concept for images or anything else.
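       The "predictive text, a paragraph at a time" framing above can be made concrete with a toy bigram model — a minimal sketch of iterated next-word prediction only, not how any real LLM works (real models sample from learned token distributions over huge corpora, not raw word-pair counts):

```python
import random
from collections import defaultdict

# Toy "Complex Autocorrect": a bigram model that can only parrot
# back word transitions it has already seen in its training text.
corpus = ("the parrot repeats what the parrot hears "
          "and the model repeats what the model reads").split()

# Count which word has been observed following which.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def autocomplete(seed, length=8, rng=random.Random(0)):
    """Repeatedly hit the 'predictive text button': pick each next
    word from the ones observed after the current word."""
    out = [seed]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(autocomplete("the"))
```

       Every word it emits is a remix of the training text; nothing in the loop models what any of the words mean.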
       
 (DIR) Post #AbOu9DPzUzbfJCz3yq by troed@masto.sangberg.se
       2023-11-02T15:52:36Z
       
       0 likes, 0 repeats
       
       @cstross Otoh it seems we're not any different ourselves. Our consciousness is just the replaying of the events that happen outside of our control.
       
 (DIR) Post #AbPGdd6kkPfifAtRom by smadin@better.boston
       2023-11-02T19:38:32Z
       
       0 likes, 2 repeats
       
       @cstross I prefer the term "Big-Data Statistical Models", it's more precise, and no one disagrees with me when I say it's wrong to generate BDSM text and images without all sources' and participants' consent
       
 (DIR) Post #AbPNBc0LxHiioQUFxA by foone@digipres.club
       2023-11-02T16:52:59Z
       
       0 likes, 0 repeats
       
       @cstross yes, this! I keep seeing people posting about how this thing or that "lied" or "made up", and I'm like IT'S NOT SMART ENOUGH TO LIE!
       
       It can't LIE, that requires too much of a model of reality, a model it doesn't have.
       
 (DIR) Post #AbPNBeYSTSBoimSWOG by peter@social.linss.com
       2023-11-02T21:18:05Z
       
       0 likes, 0 repeats
       
       @foone @cstross yeah, I also hate the line that it “sometimes hallucinates wrong answers”.
       
       “Hallucinates” isn’t an accurate term, but it would be better to say “it *always* hallucinates the answer, it just sometimes happens to be (mostly) right.”
       
       (And when it’s obviously wrong, people just laugh and move on rather than question the whole thing.)
       
 (DIR) Post #AbQxugHCi71WbH7PrE by freemo@qoto.org
       2023-11-03T15:44:17Z
       
       0 likes, 0 repeats
       
       @cstross As an expert on this topic I'd like to say that what you're saying is more or less correct.
       
       LLMs are, as you say, parrots, but despite the minimizing language that is still hugely powerful. What makes them so powerful is that when you ask one to parrot back on some topic, you can tell it what audience to limit itself to. By asking it to "speak as a scientist", it parrots back what your typical scientist might say on the subject.
       
       While this has limitations, and it certainly isn't AI in the sense that it is human-like, it is a vastly powerful tool within the context of what it can do.
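       The "speak as a scientist" trick described above amounts to prepending an instruction to the question before the model completes it. A minimal illustrative sketch — `build_prompt` is a hypothetical helper, not any particular library's API:

```python
# Hypothetical helper illustrating persona conditioning: the prompt
# is steered by an instruction prepended to the question, and the
# model then "parrots" a completion shaped by that persona.
def build_prompt(question: str, persona: str = "a scientist") -> str:
    return (f"Speak as {persona}. Limit yourself to what "
            f"{persona} would typically say on the subject.\n\n"
            f"Question: {question}")

print(build_prompt("What causes tides?"))
```

       The same question with a different `persona` argument would yield a different slice of the training distribution being parroted back.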