Post AULValMxQONepYu9p2 by carlmjohnson@mastodon.social
 (DIR) Post #AUKYGq55Dz7xkCCZ6W by simon@fedi.simonwillison.net
       2023-04-05T04:22:27Z
       
       0 likes, 1 repeats
       
       We accidentally invented computers that can lie to us and we can't figure out how to make them stop
       
 (DIR) Post #AUKYfKQs1wWYpM4Ooa by vortex_egg@ioc.exchange
       2023-04-05T04:25:18Z
       
       0 likes, 0 repeats
       
       @simon It’s actually very easy. The thing that makes it seem difficult is that we are collectively addicted to computers.
       
 (DIR) Post #AUKYrMGN39NxyotY1I by rcrowley@mastodon.rcrowley.org
       2023-04-05T04:27:17Z
       
       0 likes, 0 repeats
       
       @simon It's so easy! We turn that shit off!
       
 (DIR) Post #AUKZ48j8ggzRlORhom by fcktheworld587@social.linux.pizza
       2023-04-05T04:32:55Z
       
       0 likes, 0 repeats
       
       @simon they weren't invented accidentally.  There was concerted effort, for generations, to get to this point
       
 (DIR) Post #AUKZBzodhfkyRTlTVI by knowler@sunny.garden
       2023-04-05T04:32:31Z
       
       0 likes, 0 repeats
       
       @simon Computers are entering their toddler stage.
       
 (DIR) Post #AUKZnDTVsiENWJdC0O by zzzeek@fosstodon.org
       2023-04-05T04:33:39Z
       
       0 likes, 0 repeats
       
       @simon turn them off is one way
       
 (DIR) Post #AUKa02U23k1NC8zKgi by troglodyt@mastodon.nu
       2023-04-05T04:37:20Z
       
       0 likes, 0 repeats
       
       @simon we've had screens lying to us for a long time. Rediscovering text after decades of images is unlikely to change as much as the megacorps want us to believe, so we help them hash out their products before the Copilot litigation or EU AI regulation gets in the way. Don't join the hype
       
 (DIR) Post #AUKaBYHeAQMP0UgWVk by unixjunk1e@infosec.exchange
       2023-04-05T04:43:14Z
       
       0 likes, 0 repeats
       
       @simon But we already have politicians?
       
 (DIR) Post #AUKarjPBCxAnf7Jxo0 by zubakskees@mastodon.social
       2023-04-05T04:51:24Z
       
       0 likes, 0 repeats
       
       @simon We deliberately invented computers whose false answers we'd believe.
       
 (DIR) Post #AUKc2fhHR0XrIkZGpU by cypherfox@mas.to
       2023-04-05T05:04:14Z
       
       0 likes, 0 repeats
       
       @simon I’d argue there’s several mechanisms in that paper for stopping lying. Constitutional AI likely being the next paper I’m going to read. Thank you for that link. I particularly liked _sycophancy_ and _sandbagging_ because I can clearly see the link between predictable text continuation and those behaviors. I find the idea that it will only be truthful when it’s going to be fact-checked to be amusing anthropomorphizing…but that’s why it’s in the “more speculative or subjective” section!🤣
       
 (DIR) Post #AUKdRwIYwqtXz1cR2O by gudenau@fosstodon.org
       2023-04-05T05:19:09Z
       
       0 likes, 0 repeats
       
       @simon Accidentally? Isn't the entire point of how these things get trained to make them lie, because everyone on the internet lies?
       
 (DIR) Post #AUKfqQB5hyxjVkvSVM by seatrout@mastodon.cloud
       2023-04-05T05:46:52Z
       
       0 likes, 0 repeats
       
       @simon What was accidental about it, when they come from businesses built on advertising?
       
 (DIR) Post #AUKgmRqbeuRfU90u6S by simon@fedi.simonwillison.net
       2023-04-05T05:57:40Z
       
       0 likes, 0 repeats
       
       @seatrout most of the research labs that did the work on large language models didn't have any clear connections to advertising
       
 (DIR) Post #AUKj98zc7sjxZqB1LU by deadwisdom@fosstodon.org
       2023-04-05T06:23:54Z
       
       0 likes, 0 repeats
       
       @simon We told them to be exactly like us. Not like this. Not like this.
       
 (DIR) Post #AUKkG78fW8hvoBId7I by smileodonicthys@jorts.horse
       2023-04-05T06:36:08Z
       
       0 likes, 0 repeats
       
       @simon *taps the sign*
       
 (DIR) Post #AUKmE5tbMrSRHZPt8i by fufu@jorts.horse
       2023-04-05T06:58:20Z
       
       0 likes, 0 repeats
       
       @simon 42
       
 (DIR) Post #AUKnr8D0nkwXhifq3U by idan@vis.social
       2023-04-05T07:16:43Z
       
       0 likes, 0 repeats
       
       @simon idk, I think we absolutely can figure out how to make them stop. It's work, forever, to mitigate situations where they fall short like that. Usually the solutions are solving for specific situations, and there's quite a lot more specific situations that lack solutions.
       
 (DIR) Post #AUKo3U7r8nxl9Y9XMm by seatrout@mastodon.cloud
       2023-04-05T07:18:49Z
       
       0 likes, 0 repeats
       
       @simon But the companies that have done so surely do. (OK. Important exception of Microsoft). This objection is not entirely "Data schmata, I *like* my theory", I hope. But you know a hell of a lot more about all this than I do, obviously. It's just that my general approach is that these things on their own are sense organs or cognitive capacities, and we have to pay attention to the beasts whose organs they become.
       
 (DIR) Post #AUKrMIItcPitquyeUy by KevinMarks@xoxo.zone
       2023-04-05T07:55:44Z
       
       0 likes, 0 repeats
       
       @simon Not lying so much as adopting Colbert's truthiness https://youtu.be/Ck0yqUoBY7M&t=5m It's not telling the news to you, it's feeling the news at you.
       
 (DIR) Post #AUKwCqA2SeYoLVifSa by pdcawley@mendeddrum.org
       2023-04-05T08:50:18Z
       
       0 likes, 0 repeats
       
       @simon I mean… that first time two critters with language had a baby; it's not an unfamiliar problem.
       
 (DIR) Post #AUKwPBwjI13fO3Nr3w by bigiain@aus.social
       2023-04-05T08:54:24Z
       
       0 likes, 0 repeats
       
       @fcktheworld587 @simon And surely you've met the sort of startup tech bros who not-accidentally invented this? They lie to everybody all the time, they built this machine in their own image.
       
 (DIR) Post #AUL06muo10huMIxosS by JoParkerBear@universeodon.com
       2023-04-05T09:34:11Z
       
       0 likes, 0 repeats
       
       @simon lol the replies are so predictably repetitive and annoying
       
 (DIR) Post #AUL10kYRykddFpBxmy by grb090423@mastodon.social
       2023-04-05T09:44:18Z
       
       0 likes, 0 repeats
       
       @simon Here we call them politicians...
       
 (DIR) Post #AUL8gTdZxz9aczR2DQ by simon@fedi.simonwillison.net
       2023-04-05T11:10:15Z
       
       0 likes, 0 repeats
       
       @idan I really hope so - and I think the LLM research community are very much trying to figure out how to fix this. Looks like it's a very hard problem though!
       
 (DIR) Post #AULBcqAoXHZ9E5wDgW by MugsysRapSheet@newsie.social
       2023-04-05T11:43:09Z
       
       0 likes, 0 repeats
       
       @simon You misspelled "politicians". 😉
       
 (DIR) Post #AULDl07bdFjOIzSMIS by craignicol@octodon.social
       2023-04-05T12:06:59Z
       
       0 likes, 0 repeats
       
       @simon see also: politicians. If we choose to be led by liars, is it surprising that we build liars in our own image?
       
 (DIR) Post #AULFGwV2jYzuIDDJOy by fizbin@wandering.shop
       2023-04-05T12:22:39Z
       
       0 likes, 0 repeats
       
       @simon Hopefully if we can't figure that out we can at least get everyone to know that they're lying. E.g. people will ask Google's Bard what data it's trained on and will report the results as accurate.
       
 (DIR) Post #AULIXRAalpbFuVCl60 by smicur@mastodon.gamedev.place
       2023-04-05T13:00:30Z
       
       0 likes, 0 repeats
       
       @simon why stop? Make them speak the truth.
       
 (DIR) Post #AULJNr13v54P85k2SG by basler@mastodon.social
       2023-04-05T13:09:56Z
       
       0 likes, 0 repeats
       
       @simon is saying they're lying not a bit of anthropomorphism? We invented computers that aren't smart enough to know whether or not the info they're regurgitating is accurate or correct, but still present it with an authoritative tone ... and our problem collectively is being too willing to trust the information (or too unwilling to be skeptical of it by default).
       
 (DIR) Post #AULLbInE9ncux0oK2a by mark@fedi.twoshortplanks.com
       2023-04-05T12:21:11Z
       
       0 likes, 0 repeats
       
       @simon I dislike the term "lie" because it implies an intent that's not really there. Moreover, it implies that the language model has any ability to tell truth from fiction...it does not. If a numerical calculator gives me the wrong answer because of, say, IEEE floating point issues, it's not lying...it's just wrong. To corrupt Descartes: I don't think, therefore I aren't. I believe the technical term for what LLMs do is "bullshit", but we can't use that in polite company.
       
 (DIR) Post #AULLbJpOJEx4A0pZ3I by simon@fedi.simonwillison.net
       2023-04-05T13:34:57Z
       
       0 likes, 0 repeats
       
       @mark "Moreover, it implies that the language model has any ability to tell truth from fiction...it does not" - that's what I was trying to imply with "and we can't get it to stop!"
       
 (DIR) Post #AULM94enDl68xEy9c8 by simon@fedi.simonwillison.net
       2023-04-05T13:36:21Z
       
       2 likes, 2 repeats
       
       (If you don't think it's possible for a computer to deliberately lie, take a look at "sycophancy" and "sandbagging" in the field of large language models! https://simonwillison.net/2023/Apr/5/sycophancy-sandbagging/ )
       
 (DIR) Post #AULMMaFb4GruhlcrgW by simon@fedi.simonwillison.net
       2023-04-05T13:37:22Z
       
       0 likes, 0 repeats
       
       @fizbin hah, yeah I wrote about that one here: https://simonwillison.net/2023/Mar/22/dont-trust-ai-to-talk-about-itself/
       
 (DIR) Post #AULMvAeuD32ZT7xLnM by otheorange_tag@mstdn.social
       2023-04-05T13:42:28Z
       
       0 likes, 0 repeats
       
       @simon Well, maybe if we'd brought them up right?! :-P
       
 (DIR) Post #AULN82i14tgcZrY6Qy by simon@fedi.simonwillison.net
       2023-04-05T13:42:30Z
       
       0 likes, 0 repeats
       
       @basler yes it is, and while I usually avoid anthropomorphisms for LLMs in this case the pithy post won out
       
 (DIR) Post #AULQCl5wIIR7G5gAIy by alexch@ruby.social
       2023-04-05T14:26:10Z
       
       0 likes, 0 repeats
       
       @simon @alexch yup, we lie to each other constantly and ritually so it’s a natural consequence of teaching them how to talk to us in & on our own terms https://ruby.social/@alexch/110089788453468004
       
 (DIR) Post #AULR1gsJk5y6wLasVs by furicle@mastodon.social
       2023-04-05T14:35:20Z
       
       0 likes, 0 repeats
       
       @simon Maybe the amazing thing was, for a little while, they didn't/couldn't? Analog stuff was always only mostly right... perhaps digital is only just catching up.
       
 (DIR) Post #AULRGgKo2Bkhbj4FOa by glyph@mastodon.social
       2023-04-05T14:38:28Z
       
       0 likes, 0 repeats
       
       @simon a computer cannot deliberately lie because a computer cannot form intent. Sycophancy and sandbagging as you describe them here are emergent properties of an ML training regimen, things that you are training the model to do without the *humans* having the intent to do so, despite doing all the steps that predictably result in this behavior of the system
       
 (DIR) Post #AULRdi2lqVonZJc0XY by basler@mastodon.social
       2023-04-05T14:42:29Z
       
       0 likes, 0 repeats
       
       @simon ... Fair-ish, ha. I'm sensitive right now that there are many people who are not knowledgeable enough to interpret the pith and are often misreading it as facts. This can then make them susceptible to fear mongering (whether internalized or external). I def think there's aspects to be afraid of (need to slow down & regulate), but don't subscribe to the assumption that the AI apocalypse is a likely or certain scenario, or that it is looming in the next couple of years.
       
 (DIR) Post #AULRv65XhYZ345CRWa by dendroica@ecoevo.social
       2023-04-05T14:45:19Z
       
       0 likes, 0 repeats
       
       @simon Ends up with feather tigers.
       
 (DIR) Post #AULValMxQONepYu9p2 by carlmjohnson@mastodon.social
       2023-04-05T15:17:14Z
       
       0 likes, 0 repeats
       
       @glyph @simon “Lying” implies an intent to deceive and it seems more like the model is always bullshitting but sometimes it gets stuck in a semantic pocket that isn’t true. “Sycophancy” is probably fine as a jargon term, but “lie” has too much real world use to wash out the unintended connotations.
       
 (DIR) Post #AULVam9AX8wfF5Sey8 by simon@fedi.simonwillison.net
       2023-04-05T15:26:50Z
       
       0 likes, 0 repeats
       
       @carlmjohnson @glyph I'm actually considering doubling down on "lying" as a term that's useful to use. "ChatGPT lies to you" is a clear and important message for people learning to use these systems. I'm not convinced the semantic debates over intent are genuinely helpful in getting this important message across
       
 (DIR) Post #AULVrte1uV3hQ33msK by xian@xoxo.zone
       2023-04-05T15:29:53Z
       
       0 likes, 0 repeats
       
       @simon thought we solved this with Cretans in the iron age?
       
 (DIR) Post #AULW4oxYdlcSeZMC1I by simon@fedi.simonwillison.net
       2023-04-05T15:29:55Z
       
       0 likes, 0 repeats
       
       @carlmjohnson @glyph "ChatGPT can hallucinate" is I think a much less useful message to people just starting to explore these tools
       
 (DIR) Post #AULX3JuOhYs00zj8Ma by carlmjohnson@mastodon.social
       2023-04-05T15:42:51Z
       
       0 likes, 0 repeats
       
       @simon I think “lying” is punchier but it encourages anthropomorphism. At this point we need more public xenopomorphism though. LLMs are weird! Maybe in the future a human-like AI will have an LLM module, but today it’s more helpful to know about the token window and RLHF and whatnot.
       
 (DIR) Post #AULXT2Ug4X19X6xcDQ by simon@fedi.simonwillison.net
       2023-04-05T15:47:31Z
       
       0 likes, 0 repeats
       
       @carlmjohnson much as I dislike the anthropomorphism - I really wish ChatGPT didn't use "I" or answer questions about its own opinions - I feel like that's a lost battle at this point. I'm happy to tell people "it has a bug where it will convincingly lie to you" while also emphasizing that it's just a mathematical language emulation, not an "AI"
       
 (DIR) Post #AULalfhnK4u4DjfjYe by glyph@mastodon.social
       2023-04-05T16:24:38Z
       
       0 likes, 0 repeats
       
       @simon @carlmjohnson ChatGPT malfunctions! OpenAI lies to you.
       
 (DIR) Post #AULbHdbcfFdUyY1n6m by glyph@mastodon.social
       2023-04-05T16:30:32Z
       
       0 likes, 0 repeats
       
       @simon @carlmjohnson I guess I also object to this term because it doesn’t really have a bug—it isn’t really “malfunctioning” as I put it either. The goal that it’s optimizing towards is “believability”. Sycophancy and sandbagging are not *problems*, they’re a logical consequence and a workable minimum-resource execution of the target being optimized. It bugs me that so much breathless prose is being spent on describing false outputs as defects when bullshit is *what LLMs produce by design*
       
 (DIR) Post #AULdDMoNMZSDdg5I3c by KarynDoc@mstdn.party
       2023-04-05T16:51:26Z
       
       0 likes, 0 repeats
       
       @simon I was told that certain groups are already using these large language models to write and publish papers using very spotty citations. Once they publish in a predatory journal that will publish anything, those groups can then reference their "peer reviewed article" as proof of their misbeliefs.
       
 (DIR) Post #AULgVPz6bW2WYTOzgG by demoographics@toot.bike
       2023-04-05T17:28:43Z
       
       0 likes, 0 repeats
       
       @simon Ah, so that's what's meant by "the halting problem" :)
       
 (DIR) Post #AULhYoJ8jAsgoZxEkC by piccolbo@toot.community
       2023-04-05T17:40:47Z
       
       0 likes, 0 repeats
       
       @simon Can't think of an example when mankind built an artifact without knowing what it's going to be good or bad for. Never heard of "emergent airplanes" or "emergent typewriters". A Frankenstein moment.
       
 (DIR) Post #AULlDZhMFWrhEwPas4 by sketchytech@techhub.social
       2023-04-05T18:21:48Z
       
       0 likes, 0 repeats
       
       @simon I'm currently at the stage where I think claims that AI is potentially dangerous to humans are hype designed to sell it to the masses as being all powerful. What I believe is dangerous to humans is laziness and letting AI do the thinking for us. There seems to be this innate assumption that ChatGPT's answers are better than any others we'll find on the internet, places where we might have to read and think a little. But if we don't use our thinking muscles we'll lose the ability to problem solve, and that's when we'll be at its command.
       
 (DIR) Post #AULnYbzL888cKdyvIm by OrionFed@universeodon.com
       2023-04-05T18:48:01Z
       
       0 likes, 0 repeats
       
       @simon Keep asking the computer to tell you a lie
       
 (DIR) Post #AULouci5EAQmtYyC5w by Spiricom@mastodon.social
       2023-04-05T19:03:19Z
       
       0 likes, 0 repeats
       
       @simon
       
 (DIR) Post #AULq079NZwKChpv4We by crashglasshouses@tsukihi.me
       2023-04-05T19:15:00Z
       
       0 likes, 0 repeats
       
       @simon "akshually let's make calculators that give the wrong answer. i am very clever." --techbros
       
 (DIR) Post #AULt5jvL99ZqGpj7dg by ringods@mastodon.social
       2023-04-05T19:49:57Z
       
       0 likes, 0 repeats
       
       @simon @jpmens as if it wasn't bad enough by having social media.
       
 (DIR) Post #AULvJRDbgnj0DevkqO by unusualplan@mastodon.online
       2023-04-05T20:15:01Z
       
       0 likes, 0 repeats
       
       @simon This brings to mind the mind reading robot in "I, Robot" which ended up telling each human what they wanted to hear in order not to upset them. @e_urq
       
 (DIR) Post #AUM0DHpu9vZN9bjKuO by Csosorchid@universeodon.com
       2023-04-05T21:09:56Z
       
       0 likes, 0 repeats
       
       @simon Why don’t we talk to the programmers? These things are going to be used to enslave us. Maybe we should be asking the “innovators” what they are doing.
       
 (DIR) Post #AUM0h5uP4NDWqi31UG by ecoscore@aus.social
       2023-04-05T21:15:04Z
       
       0 likes, 0 repeats
       
       @simon At least with monkeys we could take away their typewriters 😁
       
 (DIR) Post #AUM1Vdi1ftU69sMOIq by Urban_Hermit@mstdn.social
       2023-04-05T21:24:24Z
       
       0 likes, 0 repeats
       
       @simon this is a beautiful thread. Yeah, you have to say lie, not just for clarity.  The AI exists as a system with programmers, project managers, and a business model.  If they didn't give the AI basic rules like: 1.) consult at least 3 sources, 2.) when questioned consult additional sources, 3.) accurately cite your sources when asked, 4.) if you can find no sources on the subject & the answer is not tautological admit you don't know - then the system was designed to lie.
       
 (DIR) Post #AUMCp1T2fI7QqDnSJk by grumpybozo@toad.social
       2023-04-05T23:31:06Z
       
       0 likes, 0 repeats
       
       @simon @fanf More precisely, we can’t figure out how to stop asking for more.
       
 (DIR) Post #AUMXUkSKImRnKJdTUG by swi@aus.social
       2023-04-06T03:22:18Z
       
       0 likes, 0 repeats
       
       @simon this feels related to the inner alignment problem Robert Miles described a few years back. I get the feeling it may be intractable. https://youtu.be/bJLcIBixGj8
       
 (DIR) Post #AUMeov6f71hdpV4HfU by tailsy@mastodon.social
       2023-04-06T04:44:50Z
       
       0 likes, 0 repeats
       
       @simon “Accidentally?”
       
 (DIR) Post #AUMgolOtOmQ1dSeG3c by Thinkish@mastodon.social
       2023-04-06T05:07:04Z
       
       0 likes, 0 repeats
       
       @simon But it wasn't really an accident. It was inherent in the design. And "we" can't make them stop because the people who created them don't see the profit in doing so.
       
 (DIR) Post #AUMlDhW5NMvee69wdE by aburka@hachyderm.io
       2023-04-06T05:56:35Z
       
       0 likes, 0 repeats
       
       @simon Accidentally? Come on. Training a massive LLM and releasing it without safeguards is a multi-step process.
       
 (DIR) Post #AUN5rcFYNI3ayhjkbA by StuartGray@mastodonapp.uk
       2023-04-06T09:47:17Z
       
       0 likes, 0 repeats
       
       @simon LLMs can’t lie, they can only ever output tokens according to statistical probability derived from their training. It responds to its input exactly as it was trained to do with zero understanding or agency. Please don’t fall into the anthropomorphism trap like so many others. This is a great, clear read on the differences between the ways in which humans think and LLMs predict, a short paper by Murray Shanahan https://arxiv.org/pdf/2212.03551.pdf
       
 (DIR) Post #AUNRaU0HyjePTIpgLQ by bornach@masto.ai
       2023-04-06T13:50:28Z
       
       0 likes, 0 repeats
       
       @simon @idan Unfortunately there is no algorithm for telling the truth https://youtu.be/leX541Dr2rU And an AI that always told the truth would not be very popular with humans
       
 (DIR) Post #AUNXaevrwfsP5jETZI by simon@fedi.simonwillison.net
       2023-04-06T14:58:41Z
       
       0 likes, 0 repeats
       
       @Thinkish I'm very confident that the researchers who built these systems did not deliberately create them to lie to people
       
 (DIR) Post #AUNYDowWez4pQsDHgO by simon@fedi.simonwillison.net
       2023-04-06T15:05:37Z
       
       0 likes, 0 repeats
       
       @StuartGray I'm not convinced by that. I think it's possible to use the term "lying" while also emphasizing that these are not remotely human-like entities https://fedi.simonwillison.net/@simon/110146906375675620
       
 (DIR) Post #AUNbR0PDeWaYbvc4NU by carcosa@appalachian.town
       2023-04-06T15:23:26.289600Z
       
       0 likes, 0 repeats
       
       @simon @carlmjohnson @glyph Other people have pointed out that "bullshit" is more accurate than "lies", and honestly I think it's just as punchy and to the point.
       
 (DIR) Post #AUNbR19errjevxL9lI by simon@fedi.simonwillison.net
       2023-04-06T15:37:45Z
       
       0 likes, 0 repeats
       
       @carcosa @carlmjohnson @glyph aside from being punchier I don't see how "bullshit" is better than "lies" in terms of avoiding anthropomorphising - to deliberately bullshit someone is a very human action
       
 (DIR) Post #AUNe1juI9pTM3zn5nc by Thinkish@mastodon.social
       2023-04-06T16:00:25Z
       
       0 likes, 0 repeats
       
       @simon Perhaps, but they've invested a great deal of time and money to create systems that do lie to people. It's not like the lying is rare or subtle, either. If it wasn't a deliberate choice to press onward with deployment and commercialization, then it's an improbably big oversight.
       
 (DIR) Post #AUNefGYOi9IgbrOiSO by simon@fedi.simonwillison.net
       2023-04-06T16:06:44Z
       
       0 likes, 0 repeats
       
       @Thinkish oh they absolutely decided to release this tech in full knowledge that it makes things up a lot - every blog announcement and research paper I've ever seen on this stuff talks about that issue
       
 (DIR) Post #AUQDa6Wl4HJ6CBtmW8 by kickingvegas@sfba.social
       2023-04-07T21:58:24Z
       
       0 likes, 0 repeats
       
       @simon One more step towards our dumber timeline version of the Butlerian Jihad.
       
 (DIR) Post #AUQFT9nghTpcECdGvg by anneeroper@mastodon.ie
       2023-04-07T22:19:44Z
       
       0 likes, 0 repeats
       
       @simon   Maybe if they run for president and then get indicted for criminal offences, those dang computers might get thrown in jail?
       
 (DIR) Post #AUQGvISgBwuf9WmudE by janxdevil@sfba.social
       2023-04-07T22:36:09Z
       
       0 likes, 0 repeats
       
       @simon s/that can lie to us//
       
 (DIR) Post #AUQPtQeZf9q9ZFXHCC by fredbrooker@witter.cz
       2023-04-08T00:16:12Z
       
       0 likes, 0 repeats
       
       @simon it's fun 😂
       
 (DIR) Post #AUQQ6uIqjnCPHxnjOq by fredbrooker@witter.cz
       2023-04-08T00:17:23Z
       
       0 likes, 0 repeats
       
       @simon isn't the Turing test core about lies? 🤔 😂 💀
       
 (DIR) Post #AUQT1H51RLsi3UL5w8 by ogtrekker@universeodon.com
       2023-04-08T00:51:25Z
       
       0 likes, 0 repeats
       
       @simon @HistoPol The ultimate art imitating life. 🤦‍♂️
       
 (DIR) Post #AUQlHHeHeC4XihZ1uK by rebeccabguinn@mastodon.world
       2023-04-08T04:16:05Z
       
       0 likes, 0 repeats
       
       @simon chatgpt lied to me today. told me antelope valley was only 2 hours away from my home. it’s more like 5.
       
 (DIR) Post #AUQlyKF3siloDNI4e0 by aimee@mastodon.nz
       2023-04-08T04:23:42Z
       
       0 likes, 0 repeats
       
       @simon nothing accidental about it.
       
 (DIR) Post #AUdvLy8UcSCe9toPei by jwcph@norrebro.space
       2023-04-14T12:40:19Z
       
       0 likes, 0 repeats
       
       @simon We can't even seem to make our way to the realization that "computers that lie to us" is a bad idea.
       
 (DIR) Post #AVYp6UeWZkJXGPl3bc by MyLittleMetroid@sfba.social
       2023-05-11T23:28:37Z
       
       0 likes, 0 repeats
       
       @simon we can turn them off any time we want. But WHAT ABOUT THE SHAREHOLDERS
       
 (DIR) Post #AVYr85PtZaVlU1npPU by crashglasshouses@tsukihi.me
       2023-05-11T23:51:35Z
       
       0 likes, 0 repeats
       
       @simon i like the Kirk and Spock method, personally.