Post AULsnnt5ff2xLq7ckS by stever@newsie.social
(DIR) Post #AULs6PDFXHE8N5rE48 by grammargirl@zirk.us
2023-04-05T19:40:51Z
0 likes, 0 repeats
Wow, this article has two examples of AI saying people committed serious crimes when they didn't. One, a mayor in Australia who has *not* spent time in prison for bribery, is threatening to sue OpenAI for defamation unless they make it stop saying such things about him.

How do they even do that since it's not programmed? I imagine they can put in guardrails for that one name, but what do they do when it happens to 1000s of people?

https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/
(DIR) Post #AULsIxB15HiNBSnQbA by Rudydoesbooks@zirk.us
2023-04-05T19:43:10Z
0 likes, 0 repeats
@grammargirl A friend asked ChatGPT to provide a bio of him, and ChatGPT came back with 100% made-up stuff about a person who has done stuff the internet knows about (like you). Not crimes, but still, in the face of known facts, it came up with untrue facts about him.
(DIR) Post #AULsMllhRTHixZm3Xc by stever@newsie.social
2023-04-05T19:43:50Z
0 likes, 0 repeats
@grammargirl you can guess what my modest, humble recommendation might be for how to deal with this.
(DIR) Post #AULsOe6ysfEVnAcgoi by grammargirl@zirk.us
2023-04-05T19:44:12Z
0 likes, 0 repeats
@Rudydoesbooks It's wild.
(DIR) Post #AULsQL0urBlr8lapRQ by grammargirl@zirk.us
2023-04-05T19:44:30Z
0 likes, 0 repeats
@stever Actually, no. Do tell. :)
(DIR) Post #AULsnmckMwNZQrT3dQ by stever@newsie.social
2023-04-05T19:48:36Z
0 likes, 0 repeats
@grammargirl I believe that OpenAI should be 100% liable for consequential damages and for all court costs necessary for plaintiffs to collect. Possibly establish a large (multibillion-dollar) fund to guarantee these claims move swiftly. Personal liability for OpenAI officers. Similar for copyright violations. I do not believe that just because automation makes it easy to scrape licensed content at gigantic scale, it suddenly becomes OK. Ditto for misinformation widely distributed.
(DIR) Post #AULsnnt5ff2xLq7ckS by stever@newsie.social
2023-04-05T19:48:36Z
0 likes, 0 repeats
@grammargirl I might add this applies to Google not having a mechanism for taking down misinformation links. I know someone who was the target of harassment and fake accusations that their ex posted on a website. That's what came up first when googling their name, and there was no way to get Google to remove it.
(DIR) Post #AULt6MvM91lNEB2BZw by wizzwizz4@fosstodon.org
2023-04-05T19:52:04Z
0 likes, 0 repeats
@grammargirl How do they make the computer system stop committing crimes on their behalf? Well, they could switch it off…
(DIR) Post #AULtGTLv6AyC3qHEtU by grammargirl@zirk.us
2023-04-05T19:53:55Z
0 likes, 0 repeats
@stever Wow. Yeah, and I agree they should be responsible and fix it. I just wonder how that is even possible from a tech standpoint.

It's terrible if you have to wait until after it says something damaging about you (if you even hear about it), and then file some sort of report that requires human intervention to make it stop. I guess if they're held liable, they'll find a way!

And good point that it's already a problem with search engines. This isn't new.
(DIR) Post #AULtSqCXyOD6Osn5Gq by leadore@toot.cafe
2023-04-05T19:56:07Z
0 likes, 0 repeats
@grammargirl I was just going to post about that, but now I can just boost yours instead. :)

Also in the article: when the author asked Bing chat the exact same question that OpenAI had responded to with the false info, Bing chat repeated the same lies as though they were facts, citing as its source an article written about what OpenAI had said! It's not the first time I've seen reports of different AI bots getting lies from each other and stating them as truths.
(DIR) Post #AULtbkyUm6YMnRTAe0 by andrew@newsie.social
2023-04-05T19:57:45Z
0 likes, 0 repeats
@grammargirl Putting in guardrails to avoid libeling thousands of individual humans does seem difficult for those running the #AI machine.

But also difficult: being publicly and falsely accused of criminal behavior by a literally faceless author. I think the lack of human intent behind a horrific accusation would be scant comfort if I were on the receiving end. #libel
(DIR) Post #AULu73ncpqAO9oJOgC by grammargirl@zirk.us
2023-04-05T20:03:25Z
0 likes, 0 repeats
@leadore That part jumped out at me too. I know some people think a lot of these stories amount to a moral panic, but the more anecdotes I see, the more it feels like our information ecosphere is getting primed to go off the rails. (Which, yes, I know, is how a moral panic is supposed to make me feel.)
(DIR) Post #AULuB6mZgbSwZEkbOC by michaelgemar@mstdn.ca
2023-04-05T20:04:07Z
0 likes, 0 repeats
@grammargirl These kinds of legal issues are the Achilles' heel of LLM-based AI. Errors might be amusing until someone sues. And as you point out, it's not clear how this kind of problem could even be corrected, since LLMs have no notion of what's actually true.
(DIR) Post #AULxjmZX7SdfsrsNbE by johnlogic@sfba.social
2023-04-05T20:44:00Z
0 likes, 0 repeats
@grammargirl I asked #ChatGPT to give me a biography and supplied my name. It needed more detail, so I added "from Silicon Valley".

It described someone with my name having joined DoorDash as CFO in 2018 and overseeing the company's IPO in 2020.

After a quick web search, I asked ChatGPT if it knew Prabir Adarkar (without mentioning that this was who joined DoorDash as CFO at that time). Nothing. Not even after adding a mention of DoorDash.

#ChatGPT is beyond broken; it's lame.
(DIR) Post #AULxmrAeThE110hhpo by druidjournal@mastodon.social
2023-04-05T20:44:31Z
0 likes, 0 repeats
@grammargirl It's a hard problem, but we're working on it.

The short answer is that you have to train fact-checking models of some kind. The fact-checkers don't have to be LLMs; we have other kinds of models that can read text and more reliably find facts in it, especially if those facts are in a table or other structured data.

Bing already provides citations and links in its answers. The reason most released bots are so shoddy about their facts is the insane pressure to rush them out.
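The fact-checking idea in the post above can be sketched very roughly. This is a toy illustration only, assuming claims have already been extracted from a bot's draft answer as subject/attribute/value triples and that a trusted structured source exists; all names and data here are hypothetical:

```python
# Toy sketch of a non-LLM fact-checking pass: claims extracted from a
# bot's draft answer are checked against structured source data before
# the answer is released. All names and data are hypothetical.

def check_claims(claims, records):
    """Return (claim, verdict) pairs by looking each claim up in records.

    claims:  list of (subject, attribute, value) triples extracted
             from the bot's draft answer.
    records: dict mapping (subject, attribute) -> trusted value.
    """
    results = []
    for subject, attribute, value in claims:
        trusted = records.get((subject, attribute))
        if trusted is None:
            verdict = "unverifiable"   # no source at all: flag, don't assert
        elif trusted == value:
            verdict = "supported"
        else:
            verdict = "contradicted"   # block or correct before output
        results.append(((subject, attribute, value), verdict))
    return results

# Hypothetical structured source (e.g., rows from a vetted table):
records = {("Jane Doe", "role"): "CFO"}

claims = [
    ("Jane Doe", "role", "CFO"),            # matches the source
    ("Jane Doe", "conviction", "bribery"),  # nothing in the source
]
for claim, verdict in check_claims(claims, records):
    print(claim, "->", verdict)
```

The key design point is that "no source" and "contradicted by a source" both prevent the claim from being stated as fact, which is exactly what the defamation examples in the thread needed.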
(DIR) Post #AULy9ypIBICHQ1jtdA by grammargirl@zirk.us
2023-04-05T20:48:46Z
0 likes, 0 repeats
@druidjournal Thanks!
(DIR) Post #AULyi9m1z9f3DXViEq by efi@chitter.xyz
2023-04-05T20:54:56Z
0 likes, 0 repeats
@grammargirl even if there are guardrails, there are gonna be ways around them, and the problem is about trust, not some technical arcana
(DIR) Post #AUM5b2q44Q7FMRx9ZA by shoq@mastodon.social
2023-04-05T22:12:06Z
0 likes, 0 repeats
@grammargirl Or millions.
(DIR) Post #AUM64k9Y6kak60ED20 by grvsmth@lingo.lol
2023-04-05T22:17:27Z
0 likes, 0 repeats
@grammargirl Just wild that the idea of ... shutting down these chatbots is not even discussed. If libel isn't a reason to shut them down, then what is?
(DIR) Post #AUM9JicmNBUFYrjUTQ by WordsByWesInk@mstdn.social
2023-04-05T22:53:46Z
0 likes, 0 repeats
@grammargirl Stuff like this is part of why I think the text generators are useless for anything meaningful. If I have to check everything they spit out to make sure it's not pure BS and not plagiarized, what's the point? It hasn't saved any time.
(DIR) Post #AUMBOREGfx3Cy0e6oC by Seth@writing.exchange
2023-04-05T23:17:03Z
0 likes, 0 repeats
@grammargirl uh oh. Are we sure ChatGPT isn’t Skynet in disguise?
(DIR) Post #AUMKE1ETNJi71zqeZM by i_understand@mastodon.social
2023-04-06T00:56:01Z
0 likes, 0 repeats
@grammargirl there is no reason to think anything written by ChatGPT is the truth. It would be akin to believing a fortune teller or a mind reader.
(DIR) Post #AUMKqWKMDHjOTTxYdk by LairdJames@techhub.social
2023-04-06T01:02:57Z
0 likes, 0 repeats
@grammargirl Below is a gift link to the article for those without a subscription: https://wapo.st/3GjizKo
(DIR) Post #AUMLwUOeP93CB6NUNk by ohunt@mastodon.social
2023-04-06T01:15:11Z
0 likes, 0 repeats
@grammargirl I'm waiting for someone to ask them how much they're paying the AI if it's sentient and so they can't control it.
(DIR) Post #AUMQpCh0KRLh0vJ9do by anne_twain@theblower.au
2023-04-06T02:09:54Z
0 likes, 0 repeats
@grammargirl OpenAI can't tell whether an individual is the same person as someone else with the same name.
(DIR) Post #AUMRREZlm6dzrK3fjk by Balkingpoints@mastodon.online
2023-04-06T02:16:49Z
0 likes, 0 repeats
@grammargirl It might take a lawsuit, but IT staff can investigate how the chatbot got those facts wrong. All it does is draw from sources online, of course. It's accepting propaganda, or can't fact-check, or is transposing names, something like that.
(DIR) Post #AUOLAMuHKxxnMg95pw by silberspur@medibubble.org
2023-04-07T00:15:54Z
0 likes, 0 repeats
@grammargirl ChatGPT, when asked more sophisticated things, such as explaining what makes a certain joke funny, produces almost nothing but nonsense answers. It "behaves" like a student in a test who, completely unprepared and without any clue, starts babbling this and that, mixing together things he or she has heard somewhere but that don't fit, without saying anything meaningful. We should be very careful when it comes to trusting such machines and implementing them in our processes, in our lives.