Post #ATrVaIaEj4HOtAIlEm by simon@fedi.simonwillison.net
Post #ATrROsnZqy7TusqJ0q by simon@fedi.simonwillison.net
2023-03-22T03:18:42Z
1 like, 0 repeats
I blogged about another common way you can get misinformation out of AI chatbots: asking them questions about themselves! https://simonwillison.net/2023/Mar/22/dont-trust-ai-to-talk-about-itself/
Post #ATrU6RbOWwKwDa2lsW by caseyg@social.coop
2023-03-22T03:47:15Z
0 likes, 0 repeats
@simon Another weird case that feels somewhat related: why/how does GPT3.5 know the current date but not the current year? 🤪
Post #ATrVaIaEj4HOtAIlEm by simon@fedi.simonwillison.net
2023-03-22T04:04:10Z
0 likes, 0 repeats
@caseyg Hah, that is weird! GPT-4 seems to get that right; I think it's more reliable at switching to a chain-of-thought approach for that kind of question
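A plausible mechanism behind caseyg's quirk, offered as an assumption since ChatGPT's hidden prompt isn't public: chat frontends inject the current date into the system prompt, so the date is available in the context window, but the model may still answer year questions from its stale training data instead of reading the injected text. A minimal Python sketch of that injection pattern, using the OpenAI chat API as it existed in early 2023 (the prompt wording here is an assumption):

    import datetime
    import openai

    # Assumed pattern: the chat frontend prepends today's date to a
    # hidden system prompt. The model "knows" the date only because
    # it appears in the context window, not from its training data.
    today = datetime.date.today().strftime("%Y-%m-%d")

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": f"Current date: {today}"},
            {"role": "user", "content": "What year is it?"},
        ],
    )
    print(response["choices"][0]["message"]["content"])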
Post #ATsi4vZZipsf6jK7kG by neilk@xoxo.zone
2023-03-22T18:00:04Z
0 likes, 0 repeats
@simon We definitely shouldn’t trust AI self-knowledge. But as for the other component of your argument: Gmail does autocomplete too; surely that was trained on their data? The argument that tech companies wouldn’t do dumb or unethical things with AI is out the window now. They are scared, existentially, of OpenAI, and we’ve seen high-profile resignations at Google, specifically related to AI ethics.
Post #ATsoLtgqZh0DusCSDg by simon@fedi.simonwillison.net
2023-03-22T19:10:29Z
0 likes, 0 repeats
@neilk I've not dug into Gmail autocomplete much myself - I assumed it was mainly trained on a small, controlled set of shared examples, then boosted by tiny custom per-user models built from each user's individual content, like mobile predictive keyboards
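To make the per-user idea concrete: a predictor of this kind can be as small as a bigram model counted over one user's own messages. A minimal sketch under that assumption; the sample data is invented for illustration, and this is not a description of Gmail's actual system:

    from collections import Counter, defaultdict

    # Hypothetical sketch: a tiny per-user bigram model, the kind of
    # lightweight predictor a mobile keyboard might build locally
    # from the owner's own text.
    def train(sent_messages):
        counts = defaultdict(Counter)
        for message in sent_messages:
            words = message.lower().split()
            for prev, nxt in zip(words, words[1:]):
                counts[prev][nxt] += 1
        return counts

    def predict(counts, prev_word, k=3):
        # Most frequent words this particular user typed after prev_word.
        return [w for w, _ in counts[prev_word.lower()].most_common(k)]

    model = train([
        "see you at the standup tomorrow",  # assumed sample data
        "see you at lunch",
        "running late for the standup",
    ])
    print(predict(model, "the"))  # -> ['standup']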