Post AQrAD67BRTawIUXuOu by edk@mastodon.social
(DIR) Post #AQrAD1KTEeFhSFiwuu by edk@mastodon.social
2022-12-10T18:36:21Z
0 likes, 1 repeats
I think — unfortunately? — we’re going to keep having this same conversation for years.
1. Yes, the large language models are impressive.
2. Yes, there will be some clever uses of them.
3. No, the biggest uses aren’t obvious now and won’t be for some time. These big uses are likely to be largely invisible. Things will just work better.
4. Yes, there are a LOT of problems with them at present.
5. No, it’s not clear they’re a killer tech.
6. The biggest problem is how good they are at fooling people.
(DIR) Post #AQrAD3ieLimWrp3Hsm by edk@mastodon.social
2022-12-10T18:41:49Z
0 likes, 0 repeats
There is something almost tautological happening with the large language models, and specifically with the hysteria (positive and negative) they induce. The problem is that we as humans have been fooled by fluency for all of human history. LLMs are the equivalent of Prof. Harold Hill (without the character development). If they were human, they’d be sociopath scammers. The human brain is innately vulnerable to fluency, or there wouldn’t be entire genres of scammers. And now we have fluent machines.
(DIR) Post #AQrAD67BRTawIUXuOu by edk@mastodon.social
2022-12-10T21:05:01Z
0 likes, 0 repeats
And look - what does a scammer want?
1. They want to get your attention and keep it for as long as possible.
2. They want you to do what they tell you to do, quickly, impulsively.
3. They want you to not stop and question whether what they’re saying is RIGHT.
4. They want to make it as hard as possible for you to fact-check them.
5. They want to maximize emotional empathy to drown out rational thinking.
We need to be thinking about this playbook when we interact with text-generating agents.