Post AUNuMYCuRzSd7pRcoK by 22@octodon.social
(DIR) Post #AUNuMYCuRzSd7pRcoK by 22@octodon.social
2023-04-04T21:40:31Z
0 likes, 0 repeats
Thinking about Stable Diffusion's "latent image space" https://arstechnica.com/tech-policy/2023/04/stable-diffusion-copyright-lawsuits-could-be-a-legal-earthquake-for-ai/ and about @simon's notes on how to get an LLM to avoid hallucinations: "Want them to work with specific facts? Paste those into the language model as part of your original prompt!" — https://simonwillison.net/2023/Apr/2/calculator-for-words/

And reflecting on how, every time an LLM gives you "the right answer", it's because your input prompt metaphorically put it at coordinates in the latent word space that mapped to output which wound up being correct. That is, every fruitful interaction you've had with an LLM was maybe a few input words away from a hallucination.

I wonder if it'd be at all helpful for an LLM system to show you how slightly different input prompts (leading to slightly different coordinates in the word-embedding space) lead to different outputs, as a way to build your mental model of how to use the machine. If you can see some right and some wrong answers in the set of generated output, maybe you can more quickly learn what makes a prompt good?
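A tool like the one imagined above would first need to enumerate "nearby" prompts. As a minimal sketch (the single-word-substitution strategy and the variants function are my own illustration, not anything described in the post), one could generate prompts that each differ from the original at exactly one word position:

```javascript
// Generate nearby prompt variants by substituting one word at a time,
// so each variant differs from the original prompt at a single position.
// The outputs for each variant could then be shown side by side.
const variants = (prompt, substitutions) => {
  const words = prompt.split(' ');
  const out = [];
  words.forEach((word, i) => {
    (substitutions[word] || []).forEach((alt) => {
      const copy = [...words];
      copy[i] = alt; // swap exactly one word
      out.push(copy.join(' '));
    });
  });
  return out;
};

console.log(variants('summarize this paper', { summarize: ['outline', 'condense'] }));
// [ 'outline this paper', 'condense this paper' ]
```

Feeding each variant to the model and displaying the answers together would make the "few words away from a hallucination" effect directly visible.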
(DIR) Post #AUNuMZ8gz9gK12TlsO by 22@octodon.social
2023-04-06T19:05:53Z
0 likes, 0 repeats
I also think it might have been @simon who was asked in an interview, "Are humans large language models?" I can't find the link now, but I thought that was maybe a jokey question until I noticed how often I hallucinate wrong answers.

I was reviewing my coworker's code, looking at a line that was supposed to remove duplicates (array.filter((x, i, a) => a.indexOf(x) === i); yes, it made such an impression that I can recreate it from scratch), and had an entire comment written up with thought experiments showing that it might be wrong, until I remembered I could hit a single button, start typing, and test things myself. Of course the code worked, and I had hallucinated an objection.

There are also the examples of me "checking" kids' math homework and convincing myself that a sum of two large numbers was something it totally wasn't.

Remembering these foibles makes me take the question "am I an LLM?" more seriously because, reader, I know I bullshit and hallucinate a lot; I just often have mechanical aids (like dev consoles and calculators) that quickly debunk my mental illusions, so I forget the mountains of bullshit I dream up because they get instantly vaporized by easy reality checks. This probably isn't a reason to be more forgiving of LLMs, because even if bullshit generation is a big part of what's going on between my ears, computers can do mechanical verification a lot better. I think there's room for significant improvement here as LLMs combine text generation with "reality checks".
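The dedupe one-liner from the post can be checked directly, which is exactly the kind of quick reality check being described. It works because indexOf returns the index of the first occurrence of a value, so the filter keeps an element only when it is that first occurrence:

```javascript
// Keep only the first occurrence of each value: an element passes the
// filter when its own index equals the index of its first occurrence.
const dedupe = (array) => array.filter((x, i, a) => a.indexOf(x) === i);

console.log(dedupe([1, 2, 2, 3, 1])); // [ 1, 2, 3 ]
```

(Note this is O(n²), since indexOf rescans the array for every element; it is fine for small inputs.)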
(DIR) Post #AUNuMZfJ1tlleBjoi8 by simon@fedi.simonwillison.net
2023-04-06T19:10:43Z
0 likes, 0 repeats
@22 Yeah, that came up on the KQED Forum panel I was on; I quoted my answer in full at the bottom of this post: https://simonwillison.net/2023/Mar/7/kqed-forum/