Post AVLVJ4C5FeBAGPP9ns by a32@social.tchncs.de
(DIR) More posts by a32@social.tchncs.de
(DIR) Post #AVLVJ4C5FeBAGPP9ns by a32@social.tchncs.de
2023-05-05T06:24:01Z
0 likes, 0 repeats
Trying to learn about prompt injections for LLMs.Simon Willison seems to be very informed about that topic.https://simonwillison.net/2023/Apr/14/worst-that-can-happen/Hope, I will understand it, too. 😬 @simon #LLM #PromptInjection #prompt #AI #chatGPT
(DIR) Post #AVLVJ4p4ufN4DLeIaG by a32@social.tchncs.de
2023-05-05T09:27:05Z
0 likes, 0 repeats
@simon Unfortunately, I don't even get the "hello world" example (with the pirate accent). Can someone explain it to me, please?
(DIR) Post #AVLVJ5PaiuZu2ajSUq by simon@fedi.simonwillison.net
2023-05-05T13:15:41Z
0 likes, 0 repeats
@a32 have you seen the rest of my series on it? I have a few other explanations in there that might make more sense https://simonwillison.net/series/prompt-injection/
(DIR) Post #AVQBH0ko2kmbisBidM by a32@social.tchncs.de
2023-05-07T19:24:38Z
0 likes, 0 repeats
@simon Thank you for your input, and for writing the articles in the first place.I think I got a litte closer to understanding it.Somehow I did not grok that it's not the LLM itself that is attacked, but the application.