Post AkJr86RRvPLAcUiqYq by aallan@mastodon.social
2024-07-26T09:50:30Z
0 likes, 1 repeat
Yes, protecting the model against prompt injection is good. But if they cared about safety, then they'd make it *easier* for you to tell you're talking to an LLM, not harder. https://www.theverge.com/2024/7/19/24201414/openai-chatgpt-gpt-4o-prompt-injection-instruction-hierarchy