Subj : Prompt injection attacks might 'never be properly mitigated' UK N To : All From : TechnologyDaily Date : Tue Dec 09 2025 15:15:07 Prompt injection attacks might 'never be properly mitigated' UK NCSC warns Date: Tue, 09 Dec 2025 14:40:00 +0000 Description: Prompt injection and SQL injection are two entirely different beasts, with the former being more of a "confusable deputy". FULL STORY ======================================================================UKs NCSC warns prompt injection attacks may never be fully mitigated due to LLM design Unlike SQL injection, LLMs lack separation between instructions and data, making them inherently vulnerable Developers urged to treat LLMs as confusable deputies and design systems that limit compromised outputs Prompt injection attacks, meaning attempts to manipulate a large language model (LLM) by embedding hidden or malicious instructions inside user-provided content, might never be properly mitigated. This is according to the UKs National Cyber Security Centres (NCSC) Technical Director for Platforms Research, David C, who published the assessment in a blog assessing the technique. In the article, he argues that many compare prompt injection to SQL injection, which is inaccurate, since the former is fundamentally different and arguably more dangerous. The key difference between the two is the fact that LLMs dont enforce any real separation between instructions and data. Catch the price drop- Get 30% OFF for Enterprise and Business plans The Black Friday campaign offers 30% off for Enterprise and Business plans for a 1- or 2-year subscription. Its valid until December 10th, 2025. Customers must enter the promo code BLACKB2B-30 at checkout to redeem the offer. View Deal Inherently confusable deputies Whilst initially reported as command execution, the underlying issue has turned out to be more fundamental than classic client/server vulnerabilities, he writes. Current large language models ( LLMs ) simply do not enforce a security boundary between instructions and data inside a prompt." Prompt injection attacks are regularly reported in systems that use generative AI (genAI), and are the OWASPs #1 attack to consider when developing and securing generative AI and large language model applications. In classical vulnerabilities, data and instructions are handled differently, but LLMs operate purely on next-token prediction, meaning they cannot inherently distinguish user-supplied data from operational instructions. There's a good chance prompt injection will never be properly mitigated in the same way, he added. The NCSC official also argues that the industry is repeating the same mistakes it made in the early 2000s, when SQL injection was poorly understood, and thus widely exploited. But, SQL injection was ultimately better understood, and new safeguards became standard. For LLMs, developers should treat them as inherently confusable deputies, and thus design systems that limit the consequences of compromised outputs. If an application cannot tolerate residual risk, he warns, it may simply not be an appropriate use case for an LLM. Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds. Make sure to click the Follow button! And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form, and get regular updates from us on WhatsApp too. ====================================================================== Link to news story: https://www.techradar.com/pro/security/prompt-injection-attacks-might-never-be -properly-mitigated-uk-ncsc-warns --- Mystic BBS v1.12 A49 (Linux/64) * Origin: tqwNet Technology News (1337:1/100) .