[HN Gopher] NSA Publishes Guidance for Strengthening AI System S...
___________________________________________________________________
NSA Publishes Guidance for Strengthening AI System Security
Author : bookofjoe
Score : 48 points
Date : 2024-04-16 17:32 UTC (5 hours ago)
(HTM) web link (www.nsa.gov)
(TXT) w3m dump (www.nsa.gov)
| haolez wrote:
| Pretty sound advice. I was hoping to find things like "make sure
| your model is aligned", but it's actually a lot of good advice
| regarding IT infrastructure in general, plus some AI bits.
| Kerbonut wrote:
  | Pretty light reading. I was expecting some actually useful
  | things beyond secure-your-system 101. The closest we got was
  | "check for jailbreak attacks"... seriously? Why not design with
  | jailbreaks in mind, so it doesn't matter what an attacker can
  | get the AI to attempt? I.e., if the user tries to get the AI to
  | unlock a door but doesn't already have authorization for that
  | function, it shouldn't work even if the AI attempts it on their
  | behalf; and conversely, if they do have the authorization, who
  | cares if they coaxed the AI into doing it for them?
| ipython wrote:
  | This is exactly the advice I give my customers - treat the LLM
  | as an untrusted entity. Implement authentication and
  | authorization at the data-access and API layers, and ensure
  | there is a secure side channel to communicate identity
  | information to backend resources.
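  |
  | [Editor's note: the pattern described above - and Kerbonut's
  | door example - can be sketched as follows. This is a minimal
  | illustration, not any vendor's API; all names (PERMISSIONS,
  | dispatch_tool, unlock_door) are hypothetical.]

```python
# Authorization is enforced at the tool-dispatch layer, keyed on the
# caller's identity from the authenticated session (the "secure side
# channel") -- never on anything the model generated. All names here
# are hypothetical illustration.

PERMISSIONS = {
    "alice": {"unlock_door"},  # alice is authorized for this function
    "bob": set(),              # bob is not
}

def unlock_door(door_id: str) -> str:
    # Stand-in for a real backend action.
    return f"door {door_id} unlocked"

TOOLS = {"unlock_door": unlock_door}

def dispatch_tool(session_user: str, tool_name: str, **kwargs) -> str:
    # session_user comes from the authenticated session, not from the
    # LLM's output, so prompt injection or jailbreaking cannot widen
    # the caller's privileges.
    if tool_name not in PERMISSIONS.get(session_user, set()):
        return "denied: caller lacks permission for " + tool_name
    return TOOLS[tool_name](**kwargs)

print(dispatch_tool("alice", "unlock_door", door_id="D1"))
print(dispatch_tool("bob", "unlock_door", door_id="D1"))
```

  | [With this layering, it doesn't matter what the model can be
  | coaxed into requesting: the grant lives outside the model.]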
| latchkey wrote:
| I find it difficult to understand how we can "Secure the
| deployment environment" and "Ensure a robust deployment
  | environment architecture" without talking about the elephant in
| the room.
|
  | My feeling is that we need to stop relying on a single provider
  | for compute and software. That we should focus less on
  | complaining about how far behind AMD is and more on helping
  | them catch up. That we should be fostering innovation in third
  | parties.
|
| It is surprising to me that the status quo is acceptable to the
| US govt.
| tgsovlerkhgsel wrote:
| I just skimmed it and none of this looks AI-specific. It looks
| like someone essentially ran the LLM version of s/software/AI/
| and s/binary/model/ on some generic "how to secure your software
| deployment" manual...
___________________________________________________________________
(page generated 2024-04-16 23:00 UTC)