[HN Gopher] NIST AI Risk Management Framework
___________________________________________________________________
NIST AI Risk Management Framework
Author : ftxbro
Score : 45 points
Date : 2023-04-09 21:08 UTC (1 hour ago)
(HTM) web link (www.nist.gov)
(TXT) w3m dump (www.nist.gov)
| mitthrowaway2 wrote:
| Nothing in here about existential risk, as far as I can tell.
| ilaksh wrote:
| Does the NIST actually say anything here? Looks like mainly meta-
| level BS frameworks with no teeth.
| hrpnk wrote:
| I am generally frightened by the readiness of people to send data
| to solutions like ChatGPT or by running open source projects that
| result in running "AI agents" that execute arbitrary code on
| one's machine. What can we do to drive more awareness about
| running these experiments in a responsible way, like a sandboxed
| environment?
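|
| A minimal sketch of what "sandboxed" could mean in practice,
| assuming Docker is installed and using a hypothetical model_output
| string in place of whatever code an agent actually proposes:
|
|     import subprocess
|
|     # Hypothetical stand-in for code an LLM-based agent wants to run.
|     model_output = "print(sum(range(10)))"
|
|     # Execute it in a throwaway container instead of the host shell:
|     # no network, capped memory/CPU/processes, read-only filesystem,
|     # and a hard timeout on the whole run.
|     result = subprocess.run(
|         [
|             "docker", "run", "--rm",
|             "--network", "none",
|             "--memory", "256m",
|             "--cpus", "1",
|             "--read-only",
|             "--pids-limit", "64",
|             "python:3.11-slim",
|             "python", "-c", model_output,
|         ],
|         capture_output=True,
|         text=True,
|         timeout=30,
|     )
|     print(result.stdout or result.stderr)
|
| The generated code then only ever sees a disposable, network-less
| container, so the worst it can do is waste 30 seconds of CPU.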
| ilaksh wrote:
| In my opinion this type of comment is the core problem with AI
| alignment these days.
|
| The current GPT models are not going to "wake up" or
| accidentally take over the world. This is very obvious.
|
| Forcing everyone who is conscientious to copy and paste the
| output of these relatively weak models is not going to save us
| from anything.
|
| If you want people to take AI regulation seriously, you have
| to approach it in a reasonable way, which means understanding
| the nuances of the dangers.
|
| Executing "arbitrary code" that InstructGPT-style models output
| based on your instructions is not a danger.
|
| People need to be able to distinguish different levels of
| autonomy and speed.
|
| It's when we get to high levels of autonomy and performance
| that we run into danger. And we are really at the cusp of that.
|
| But failing to differentiate between systems that are obviously
| not dangerous and future, more autonomous agents that possibly
| are makes it impossible to take the concerns seriously, and
| actually harder for people to understand the problems with full
| autonomy and superintelligent models.
| ekidd wrote:
| > _The current GPT models are not going to "wake up" or
| accidentally take over the world. This is very obvious._
|
| It's pretty obvious that GPT-4 isn't consistently coherent
| enough to carry out complex plans or to do serious scientific
| research. There are several other major shortcomings that
| prevent this from being viable as well. And of course, it's
| possible that some of these shortcomings are _very_ hard to
| "fix." We've had AI winters before, and self-driving seems to
| have stalled.
|
| However, I strongly suspect that it's theoretically
| _possible_ to build a machine that "thinks" as well as a
| human. And I no longer have any solid idea how far we are
| from that point, especially if we make certain architectural
| changes. We _might_ be closer than we think. Or at least we
| should stop assuming that next-gen models are _obviously_
| safe.
|
| Nvidia's hope to increase training performance 1 million
| times in 10 years seems potentially dangerous to me.
| Anthropic's plan to spend over $1 billion training next-gen
| models seems awfully sketchy as well.
| ben_w wrote:
| If they weren't paying attention to Black Mirror, the BSG
| reboot, the Terminator franchise, the Replicator arc of
| Stargate, every episode of Star Trek where the
| computer/Data/the holodeck/nanites went wrong, Ex-Machina, The
| Matrix franchise, Age of Ultron, basically any Asimov AI story,
| Blade Runner, 2001, or any folk tale about being careful what
| you ask of powers that take you literally from Midas to
| Fantasia...
|
| "Oh, they're just fictional!"
|
| ...then I suggest attaching a rusty metal spike to the
| keyboard.
|
| It won't _do_ anything, it's easy to avoid, it'll just sit
| there looking dangerous.
|
| Hopefully that would be enough just by itself.
|
| And yes, this is a reference to a similar suggestion for making
| drivers pay more attention behind the wheel.
| mrob wrote:
| There is no sci-fi story that details a realistic scenario of
| AI existential risk, because sci-fi stories are written by
| and for humans. The potential danger of AI comes from the
| combination of superhuman abilities and profoundly alien
| mind. No human can predict any specific behavior of a
| superhuman intelligence, only that it will probably succeed
| in its goals, whatever those goals happen to be. And out of
| all possible goals, only a small proportion are compatible
| with life continuing to exist. This is especially true when
| you consider only simple goals, and simple things are
| generally easier to make.
|
| Sci-fi is a distraction. There can be no heroic human
| resistance like in the stories. If an AI is intelligent
| enough to be dangerous, it's intelligent enough to conceal
| its intentions until it's too late. The only way to beat a
| superhuman AI is by not making it in the first place.
| mitthrowaway2 wrote:
| Yes, an AI apocalypse is not depicted in any science
| fiction because it would make for terrible fiction. Good
| stories depict relatable characters engaged in a battle
| between near-equals that ends in victory against terrifying
| odds. Not "someone saying 'oops' and then everyone falling
| over dead", which is the Yudkowsky scenario.
| blibble wrote:
| maybe something like the BBC Threads film is the way to go?
|
| plot of threads:
| - nuclear war happens
| - all humanity suffers horribly
| - ends with scene of deformed stillborn child
|
| plot of artificial super intelligence:
| - tech company creates super-intelligence
| - idiot CEO enters badly thought out goal in an attempt to
|   capture 2% more search market share
| - super-intelligence executes goal
| - all life on earth is casually exterminated by the AI as it
|   attempts to reach its goal
| - ends with scene of 100% of the earth's surface converted
|   to NVIDIA GPUs
| flangola7 wrote:
| "The sorcerer's apprentice mop scene, but there's no master
| sorcerer to save us and the mops are making more and better
| mops."
|
| One of the better non science fiction analogies I've heard.
| whatever1 wrote:
| Eventually we will have our own personal AI tuned on our own
| data over time. Maybe we will be upgrading the foundation
| model, but the tuning will be personal.
|
| At least I hope so.
| sixothree wrote:
| I fear the world will be structured such that we don't get
| that kind of liberty. Surveillance capitalism has made me
| more pessimistic. Or realistic.
| ben_w wrote:
| I don't see liberty in having an AI tuned to me. Rather, I
| see that as being the exact surveillance you're
| pessimistic/realistic about.
| atleastoptimal wrote:
| People are usually driven by basic operant conditioning, not
| safety-aware abstract principles.
|
| In the 2000s everyone was hyper-aware of privacy issues,
| putting personal information online, safety, etc. But after 15
| years of people doing riskier and riskier things with letting
| systems and computers store and use their data, and only
| getting benefits and more fun videos to watch in return, there
| is no shared internal drive to act in accordance with every
| possible safety principle.
|
| People just want the cool chatbot to do their work for them.
| Especially with the scale of FOMO with these things, no one is
| going to sit back and not be risky when they read that other
| people are saving many hours a day letting ChatGPT do their
| work for them. Until something really bad happens people aren't
| going to change their behavior.
| smitec wrote:
| This looks good as an overarching framework, but it will likely
| fall into the same bucket as a lot of regulation in cutting-edge
| fields (if it ever becomes mandatory): it is only as good as the
| quality of the people doing the assessment.
|
| While developing SaMD (software as a medical device) in the
| past, I've worked with a lot of people in the medical world who
| had little to no idea about software. They can apply the
| principles in the abstract but will likely not dig deep enough
| to catch some very major issues.
|
| In the medical world, things like post market surveillance and
| notification of adverse events help to at least create a public
| feedback loop here. I think we will need something similar in
| this space if we really want to see more than a surface-level,
| checklist-ticking exercise.
| ftxbro wrote:
| > software as a medical device
|
| I had a question about this earlier: if a doctor uses Google to
| look something up, is Google then being used, in some legal or
| regulatory sense, as a medical device?
___________________________________________________________________
(page generated 2023-04-09 23:00 UTC)