[HN Gopher] On-silicon real-time AI compute governance from Nvid...
___________________________________________________________________
On-silicon real-time AI compute governance from Nvidia, Intel, EQTY
Labs
Author : kfrzcode
Score : 17 points
Date : 2024-12-18 19:39 UTC (3 hours ago)
(HTM) web link (www.eqtylab.io)
(TXT) w3m dump (www.eqtylab.io)
| vouaobrasil wrote:
| Verifiable compute doesn't do much good if the people doing the
| verifying and securing are making wild profits at the expense of
| the rest of us. This technology is more about making sure nothing
| horrible happens inside enterprises than about protecting people
| from AI, even if "safety" is claimed.
| malwrar wrote:
| I'm trying to decide if I should be concerned about the safety of
| general-purpose computing with such technologies sneaking into
| our compute. Verifying compute workloads is one thing, but I
| can't find information on what kind of regulatory compliance
| controls this addition enables. I assume it is mostly just
| operation counting and other audit logging discussed in AI safety
| whitepapers, but even that feels disturbing to me.
|
| Also, bold claim: silicon fabrication scarcity is artificial and
| will be remedied shortly after Taiwan is invaded by China and the
| world suddenly realizes it needs to acquire this capability (and
| can profit from doing so). Regulatory approaches based on
| hardware factors will probably fail in the face of global
| competition on compute hardware.
| ramoz wrote:
| Reads as compliance controls embedded into the workload, with
| integrated gates that halt execution or verify at runtime that
| controls are met - providing receipts alongside computed outputs.
| This is generally oriented toward multi-party, confidential,
| sensitive computing domains. As AI threat models develop,
| compliance checks on things like training runs and benchmarks
| become more relevant to security posture. (Rough sketch of the
| gate-and-receipt pattern after the refs below.)
|
| ref:
|
| https://openai.com/index/reimagining-secure-infrastructure-f...
|
| https://security.apple.com/blog/private-cloud-compute/
|
| https://arxiv.org/html/2409.03720v2
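|
| To make the gate-and-receipt idea concrete, here is a minimal
| sketch in Python. It is purely illustrative: the allowlist,
| signing key, and run_model stub are assumptions of mine, not
| anything described by EQTY/Nvidia/Intel, and real deployments
| would do the check and signing in attested hardware rather than
| application code.
|
|     import hashlib, hmac, json
|
|     SIGNING_KEY = b"demo-key"  # placeholder; real systems use an
|                                # attested, hardware-held key
|     APPROVED_MODELS = {"<sha256 of an approved model blob>"}
|
|     def run_model(model_blob: bytes, prompt: str) -> str:
|         # stand-in for the actual inference call
|         return "output for: " + prompt
|
|     def gated_run(model_blob: bytes, prompt: str) -> dict:
|         model_hash = hashlib.sha256(model_blob).hexdigest()
|         # execution gate: refuse to run anything not on the
|         # approved list
|         if model_hash not in APPROVED_MODELS:
|             raise PermissionError("model not approved by policy")
|         output = run_model(model_blob, prompt)
|         # receipt: bind model, input, and output hashes together,
|         # then sign so a third party can verify them later
|         receipt = {
|             "model_sha256": model_hash,
|             "input_sha256":
|                 hashlib.sha256(prompt.encode()).hexdigest(),
|             "output_sha256":
|                 hashlib.sha256(output.encode()).hexdigest(),
|         }
|         payload = json.dumps(receipt, sort_keys=True).encode()
|         receipt["signature"] = hmac.new(
|             SIGNING_KEY, payload, hashlib.sha256).hexdigest()
|         return {"output": output, "receipt": receipt}
|
| The point is just the shape: verify before execute, then emit a
| signed record binding model, input, and output so someone else
| can check what actually ran.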
| malwrar wrote:
| Thanks for the reading.
| bee_rider wrote:
| I don't know what dialect this is written in, but can anybody
| translate it to engineer? What type of problem are they trying to
| solve and how are they going about it? (Is this DRM for AIs?)
| jasonsb wrote:
| Had to use bullshitremover dot com for this one. This is the
| translation:
|
| > Blah blah, AI is the future, trust it, crypto, governance,
| auditing, safer AI.
| ben_w wrote:
| Allow me:
|
| "Are you running the AI that you thought you were running, or a
| rip-off clone that will sneakily insert adverts for Acme, your
| one-stop-shop for _roadrunner_-related explosives, traps, and
| fake walls, into 1% of outputs? Here's how you can be sure."
| lawlessone wrote:
| >Verifiable Compute represents a significant leap forward in
| ensuring that AI is explainable, accountable.
|
| This is like saying the speedometer on a car prevents speeding.
| ramoz wrote:
| It's a poor comparison.
|
| Regardless, it's about trusted observation - in your metaphor,
| it's what lets you prove in court that you weren't actually
| speeding.
|
| Apple deploys verifiable compute in Private Cloud Compute to
| ensure transparency as a measure of trust, and arguably as a
| method of prevention, direct or not (depending on whether they
| use verifiability measures as execution gates).
| moffkalast wrote:
| No, this is like saying adding a speedometer to your car makes
| it, as an inanimate object, personally liable for going fast if
| you press on the accelerator.
| transfire wrote:
| Gearing up to put a hefty price on AGI. You can only run it if
| you have a very costly certificate, which probably requires
| detailed security clearances as well.
___________________________________________________________________
(page generated 2024-12-18 23:00 UTC)