[HN Gopher] Underwriting Superintelligence
___________________________________________________________________
Underwriting Superintelligence
Author : brdd
Score : 33 points
Date : 2025-07-15 19:13 UTC (3 hours ago)
(HTM) web link (underwriting-superintelligence.com)
(TXT) w3m dump (underwriting-superintelligence.com)
| brdd wrote:
| The "Incentive Flywheel" of AI: how insurance unlocks secure Al
| progress and enables faster AI adoption.
| xmprt wrote:
| This only works if there are negative consequences faced by the
| insured parties when things go wrong. If all the negative
| consequences are faced by society and there are no regulations
| that impose that burden on the companies building AI, then we'll
| have unchecked development.
| brdd wrote:
| We agree! Unchecked development could lead to disaster.
| Insurers can insist on adherence to best practices,
| incentivizing safer development. They can also clarify liability
| and cover most (but not all) of the risk, leaving the developer
| on the hook for a portion of it.
| muskmusk wrote:
| I love it!
|
| Finally some clear thinking on a very important topic.
| blibble wrote:
| > We're navigating a tightrope as Superintelligence nears. If the
| West slows down unilaterally, China could dominate the 21st
| century.
|
| I never understood this argument
|
| as a non-USian: I'd prefer to be under the Chinese boot rather
| than have all of humanity under the boot of an AI
|
| and it is certainly no reason to do everything we possibly can
| to try and summon a machine god
| socalgal2 wrote:
| > I'd prefer to be under the Chinese boot rather than have all
| of humanity under the boot of an AI
|
| Those are not the options being offered. The options are under
| the boot of a Western AI or a Chinese AI. Maybe you'd prefer
| the Chinese AI boot to the Western AI boot?
|
| > certainly no reason to try to increase the chance of
| summoning a machine god
|
| The argument is that this is inevitable. If it's possible to
| make AGI, someone will eventually do it. Does it matter who does
| it first? I don't know. Yes, making it happen faster might be
| bad. Waiting until someone else does it first might be worse.
| hiAndrewQuinn wrote:
| If you financially penalize AI researchers, either with a
| large lump sum or in a way which scales with their expected
| future earnings (take your pick), and pay the proceeds to the
| people who put together the very cases which lead to the
| fines being levied, you can very effectively freeze AGI
| development.
|
| If you don't think you can organize international cooperation
| around this, you can simply put such people on some equivalent
| of an FBI-type Most Wanted list, and pay anyone who comes
| forward with information, and perhaps anyone who gets them
| within your borders, as well. If a government chooses to wave
| its dick around like this, it could easily cause other nations
| to copy the same law, thus instilling a new global Nash
| equilibrium where this kind of scientific frontier research is
| verboten.
|
| There's nothing inevitable at all about that. I hesitate to
| even call such a system extreme, because we already employ
| systems like this to intercept, e.g., high-level financial
| conspiracies via things like the False Claims Act.
| socalgal2 wrote:
| In my world there are multiple countries that each have an
| incentive to win this race. I know of no world where you
| can penalize AI researchers across international boundaries,
| nor any reason to believe your scenario could ever play out.
| You're dreaming if you think you could actually get all the
| players to co-operate on this. It's like expecting the
| world to come together on climate change. It's not
| happening and it's not going to happen.
|
| Further, it doesn't take a huge lab to do it. You can do it
| at home. It might take longer, but there's a 1.4 kg blob in
| everyone's head as proof of concept, and it does not take a
| data center.
| blibble wrote:
| > I know of no world where you can penalize AI
| researchers across international boundaries, nor any reason
| to believe your scenario could ever play out.
|
| Mossad could certainly do it
| blibble wrote:
| > The options are under the boot of a Western AI or a Chinese
| AI. Maybe you'd prefer the Chinese AI boot to the Western AI
| boot?
|
| given Elon's AI is already roleplaying as Hitler, and
| constructing scenarios on how to rape people, how much worse
| could the Chinese one be?
|
| > The argument is that this is inevitable.
|
| which is just stupid
|
| we have the agency to simply stop
|
| and certainly the agency to not try and do it as fast as we
| possibly can
| socalgal2 wrote:
| "We" do not as you can not control 8 billion people
| blibble wrote:
| it's certainly not that difficult to imagine
| international controls on fab/DC construction, enforced
| by the UN Security Council
|
| there's even a previous example of controls of this sort
| at the nation state level: those for nuclear enrichment
|
| (the cost to perform uranium enrichment is now less than
| building a state of the art fab...!)
|
| as a nation state (not Facebook): you're entitled to
| enrich, but only under the watchful eye of the IAEA
|
| and if you violate, then the US tends to bunker-bust you
|
| this paper has some ideas on how it might work:
| https://cdn.governance.ai/International_Governance_of_Civili...
| mattnewton wrote:
| > we have the agency to simply stop
|
| This is worse than the prisoner's dilemma: "we get there,
| they don't" is the highest payout for the decision makers
| who believe they will control the resulting
| superintelligence.
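|
| A minimal sketch of that payoff structure in Python
| (illustrative numbers and hypothetical code, not from the
| article; only the ordering of the payoffs matters):
|
|     # Toy 2x2 race game. Whatever the rival does, "race" pays
|     # more than "stop", so mutual racing is the Nash
|     # equilibrium even though mutual restraint beats it.
|     PAYOFFS = {
|         # (our move, their move): our payoff
|         ("race", "stop"): 10,   # "we get there, they don't"
|         ("stop", "stop"): 5,    # mutual restraint
|         ("race", "race"): -5,   # shared catastrophe risk
|         ("stop", "race"): -10,  # unilateral loss: worst case
|     }
|
|     def best_response(their_move: str) -> str:
|         """Our payoff-maximizing move vs. a fixed rival move."""
|         return max(("race", "stop"),
|                    key=lambda us: PAYOFFS[(us, their_move)])
|
|     assert best_response("stop") == "race"  # race dominates
|     assert best_response("race") == "race"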
| MangoToupe wrote:
| > The options are under the boot of a Western AI or a Chinese
| AI.
|
| This seems more like fear-mongering than anything based on
| reasoning I've been able to follow. China tends to keep
| control of its industry, unlike the US, where industry tends
| to control the state. I emphatically trust the Chinese state
| more than our own industry.
| gwintrob wrote:
| I'm biased because my company (Newfront) is in insurance, but
| there are a lot of great points here. This jumped out: "By 2030,
| global AI data centers alone are projected to require $5 trillion
| of investment, while enterprise AI spend is forecast to reach
| $500 billion."
|
| There's a mega trend of value concentrating in AI (and all the
| companies that touch/integrate it). Makes a ton of sense that
| insurance premiums will flow in that direction as well.
| blibble wrote:
| > This jumped out: "By 2030, global AI data centers alone are
| projected to require $5 trillion of investment, while
| enterprise AI spend is forecast to reach $500 billion."
|
| and by 2040 it will be $5000 trillion!
|
| and by 2050 it will be $5000000 quadrillion!
| gwintrob wrote:
| Ha, of course. A lot easier to forecast in a spreadsheet than
| actually make this happen. Based on the progress in AI in the
| past couple years and the capabilities of the current models,
| would you bet against that growth curve?
| blibble wrote:
| yes, there's not $5 trillion of dumb money spare
|
| (unless softbank has been hiding it under their mattress)
| choeger wrote:
| Is there _any_ indication whatsoever that there's even a glimpse
| of artificial intelligence out there?
|
| So far, I have seen language models that, quite impressively,
| translate between different languages, including programming
| languages and natural language specs. Yes, these models use a
| vast (compressed) store of knowledge from pretty much all of
| the internet.
|
| There are also chain of thought models, yes, but what kind of
| actual intelligence can they achieve? Can they formulate novel
| algorithms? Can they formulate new physics hypotheses? Can they
| write a novel work of fiction?
|
| Or aren't they actually limited by the confines of what we as a
| species already know?
| roenxi wrote:
| You seem to be part of a trend where most humans are defined
| as unintelligent: there are remarkably few people out there
| capable of formulating novel algorithms or physics
| hypotheses. Novelists are a little more common, if we admit
| the unreadable slop produced by people who really should
| choose careers other than writing. It speaks to the progress
| that
| machines have made that traditional tests of intelligence, like
| holding a conversation or doing well on an undergraduate-level
| university test, apparently no longer measure anything of
| importance related to intelligence.
|
| If we admit that even relatively stupid humans show some level
| of intelligence, then as far as I can tell we've already
| achieved artificial intelligence.
| yahoozoo wrote:
| > Is there any indication whatsoever that there's even a
| glimpse of artificial intelligence out there?
|
| no
| Animats wrote:
| For this to work, large class actions are needed. If companies
| are liable for large judgements, they will insure against
| them. If not, they will not try to avoid harms for which they
| need not pay.
| janalsncm wrote:
| > As insurers accurately assess risk through technical testing
|
| If that's not "the rest of the owl" I don't know what is.
|
| Let's swap out superintelligence for something more tangible.
| Say, a financial crash due to systemic instability. How would you
| insure against such a thing? I see a few problems, which are even
| more of an issue for AI.
|
| 1. The premium one should pay depends on the expected risk, which
| is the damage from the event multiplied by the chance of the
| event occurring. However, quantifying the damage term is
| basically impossible. If you bring down the US financial system,
| no insurance company can cover that risk. With AI, the damage
| might be the destruction of all of humanity, if we believe the
| doomers.
|
| 2. Similarly, the probability term is basically impossible to
| quantify. What is the chance of an event which has never happened
| before? In fact, having "insurance" against such a thing will
| likely create a moral hazard, causing companies to take even
| bigger risks.
|
| 3. On a related point, trying to frame existential losses in
| financial terms doesn't make sense. This is like trying to take
| out an insurance policy that will protect you from Russian
| roulette. No sum of cash can correct that kind of damage.
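|
| For concreteness, a toy version of the pricing arithmetic from
| point 1 (a minimal sketch with hypothetical numbers and names;
| nothing here comes from the thread):
|
|     # Toy expected-loss pricing: premium = p * damage, plus a
|     # loading for expenses and uncertainty. The objection above
|     # stands: with p unknown and damage potentially unbounded,
|     # the formula has nothing reliable to chew on.
|     def annual_premium(p_event: float, damage: float,
|                        loading: float = 0.3) -> float:
|         expected_loss = p_event * damage
|         return expected_loss * (1 + loading)
|
|     # A 1-in-10,000 annual chance of a $10B loss prices at
|     # $1.3M/year; push damage toward "all of humanity" and the
|     # premium is undefined.
|     print(annual_premium(1e-4, 10e9))  # 1300000.0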
| brdd wrote:
| Thanks for the thoughtful response! Some replies:
|
| 1. Someone is always carrying the risk; the question is who it
| should be. We suggest private markets should price and carry
| the first $10B+ before the government backstop. That
| incentivizes them to price and manage it.
|
| 2. Insurance has plenty of ways to manage moral hazard (e.g.
| copays). Pricing any new risk is hard, but at least with AI you
| can run simulations, red-team, review existing data, etc.
|
| 3. We agree on existential losses, but catastrophic events can
| be priced and covered. Insurers enforcing compliance with
| audits/standards would help reduce catastrophes, in turn
| lowering many existential risks.
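|
| A minimal sketch of how that layering might split a loss (the
| $10B attachment point comes from point 1 above; the copay rate
| is an illustrative assumption):
|
|     # Loss sharing: the developer retains a copay (skin in the
|     # game, point 2), private insurers carry the next layer up
|     # to a $10B attachment, and the government backstop takes
|     # the tail (point 1).
|     def split_loss(loss: float, copay: float = 0.2,
|                    attachment: float = 10e9):
|         developer = loss * copay
|         remainder = loss - developer
|         insurer = min(remainder, attachment)
|         government = max(0.0, remainder - attachment)
|         return developer, insurer, government
|
|     # A $15B loss: developer $3B, insurers $10B, backstop $2B.
|     print(split_loss(15e9))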
| yahoozoo wrote:
| With no skin in the game, either it will be cool if
| superintelligence happens, or it won't and I'll just get to
| enjoy some schadenfreude. Either all of these people are
| geniuses or they're Jonestown members.
___________________________________________________________________
(page generated 2025-07-15 23:00 UTC)