[HN Gopher] The Pentagon inches toward letting AI control weapons
___________________________________________________________________
The Pentagon inches toward letting AI control weapons
Author : jonbaer
Score : 59 points
Date : 2021-05-13 18:31 UTC (4 hours ago)
(HTM) web link (www.wired.com)
(TXT) w3m dump (www.wired.com)
| davecheney wrote:
| Big WOPR energy
| bovermyer wrote:
| If they're going to go down this route, I demand that they start
| testing humanoid variants, as well as living tissue over a metal
| endoskeleton.
| bserge wrote:
| Only if they fix it to sound like a strong Austrian-American.
| lostlogin wrote:
| Having just finished some documentation on cryogen handling,
| I'm ready. As a further tangent and something that came as
| news to me - some cryogens can condense oxygen out of the
| atmosphere and if oil or grease is present there is a risk of
| fire. Asphyxiation, frostbite, fire.
| sneak wrote:
| Sounds to me like easy money.
| pibefision wrote:
 | I saw this in 1983: _WarGames_.
| hcurtiss wrote:
| While there are obvious perils here, I fear it would be
| tremendously irresponsible to let our adversaries own this space.
| AI weapons must be developed if for no other reason than to
| battle AI weapons.
| qwertox wrote:
 | There is a science fiction novel by Stanislaw Lem titled _Peace
 | on Earth_ which touches on this topic.
|
| https://en.wikipedia.org/wiki/Peace_on_Earth_(novel)
|
| > The evolution of artificial intelligence has allowed major
| world powers to sign a rather curious treaty: the Moon is
| divided into national zones (proportional to each nation's
| Earth real estate) and all weapons development and production
| must be moved there to be handled by factories. This is
| supposed to completely demilitarize Earth, achieving the long-
| sought dream of world peace. A MAD stabilizing factor is
| apparently preserved by the ability of countries, in case of
| war, to quickly ship weapons down from the Moon.
| sitkack wrote:
| I have only a single downvote, so I'll leave a comment instead.
 | This will directly lead to an AI arms race. We need to put a
 | pin in this just as we did with chemical weapons. Our
 | civilization might not survive a WW1-level experiment.
 |
 | If we go down this path and we survive, Earth will be the
 | closest approximation of hell in the heavens.
| gus_massa wrote:
 | Do you realize it's the same reasoning that was used to start
 | the Manhattan Project?
 |
 | Do you realize that only one country has nuked civilian cities?
| proggy wrote:
 | I disagree. While I concede that there may be credible
 | information that an adversary is developing AI weapons which
 | need to be counterbalanced to reach some sort of Nash
 | equilibrium, simply putting time and effort into the
 | development of any adversarial technology turns up the
 | temperature in the space, so to speak. More work in this area
 | means more capability, more complexity, more risk of misuse,
 | and a heightened risk of additional actors entering the field.
 | Look no further than the theory of deterrence, and how it
 | shifted from a tolerable short-term strategy during the Cold
 | War (one with few actors) to a high-risk strategy as the trove
 | of technology was enlarged, the likelihood of it spreading
 | increased, and the number of active and potential actors
 | increased dramatically as well.
|
| In my view AI weapons development demands a judicious, steady,
| paced, and thoroughly informed approach to keep the whole space
| from burning up --- figuratively and literally.
| elefanten wrote:
| High risk strategy? Deterrence via MAD is overwhelmingly what
| keeps the relative peace between the major nuclear powers.
|
| It's not a great situation overall, but you couldn't uninvent
| nukes at that point, so it has proven remarkably effective
| thus far.
|
| Same dynamic with AI/ML --- can't uninvent. If one
| aggression-minded power has it, balance will be required for
| stability.
| alexfromapex wrote:
 | We can use AI to control things, but having an AI decide
 | whether or not to attack without human sign-off is a terrible,
 | terrible idea.
| yboris wrote:
| 100% disagree. We have treaties that prevent use of gas
| weapons. We need governments to coordinate and cooperate rather
| than race each other to murderous death.
|
| https://www.stopkillerrobots.org/
| tablespoon wrote:
| > 100% disagree. We have treaties that prevent use of gas
| weapons.
|
 | I don't have a link, but I read a pretty interesting analysis
 | arguing that the only reason we have such treaties is that
 | those weapons are ineffective against a first-tier military and
 | also useless to it. Basically, defense is possible as the
| needed PPE is relatively cheap and can be used effectively by
| highly-trained troops, and high-tech conventional weapons
| used by a trained force are far more effective than gas.
|
| So basically, those treaties were about giving up something
| advanced militaries would have abandoned anyway.
| jtolmar wrote:
| > I don't have a link
|
| It's probably this:
| https://acoup.blog/2020/03/20/collections-why-dont-we-use-
| ch...
| tablespoon wrote:
| That's it. Thanks!
| neatze wrote:
 | It is better to have AI weapons fighting each other than to
 | have 19-24 year olds fighting each other.
| the_only_law wrote:
| I imagine in a number of cases it will be more like AI
| weapons fighting 19-24 year olds.
| neatze wrote:
 | True, but which option would you choose:
 |
 | 1. Send trained 19-30 year olds to risk their lives: minimal
 | civilian casualties, but the highest chance of combat
 | casualties.
 |
 | 2. Bomb an urban area out of existence: massive civilian
 | casualties, a medium chance of combat casualties.
 |
 | 3. Send terminators: medium civilian casualties and zero
 | combat casualties.
| yboris wrote:
| If you take people out of war, wars are more likely to
| happen. There will be less opposition from the public
| when it's just "toasters" and "microwaves" (electronics)
| being sent overseas to fight "the bad guys".
|
 | Given the track record of how many _unjust_ wars have
 | been fought, we need _more_ preventative measures
 | against wars, not more technology to make wars easier and
 | more palatable.
| mywittyname wrote:
| I wonder how developed nations will deal with #3s being
 | deployed on their shores. It might be much easier to
 | sneak in Terminators than any sort of bomb. And the
 | terror factor would be off the charts.
| yboris wrote:
| Are you thinking battles happen in open fields with no one
| around other than rows of opposing forces? Do you think no
| bad actors are going to have access to such weapons? The
| world is a lot more complex than what you describe.
| neatze wrote:
 | I don't really understand how you reached those
 | conclusions from my sentence.
 |
 | Civilians suffer the most in wars and in times of
 | violent conflict.
 |
 | Urban warfare is the highest-intensity warfare, with the
 | highest casualty rates for both combatants and civilians.
 |
 | The first failed attempt at a drone attack with a chemical
 | weapon was in 1990s Japan; I believe this is the first
 | recorded attempt at a terrorist attack using a drone.
 |
 | AI warfare with sci-fi flying, crawling, walking
 | terminators will be more humane than carpet-bombing a
 | population into oblivion (the Soviet-Afghan War and the
 | Vietnam War); it will have substantially fewer combat and
 | civilian casualties, and in general it will increase kinetics
 | (weapons effects), precision, accuracy, and decisiveness
 | (RISTA).
| yboris wrote:
 | Your original sentence implied that the deaths would
 | occur only to 19-24 year olds - which clearly isn't going
 | to be the case if the warfare happens in civilian
 | quarters.
 |
 | You created a false dichotomy - as if those were the only
 | two outcomes and we had to choose between them. When
 | obviously (as your follow-up comment shows), there are
 | other scenarios where numerous innocent non-combatants
 | will die.
| jonas21 wrote:
| Ironically, the best way to get such a treaty may be to have
 | two or more countries with sufficiently advanced AI weapons
 | that it becomes in each of their best interests to stop the
 | others.
| seg_lol wrote:
 | Let's have an AI weapon Olympics and the world can see why
 | we can't have them.
| srcmap wrote:
 | How do we enforce any treaties that prevent AI usage?
| mLuby wrote:
| With an AI, specifically an AI that kills younger AIs
| (called the Great Old 1).
| ethbr0 wrote:
| A cynic would say that we only banned chemical weapons
| because they weren't particularly useful in major power
| combat.
|
| See: cluster munition and mine bans
| munk-a wrote:
 | That cynic would be wrong, since chemical weapons are
 | extremely effective in combat. When the scales aren't even
 | (e.g. something like the Vietnam War), the stronger side can
 | afford equipment that allows its soldiers to operate
 | effectively while the other side is unable to respond to
 | aggression, greatly lowering expected casualties. With
 | powers on par, you still have the factor of well-coordinated
 | chemical attacks allowing breakthroughs to be executed much
 | more easily.
|
| Chemical weapons being banned is an example of a mostly
| inexplicably altruistic action on the part of major powers.
| Well, inexplicable if you're a cynic.
| dmitriid wrote:
| > That cynic would be wrong since chemical weapons are
| extremely effective in combat.
|
 | They are not. They're among the most ineffective weapons
 | outside of swords and bows.
|
| A sibling comment provided the link discussing it:
| https://acoup.blog/2020/03/20/collections-why-dont-we-
| use-ch...
| saiya-jin wrote:
 | Yes and no. There was still Agent Orange in Vietnam. There
 | is still depleted uranium ammo in, e.g., the A-10 Warthog,
 | which provably sows all kinds of cancers and deformities in
 | children born in the area decades after its use. Not
 | poisonous gases that burn your lungs per se, but generally
 | if the deal is too sweet to refuse, it stays on the table
 | for some time.
| rich_sasha wrote:
 | These aren't really chemical weapons. They have a
 | non-"chemical" primary use, and also happen to be toxic.
 | Depleted uranium will harm people, but on timescales
 | useless in a war (unless it hits them fast, that is).
 |
 | Also, surely, none of those are anywhere near as bad as
 | mustard gas. Yes, Agent Orange gives people cancer, but
 | presumably not everyone (?), and anyway mustard gas
 | _immediately burns your lungs_. That's just a different
 | league.
| mywittyname wrote:
 | Chemical weapons are great when you have a long line of
 | densely packed, dug-in troops. But they're much less
 | effective against modern armies that don't operate
 | shoulder-to-shoulder and that carry protection with them.
 |
 | They are very effective against civilian populations,
 | though. Which is likely the real reason behind the ban.
 | Especially when you consider that certain chemical
 | weapons are only selectively banned; they can't be used
 | against a target for toxicity, but it's completely fine
 | to use them in another manner. White phosphorus is the
 | most famous example of this: it's a war crime to poison
 | people with it, but a-okay to light them on fire with it,
 | even though that's a horrific way to die.
 |
 | The only explanation for this hypocrisy is that chemical
 | weapons don't have much value against military targets
 | anymore. So substances are banned unless they are
 | actually useful in other capacities.
| seppin wrote:
| > We have treaties that prevent use of gas weapons.
|
| Ask Syrian civilians how effective those treaties are. You
| can't, of course.
| jldugger wrote:
| > it would be tremendously irresponsible to let our adversaries
| own this space
|
| Can't wait for your finished script of Dr Strangelove 2.
| mLuby wrote:
| Wonder when we'll get to the MAINPART: Militarized Artificial
| Intelligence Non-Proliferation And Reduction Treaty.
| tablespoon wrote:
| >> it would be tremendously irresponsible to let our
| adversaries own this space
|
| > Can't wait for your finished script of Dr Strangelove 2.
|
| It's easy to mock and parody MAD, arms races, etc., but
| certain conditions make them necessary and rational, and
| those conditions can't easily be changed.
| swiley wrote:
| Now I understand how the anti-nuclear people in the 20th
| century felt.
| ethbr0 wrote:
| Nuclear MAD seems to have worked out well, so far.
| titzer wrote:
 | ...until it doesn't. I would highly recommend a visit to the
 | Hiroshima Peace Memorial Museum if you'd like a different
 | perspective on nuclear weapons, rather than an
 | American/Western one.
|
| And...man, I wish we had that $5.5 trillion back
| [https://www.brookings.edu/the-hidden-costs-of-our-nuclear-
| ar...].
| ethbr0 wrote:
| See also: Guernica, Warsaw, Dresden, London, Wesel,
| Tokyo, Hargeisa.
|
| Nuclear weapons are bigger explosives. Not different
| explosives.
| titzer wrote:
 | Every one of those cities was bombed for _weeks_,
 | sometimes months. Hiroshima and Nagasaki were _single_
 | bombs. They cause massive radiation exposure and fallout.
 | Tens of thousands of people were poisoned to death in the
 | aftermath of the bombings of Japan. You haven't seen the
 | suffering of those people and cannot even imagine the
 | hell they were subjected to. And today, we have weapons
 | hundreds of times as powerful. City-ending megaweapons by
 | the thousands. And yes, nuclear winter is a real thing
 | too; we could very seriously impact the global climate
 | and cause mass extinctions in a couple of hours. A
 | nuclear war is a planet-scale atrocity that would end
 | billions of human lives.
 |
 | I've been to the Hiroshima museum. Like I said, it will
 | change your perspective. I'm not really all that inclined
 | to continue this interchange given your flippancy.
| [deleted]
| ethbr0 wrote:
| If you want to dismiss 100,000 Tokyo citizens burning to
| death in two nights because it doesn't fit the narrative
| of "nuclear is the worst," then that's your decision.
|
| But personally, I consider them equally horrific. And as
| stated previously, if MAD prevents either from happening,
| the risk seems worth it.
| anigbrowl wrote:
| The problem is that MAD is only reliable in the absence of
| proliferation, but we go to such lengths to prevent
 | proliferation that it causes other problems. When you get
 | down to it, a nuke is not all _that_ hard to build with the
 | resources available to almost any state, so the fragility
| of deterrence has a distorting effect on international
| relations.
| yboris wrote:
| "so far" included very many near misses - where individuals
| with explicit authority to launch counter attacks were in
| position to decide whether to launch NUCLEAR MISSILES AT A
| COUNTRY -- because some sensors malfunctioned.
|
| See Stanislav Petrov - the man who prevented an apocalypse
| https://en.wikipedia.org/wiki/Stanislav_Petrov
| ethbr0 wrote:
| Yes. I'll take near misses with nuclear Armageddon over
| total war.
|
| The anti-nuclear weapons crowd conveniently forgets that
| the alternative to MAD isn't reliably peace.
| yboris wrote:
 | What % chance of nuclear holocaust per year are you
 | willing to take over the "non-reliable peace"?
|
| We're talking about potential existential risk - a wiping
| out of all of humanity (through severe ecological
| poisoning after a barrage of cross-globe nuclear
| strikes).
| ethbr0 wrote:
 | 0.01%?
 |
 | If my math works out, that's roughly a 10% chance of an
 | occurrence over 1,000 years.
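 |
 | (For the arithmetic: treating each year as an independent
 | trial, the cumulative chance over n years is 1 - (1 - p)^n.
 | A minimal sketch in Python, using the 0.01% figure here and
 | the 1% Cold War estimate that comes up downthread:)
 |
 |     # probability of at least one occurrence in `years` years,
 |     # given an independent yearly probability p
 |     def cumulative_risk(p, years):
 |         return 1 - (1 - p) ** years
 |
 |     print(cumulative_risk(0.0001, 1000))  # ~0.095 -> roughly 10%
 |     print(cumulative_risk(0.01, 35))      # ~0.297 -> roughly 30%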
| yboris wrote:
| Thank you for providing a number. I have seen estimates
| of what the risk _was_ - as high as 1% per year for much
| of the duration of the Cold War.
| ethbr0 wrote:
| So if we said '55 - '90 (35 years), that'd be a 30%
| chance of nukes being launched @ 1% yearly risk.
|
| That's more on the fence for me.
|
| On one hand, it's high (and the impact would have been
| catastrophic!). On the other hand, it prevented NATO /
| Warsaw Pact mechanized conflict across Europe (which also
| would have been catastrophic).
|
| Tough call. Especially with the uncertainty that, were
| you to give up your nuclear weapons, there's no guarantee
| your counterparties would actually do the same.
|
| In the 60s foundational position papers, there's
| definitely a strong thread of "We will commit to MAD not
| because it's the best approach, but because it's the best
| zero trust approach."
| yboris wrote:
| > That's more on the fence for me.
|
| What? You claimed you are comfortable with 0.01% chance,
| but now you're "on the fence" with a 1% chance?
|
| An analogy: some chance of your child being occasionally
| bullied in school, or a 0% chance of bullying but a 30%
| chance they'll get murdered.
|
| You say "tough call" -- and I'm baffled.
| ethbr0 wrote:
| Comfortable and on the fence are two different points,
| no?
| eternalban wrote:
 | Total war will not leave the planet radioactive and
 | uninhabitable. The absolute worst case is something analogous
 | to the damage of WWII. (Note: we survived WWII.)
|
| Armageddon is lights out for humanity.
| ethbr0 wrote:
| WWII was fought with the science and economic might of
| the 1930/40s. We've come a bit farther since then.
|
| People seem to seriously lack imagination concerning what
| a scientific and economic major power might do if its
| existence were threatened by a hated enemy. (Gene drives,
| anyone?)
| titzer wrote:
| We cannot allow a missile gap!
| daveslash wrote:
| Mr. President, we cannot allow a mineshaft gap!
| ljd wrote:
 | Famous last words.
 |
 | As an AI practitioner, I want us to invest in research to
 | manipulate other AI military systems, but not to use AI
 | ourselves.
 |
 | Take a page from the HFT book.
| tolbish wrote:
| I am curious where you would draw the line between "AI
| military systems" and existing systems in use such as Iron
| Dome and other algorithm-assisted weapons.
| eternalban wrote:
| Two obvious dimensions are:
|
| - agency: the machine makes decisions on its own.
|
| - defensive vs offensive action
| tolbish wrote:
| Perhaps I'm missing something, but that first dimension
| seems far from obvious. To continue my Iron Dome example,
 | isn't the system making decisions on its own when
 | intercepting anything it classifies as an incoming
 | ballistic threat?
| eternalban wrote:
| These are obvious _dimensions_ that need to be
| considered. Of course there can be discussion as to
| boundaries drawn in that (here 2-d) space as to
| 'acceptable' and 'unacceptable'. There may be other
| dimensions as well, such as 'decision making input
| sources', etc.
|
 | As to your specific example, while system x may have
 | 'full agency (autonomous, willful)', if it is purely
 | 'defensive' in nature, it may indeed fall within the
 | acceptable zone.
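 |
 | (A toy sketch of those two dimensions with one possible
 | boundary drawn in. The systems named and their placements
 | are my own illustrative assumptions, not settled
 | classifications:)
 |
 |     from dataclasses import dataclass
 |
 |     @dataclass
 |     class WeaponSystem:
 |         name: str
 |         agency: str   # "human_in_loop" or "autonomous"
 |         posture: str  # "defensive" or "offensive"
 |
 |     # one possible, debatable placement of real systems
 |     systems = [
 |         WeaponSystem("Iron Dome", "autonomous", "defensive"),
 |         WeaponSystem("Phalanx CIWS", "autonomous", "defensive"),
 |         WeaponSystem("armed UGV", "autonomous", "offensive"),
 |     ]
 |
 |     # e.g. treat only the autonomous-offensive corner as unacceptable
 |     for s in systems:
 |         ok = not (s.agency == "autonomous" and s.posture == "offensive")
 |         print(s.name, "acceptable" if ok else "unacceptable")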
| Dah00n wrote:
 | Well, you already watered it down into a worthless
 | principle when "AI bad" became "AI sometimes bad". OP
 | said "not use AI". Using AI in, for example, missiles for
 | defence still makes for an offensive weapon: after the
 | AI's defense you are stronger than you would be without
 | it, and you cannot take out missiles without attacking.
 | The point is to attack the enemy's AI system _without_
 | using AI. Otherwise it's still the same AI-versus-AI race
 | that will most likely end badly for humans at some point.
| Sebb767 wrote:
| > - defensive vs offensive action
|
 | So if Iron Dome launches a rocket in defense it's okay,
 | but that rocket mustn't use AI to find its target? I see
 | your general point, but that's going to be a line that's
 | very hard to draw.
| sillysaurusx wrote:
| I completely agree, and as an ML researcher, having that
| opinion feels heretical. So many people are of a "do no harm"
| mindset that I worry it means "or I won't work with you." And I
| quite like working with various researchers.
|
| I mentioned it publicly once, and I'm bracing for the day that
| some established researcher quote tweets it and says "we do not
| need this kind of thinking in our community, and we have a duty
| to exclude it" or some such.
|
| I don't care much though. You seem right, and that's good
| enough for me.
| elefanten wrote:
| We need researchers who will follow their moral instincts far
| more than we need researchers who care about conforming to
| the sociocultural norm. So, thank you.
|
| I guess I shouldn't just say "researchers" --- it's what we
| need in our citizens of humanity.
| mistrial9 wrote:
| you are directly contradicting the code of military
| command, prepare for social credit demotion! </snark>
| at_a_remove wrote:
| Of course, the deaths of innocents will dissolve in the calculus
| of responsibility this way, as each slice tends toward the
| infinitesimal. The spec was wrong for those conditions, the
| optics company we hired sold us something defective, the training
| set was bad, the review wasn't thorough, we got underpowered
| CPUs, the targeting system on those pinhead missiles has a known
| flaw, on and on, until absolutely nobody is at fault. Nobody will
| court martial a neural net and that'll be the end of it.
| HideousKojima wrote:
 | They already do. As far as I understand it, the Phalanx CIWS
 | can identify and choose to target an object based entirely on
 | radar and other internal sensors.
| tachyonbeam wrote:
| If anyone is curious what this automated missile defense looks
| like: https://www.youtube.com/watch?v=biyUjm4KZio
|
| It's an automated, super fast, self-aiming gatling gun.
| systemvoltage wrote:
 | Anduril (a startup by the founder of Oculus) is also working
 | on AI defense systems: https://www.anduril.com/
| bserge wrote:
| Has anyone tried attaching weapons (lasers, tasers, even
| guns) and a somewhat smart targeting system to a drone yet?
| ethbr0 wrote:
| https://m.youtube.com/watch?v=4l0Dh6qJ3RE&t=48s
|
 | In war, weapon speed trumps safety concerns.
 |
 | And hypersonic and laser weaponry is only going to shrink
 | reaction times further.
 |
 | We've been in a delegated-authority regime since the USS
 | Vincennes / Iran Air Flight 655.
|
| There are already shifts towards a delegated-preemption regime
| in EW (see: efforts to equip EW suites with self-adaptive
| capability). I can't imagine we won't see the same thing with
| actual weapons.
|
| Sure, put a human in the loop, if there's time. Otherwise...
| criddell wrote:
| And didn't the Aegis Combat System do that as well?
| nradov wrote:
| Yes it can.
|
| https://www.thedrive.com/the-war-zone/39508/how-the-aegis-
| co...
| dragonwriter wrote:
 | Phalanx CIWS is included as part of the Aegis Combat System
 | (as well as being used on almost all US Navy ships without
 | Aegis), so if Phalanx can do it, Aegis can necessarily
 | also do it.
| openasocket wrote:
| I probably need to read the report to get more information, but
| I'm just having trouble visualizing this hypothetical future
| where we need a swarm of autonomous agents to identify their own
 | targets and threats at all levels without any human intervention,
| and where any manual control would be too limiting. There are
| circumstances, like CIWS or CRAM systems automatically shooting
| down incoming munitions, where that comes in handy. I'd also be
 | more comfortable with putting autonomy in the hands of anti-
| aircraft systems (either SAMs or UAVs). In that case you can
| program in parameters and I can trust a machine to reliably stick
| to those parameters. But if we're talking about a machine gun
| mounted on an unmanned ground vehicle? If you really need to
| shave off that second or two of human reaction time, that's not a
| real battle, that's a showdown from a Western movie. I don't
| think it's too much effort to send a quick picture to a human, or
| have a human being specify where they can engage and for how long
| for area suppression.
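 |
 | (As a toy example of the kind of human-specified parameters
 | I mean -- the zone, window, and names below are made up:)
 |
 |     from datetime import datetime, timezone
 |
 |     # a human operator authorizes a geographic box and a time
 |     # window; the vehicle may only engage inside both
 |     ZONE = {"lat": (34.10, 34.12), "lon": (65.30, 65.33)}
 |     WINDOW = (datetime(2021, 5, 13, 18, 0, tzinfo=timezone.utc),
 |               datetime(2021, 5, 13, 18, 15, tzinfo=timezone.utc))
 |
 |     def may_engage(lat, lon, now):
 |         in_zone = (ZONE["lat"][0] <= lat <= ZONE["lat"][1] and
 |                    ZONE["lon"][0] <= lon <= ZONE["lon"][1])
 |         return in_zone and WINDOW[0] <= now <= WINDOW[1]
 |
 |     # e.g. inside the box, five minutes into the window -> True
 |     print(may_engage(34.11, 65.31,
 |                      datetime(2021, 5, 13, 18, 5, tzinfo=timezone.utc)))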
|
 | I think it will be multiple decades before AI has progressed to
 | the point where I'd trust it to correctly identify enemy soldiers
 | vs. non-combatants as well as or better than a human being.
| Fortunately, I think it will be even longer before that's
| something we need.
| v8dev123 wrote:
 | The Pentagon generals must be ill-informed about AI. I believe
 | they think of it as magic that can do everything correctly.
 |
 | This is very dangerous!!
 |
 | Look at Tesla Full Self-Driving. It literally killed me last
 | night. Weapons are nothing!!
| nradov wrote:
| The Pentagon has already been letting AI control weapons for
| decades. The Mk 60 naval mine entered service in 1979. It used
| automatic pattern matching to detect a target and then launched a
| guided torpedo. Modern AI algorithms are more complex but nothing
| has fundamentally changed.
|
| https://en.wikipedia.org/wiki/Mark_60_CAPTOR
| neatze wrote:
 | More recent technology: Javelin missiles, Hellfire missiles,
 | and cruise missiles are self-guided, AI-powered weapon
 | systems.
| athenot wrote:
| I'm reminded of that line from Shrek:
|
| "Some of you might die, but that's a sacrifice I'm willing to
| make."
___________________________________________________________________
(page generated 2021-05-13 23:03 UTC)