[HN Gopher] A computer can never be held accountable
___________________________________________________________________
A computer can never be held accountable
Author : zdw
Score : 110 points
Date : 2025-02-03 22:01 UTC (58 minutes ago)
(HTM) web link (simonwillison.net)
(TXT) w3m dump (simonwillison.net)
| a3w wrote:
| Wisdom from '79!
|
| Could also be wisdom from the fifties, found again.
| throwitaway222 wrote:
| AI will definitely, without a doubt, make executive decisions. It
| already makes lower-level decisions. The company that runs the AI
| can be held accountable (meaning less likely OpenAI or the
| foundational LLM, but more likely the company calling LLMs to make
| decisions on car insurance, etc.).
| chasing wrote:
| Executives have always used decision-making tools. That's not
| the point. The point is that the executive can't point to the
| computer and say "I just did what it said!" The executive is
| the responsible party. She or he makes the choice to follow the
| advice of the decision-making tool or not.
| owlbite wrote:
| The scary thing for me is when they've got an 18-year-old
| drone operator making shoot/no-shoot decisions on the basis of
| some AI metadata analysis tool (phone A was near phone B, we
| shot phone B last week...).
|
| You end up with "Computer says shoot" and so many cooks
| involved in the software chain that no one can feasibly be
| held accountable except maybe the chief of staff or the
| president.
| stavros wrote:
| Yeah but it's fine because nobody cares if you kill a few
| thousand brown people extra.
| themanmaran wrote:
| Thing is, the chain of responsibility gets really muddled over
| time, and blame is hard to dish out. Let's think about denying
| a car insurance claim:
|
| The person who clicks the "Approve" / "Deny" button is likely
| an underwriter looking at info on their screen.
|
| The info they're looking at gets aggregated from a lot of
| sources. They have the insurance contract. Maybe one part is an
| AI summary of the police report, and another part is a repair
| estimate that gets synced over from the dealership. Plus a list
| of prior claims this person has, and probably a dozen other
| sources.
|
| Now what happens if this person makes a totally correct
| decision based on their data, but that data was wrong because
| the _syncFromMazdaRepairShopSFTP_ service got the quote data
| wrong? Who is liable? The person denying the claim, the
| engineer who wrote the code, AWS?
|
| In reality, it's "the company," insofar as fault can be
| proven. The underlying service providers they use don't
| really factor into that decision. AI is just another tool in
| that process that (like other tools) can break.
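|
| A minimal sketch of that provenance problem (all names here,
| including the sync service, are hypothetical):
|
|     from dataclasses import dataclass
|
|     @dataclass
|     class Fact:
|         value: object
|         source: str  # which upstream system produced this input
|
|     # Inputs the underwriter sees, each tagged with its origin
|     claim_file = {
|         "repair_estimate": Fact(4200.00, "syncFromMazdaRepairShopSFTP"),
|         "police_report": Fact("rear-end collision", "ai_summary_service"),
|         "prior_claims": Fact(2, "claims_db"),
|     }
|
|     # If the denial is later disputed, each input can at least be
|     # traced back to the system (and the party) that produced it
|     for field, fact in claim_file.items():
|         print(f"{field} = {fact.value!r} (source: {fact.source})")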
| etaioinshrdlu wrote:
| I suspect within our lifetimes people will grant AI and robots
| rights, but with rights come responsibilities, and finally we
| will be able to hold the computer accountable!
| rzzzt wrote:
| What does the backside say? I can make out the title at the
| bottom: "THE COMPUTER MANDATE", but not much else.
| Terr_ wrote:
| Others have tried to figure out exactly what actual paperwork
| that particular image might be from (e.g. a memo or
| presentation flashcards) but AFAIK it's still inconclusive.
|
| A plausible transcription:
|
| > THE COMPUTER MANDATE
|
| > AUTHORITY: WHATEVER AUTHORITY IS GRANTED IT BY THE SOCIAL
| ENVIRONMENT WITHIN WHICH IT OPERATES.
|
| > RESPONSIBILITY: TO PERFORM AS PRE-DIRECTED BY THE PROGRAMMER
| WHENEVER INSTRUCTED TO DO SO
|
| > ACCOUNTABILITY: NONE WHATSOEVER.
| shakna wrote:
| The first word of the paragraph appears to be "authority".
|
| I can't quite make out the first paragraph's contents.
|
| But a bit after that comes another semi-title,
| "responsibility", and part of it reads:
|
| > TO PERFORM AS PRE-DIRECTED BY THE PROGRAMMER WHENEVER
| INSTRUCTED TO DO SO
|
| This [0] small link might make it easier to read bits.
|
| [0] https://imgur.com/rnW2RJa
| canterburry wrote:
| Isn't accountability simply to prevent repeat bad behavior in the
| future...or is it meant to be punitive without any other
| expectations?
|
| If meant to prevent repeat bad behavior, then simply
| reprogramming the computer accomplishes the same end goal.
|
| Accountability is really just a means to an end, and with
| machines that end can be accomplished in other ways that
| aren't possible with humans.
| brap wrote:
| Right, but as long as you have humans, you will probably need
| accountability.
|
| If a human decided to delegate killing enemy combatants to a
| machine, and that machine accidentally killed innocent
| civilians, is it really enough to just reprogram the machine? I
| think you must also hold the human accountable.
|
| (Of course, this is just a simplified example, and in reality
| there are many humans in the loop who share accountability,
| some more than others)
| miltonlost wrote:
| You fundamentally don't understand either accountability or
| what people mean by "computers can't be held accountable". Who
| is at fault when a computer makes a mistake? That is
| accountability.
|
| You cannot put a computer in jail. You cannot fine a computer.
| Please, stop contorting what people mean just because you want
| AI to make decisions that absolve you of guilt.
| chgs wrote:
| What is the purpose of putting a person in jail or fining
| them?
|
| Retribution? Reformation? Prevention?
| miltonlost wrote:
| Mixture of all three, but for the purposes of
| "accountability", prevention of the behavior in the first
| place. But I don't want to debate prisons when that's
| derailing the larger point of "accountability in
| AI/computers".
| cmgriffing wrote:
| Consider the Volkswagen scandal where code was written that
| fudged the results when in an emissions testing
| environment.
|
| The only person to see major punishment for that was the
| software dev who wrote the code, but the decision to write
| that code involved far more people up the chain. THEY should
| be held accountable in some way, or else nothing prevents
| them from using some other poor dev as a scapegoat.
| echoangle wrote:
| In this context, prevention. So people see what happens if
| they screw up in a negligent way and make sure to not do it
| themselves.
| canterburry wrote:
| What is the purpose of accountability?
| miltonlost wrote:
| To stop people from making illegal decisions ahead of time,
| and not just to punish them after. If there is no
| accountability attached to an AI, then a person making a
| killer robot would have no reason not to make one. If they
| knew they would be imprisoned for making a killer robot,
| they would be less likely to make it.
|
| In a world without accountability, how do you stop evil
| people from doing whatever evil things they want with AI?
| Terr_ wrote:
| I think you're confusing the tool with the user.
|
| Improving the tool's safety characteristics is _not_ the same
| as holding _the user_ accountable because they made stupid
| choices with unsafe tools. You want them to change their
| behavior, no matter how idiot-proofed their new toolset is.
| maxbond wrote:
| This makes sense if the computer was programmed that way
| _accidentally_. If the computer is a cut-out to create
| plausible deniability, then reprogramming it won't actually
| work. The people responsible will find a way to reintroduce a
| behavior with a similar outcome.
| wmf wrote:
| In practice they will try to avoid acknowledging errors and
| will never reprogram the computer. That's why a human appeals
| system is needed.
| Macha wrote:
| > If meant to prevent repeat bad behavior, then simply
| reprogramming the computer accomplished the same end goal.
|
| Note the bad behaviour you're trying to prevent is not just the
| specific error that the computer made, but delegating authority
| to the computer to the level that it was able to make that
| error without proper oversight.
| chasing wrote:
| You've set up an either-or here that fails to take into account
| a wide spectrum of thought around accountability and
| punishment.
|
| When it comes to computers, the computer is a tool. It can be
| improved, but it can't be held any more accountable than a
| hammer.
|
| At least that's how it should be. Those with wealth will do
| whatever they feel they need to do to evade accountability when
| they create harm. That will no doubt include trying to pin the
| blame on AI.
| 1shooner wrote:
| This sounds like a conflation of responsibility with
| accountability. A machine responsible for delivering a certain
| amount of radiation to a patient can and should be
| reprogrammed. The company and/or individuals that granted a
| malfunctioning radiation machine that responsibility need to be
| held accountable.
| ysofunny wrote:
| So then, neither can a crowd. Not anymore; a crowd will be able
| to blame a computer now.
| dumbfounder wrote:
| The human who decides to use the AI that makes decisions is the
| one who should be held accountable.
| nine_zeros wrote:
| Arrest the executives of companies that allow malicious use of
| AI?
|
| Second-degree murder. Much like a car driver can't blame their
| car for an accident, a corporate driver shouldn't be allowed to
| blame their software for the decision.
| echoangle wrote:
| Interesting that this comes up again after I just discussed
| this on here yesterday, but you actually can blame your car
| for accidents.
|
| If a mechanical or technical problem was the cause of the
| accident and you properly took care of your car, you won't be
| responsible, because you did everything that's expected of
| you.
|
| The problem would be defining which level of AI decision
| making would count as negligent. Sounds like you would like
| to set it at 0%, but that's something that's going to need to
| be determined.
| carlhjerpe wrote:
| While hard/impossible in practice, I agree.
|
| The Dutch did an AI thing:
| https://spectrum.ieee.org/artificial-intelligence-in-governm...
| Barrin92 wrote:
| There's the old joke:
|
| _"It should be noted that no ethically-trained software
| engineer would ever consent to write a DestroyBaghdad procedure.
| Basic professional ethics would instead require him to write a
| DestroyCity procedure, to which Baghdad could be given as a
| parameter."_
|
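| In code, the dodge is just parameterization; a toy sketch (the
| function name is adapted from the joke, the body is invented):
|
|     def destroy_city(city: str) -> None:
|         # A "general-purpose" procedure: the engineer never chose
|         # a target, so the moral weight shifts to whoever calls it.
|         print(f"*** destroying {city} ***")  # stand-in for the effect
|
|     destroy_city("Baghdad")  # accountability now rests with the caller
|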
| Removing yourself one or more degrees from decision making
| isn't only accidental; it is, and increasingly will be, done
| intentionally to divert accountability. "The algorithm
| malfunctioned" is already one of the biggest get-out-of-jail-
| free cards, and with autonomous systems I'm pretty pessimistic
| it's only going to get worse. It's always been odd to me that
| people focus so much on what broke and not on who deployed it
| in the first place.
| miltonlost wrote:
| It's why I could never work at Meta, knowing how responsible I
| would feel for aiding various genocides around the world. How
| any engineer there is able to ethically live with themselves is
| beyond me (but I also don't make that Meta money).
| hsbauauvhabzb wrote:
| I can defer decision making to a computer but I cannot defer
| liability.
|
| Computers have the final say on anything to do with computers.
| If I transfer money at my bank, a computer bug could send that
| money to the wrong account due to a solar ray. The bank has
| accepted that risk, and on some (significantly less liable but
| still liable) level, so have I.
|
| Interestingly, there are cases where I have not accepted any
| liability - records (birth certificate, SSN) held about me by my
| government, for example.
| kps wrote:
| > ... but I cannot defer liability.
|
| For that, you need a corporation.
| Terr_ wrote:
| > a computer bug could send that money to the wrong account due
| to a solar ray
|
| I think the original quote captures that with the qualifier "a
| _management_ decision", which, given that it was 1979, implies
| it's separate from other kinds of decisions being made by non-
| manager employees following a checklist, or by the machines
| that were slowly replacing them.
|
| So a cosmic-ray bit-flip changing an account number would be
| analogous to an employee hitting the wrong key on a typewriter.
| lrvick wrote:
| Meanwhile I work in reproducible builds and remote attestation.
| We absolutely can and must hold computers accountable, now that
| we have elected them into positions of power in our society.
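|
| A minimal sketch of what that can look like (file paths are
| hypothetical): two parties build the same source independently,
| and the artifact only ships if their bits match:
|
|     import hashlib
|
|     def digest(path: str) -> str:
|         with open(path, "rb") as f:
|             return hashlib.sha256(f.read()).hexdigest()
|
|     # A build is reproducible if independent builders get
|     # identical bits; tampering shows up as a hash mismatch.
|     if digest("builder-a/app.bin") != digest("builder-b/app.bin"):
|         raise SystemExit("attestation failed: builds do not match")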
| byteknight wrote:
| You can only hold computers accountable if you can guarantee no
| outside modification, and as far as I'm aware we still haven't
| ever built a system that isn't "pop-able".
| h0l0cube wrote:
| Surely the company that is making profit out of said build
| systems and providing attestations holds some accountability.
| Someone wrote the code. Someone paid for the code to be written
| to a particular standard, under particular budget and
| resourcing constraints. Someone was responsible for ensuring
| the code was adequately audited. Someone claimed it was fit for
| purpose, and likely insured it as such, precisely _because_
| they are ultimately responsible.
| nostrademons wrote:
| These days it seems like we can't hold humans accountable either.
| stevebmark wrote:
| I don't know if this is being shared intentionally given the
| timing of "The Gospel" AI target finder, but it is truly horrific
| that AI is being used this way and as an accountability target.
| taeric wrote:
| I confess this line always upset me. It is cute, but it
| directly points to the idea that the main recourse for a
| mistake is to take umbrage with an individual who must have
| obviously been wrong.
|
| No. If a mistake is made and it impacts people, take action to
| make the impacted people whole and change the system so that a
| similar mistake won't be made again. If you want, you can argue
| that the system can be held accountable and changed.
|
| Further, if there is evidence of a bad actor purposely making
| choices that hurt people, take action on that. But actors in
| the system are almost certainly immune from accountability by
| policy. And for good reason.
| miltonlost wrote:
| A system!!! Held accountable!!!! A system, just like a
| computer, cannot be held accountable, for the reason that a
| system, like a computer, is not alive and cannot actually be
| held accountable in any way that the system or computer cares
| about.
|
| But what is a system made of? People, who are making bad
| decisions and should be held accountable for that. Without
| accountability for bad actors in systems, you get companies
| committing crimes because those at the top rarely see fines
| or jail time. The same immunity from responsibility you think
| is a good thing in a system is what I would say is corporate
| America's major sin.
|
| You're upset at the line because you fundamentally
| misunderstand what it means for someone to be held accountable
| for something.
| coliveira wrote:
| This hits the nail on the head. The issue is that humans are
| accountable, but systems are not. And smart humans have learned
| how to hide behind systems to avoid accountability. That's the
| whole strategy of using corporations, a social structure that
| removes individuals from responsibility. A corporation can do
| pretty much anything, including criminal acts, and the humans
| benefiting from it are shielded from the negative results
| except for financial losses. What we're seeing is just the
| whole strategy moving to the level of computer systems (and
| Google has already used this accountability-skirting strategy
| for more than two decades).
| Handprint4469 wrote:
| I disagree. The main point of this line is not about what to do
| _after_ a mistake (assign blame, punish, etc), but rather about
| setting up the correct incentives _before_ anything happens so
| that a mistake is less likely.
|
| When you're accountable you suddenly have skin in the game, so
| you'll be more careful about whatever you're doing.
| medhir wrote:
| so if someone makes a change to the system... there's a
| _person_ somewhere holding themselves accountable for the
| faults of the system, no?
| coliveira wrote:
| Not if there are multiple people who, in principle, never
| directly coordinated to make that happen. They can always
| point the finger at others and say they're not responsible
| for that bad outcome.
| landryraccoon wrote:
| Fwiw I agree with you.
|
| I feel that in general people obsess over assigning blame to
| the detriment of actually correcting the situation.
|
| Take the example of punishing crimes. If we don't punish theft,
| we'll get more theft right? But what do you do when you have
| harsh penalties for crime, but crime keeps happening? Do you
| accept crime as immutable, or actually begin to address root
| causes to try to reduce crime systemically?
|
| Punishment is only one tool in a toolbox for correcting bad
| behavior. I am dismayed that people are so fearful of losing
| this single tool that they want to architect our entire
| society around making sure it is available.
|
| With AI we have a chance to chart a different course. If a
| machine makes a mistake, the priority can and should be fixing
| the error in that machine so the same mistake can never happen
| again. In this way, fixing an AI can be more reliable than
| punishing human beings ever could be.
| gwern wrote:
| I would suggest an updated version, more germane to the current
| fast-developing landscape of AI agents:
|
| > A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
|
| > THEREFORE WE MUST NEVER DENY THAT COMPUTERS CAN MAKE DECISIONS
| Terr_ wrote:
| I disagree; that's throwing away the 1979-era qualifier of
| _management_ decision, as distinct from the decisions made by
| an hourly employee (or computer) following a pre-made checklist
| (or program). It's not the same as FizzBuzz "deciding" to
| print something out.
|
| Related qualifiers might be "policy decision" or "design
| decision".
| PaulHoule wrote:
| ... is not a moral subject. But animals can be moral subjects [1].
|
| [1] https://academic.oup.com/book/12087
| cyanydeez wrote:
| We let corporations be unaccountable, so why would we treat
| computers any differently?
| 0xDEAFBEAD wrote:
| The implication here is that unlike a computer, a person or a
| corporation _can_ be held accountable. I'm not sure that's true.
|
| Consider all of the shenanigans at OpenAI:
| https://www.safetyabandoned.org/
|
| Dozens of employees have left due to lack of faith in the
| leadership, and they are in the process of converting from
| nonprofit to for-profit, all but abandoning their mission to
| ensure that artificial intelligence benefits all humanity.
|
| Will anything stop them? Can they actually be held accountable?
|
| I think social media, paradoxically, might make it _harder_ to
| hold people and corporations accountable. There are so many
| accusations flying around all the time, it can be harder to
| notice when a situation is truly serious.
| dankwizard wrote:
| This feels very "I'm 12 and this is deep".
|
| If a bridge collapses, are you blaming the cement?
| hinkley wrote:
| Construction companies don't shrug and blame the concrete. Or
| at least nowhere near as often as companies that employ
| software in their customer interactions.
| wmf wrote:
| We should keep tapping the sign as long as people are still
| using "computer says no; nothing can be done" as a serious
| argument.
| doctorpangloss wrote:
| In AI crap, I think this crops up as: giant company asks
| vendor, "Indemnify us against XYZ, but we also want to own
| everything." My dude, that's what owning the thing entails:
| taking liability for it.
|
| The punchline will be that people will agree to whatever smoke
| and mirrors leads to sales.
| h0l0cube wrote:
| I take your point, but the cement mix can absolutely have an
| impact on the integrity of the bridge structure. But to further
| your point, the cement mix was either incorrectly specified or
| inadequately provided, and the responsibility for that falls on
| one of the humans in the loop.
___________________________________________________________________
(page generated 2025-02-03 23:00 UTC)