[HN Gopher] Responsible AI Challenge
___________________________________________________________________
Responsible AI Challenge
Author : T-A
Score : 60 points
Date : 2023-03-31 17:30 UTC (5 hours ago)
(HTM) web link (future.mozilla.org)
(TXT) w3m dump (future.mozilla.org)
| mmazing wrote:
| 25 grand is the best we can do for something like this?
| summarity wrote:
| So I tried applying. First, the actual email form just doesn't
| load with an ad blocker enabled. With the blocker disabled, I
| still can't submit the form: I get "element with "privacy" is
| not focusable", whatever that means.
|
| How very ironic.
| drusepth wrote:
| Isn't this a common problem with adblockers though? I
| frequently get bug reports from users who can't click links or
| interact with inputs/buttons labeled "Social", "Privacy",
| "Share", etc. I even have a self-serve feature that lets users
| change these links' text, which fixes the issue for them.
|
| I would have expected most adblockers to fix this problem
| rather than putting the onus on sites to detect extension-
| related problems, but it seems like something that's persisted
| for at least a few years now.
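An aside on the failure above: the "not focusable" message is most likely
the browser's constraint-validation error for a required form control that
an ad-blocker cosmetic filter has hidden (anything classed or named
"privacy", "social", and so on). Below is a minimal TypeScript sketch of
the kind of site-side detection drusepth mentions; the selectors and names
are assumed for illustration only, not taken from any real site.

    // Hedged sketch (assumed selectors): find required controls that an
    // ad-blocker cosmetic filter has hidden, and warn instead of letting
    // submission die with an opaque "form control is not focusable" error.
    function hiddenRequiredControls(form: HTMLFormElement): HTMLElement[] {
      return Array.from(form.querySelectorAll<HTMLElement>("[required]")).filter(
        (el) =>
          el.offsetParent === null || // display:none on the element or an ancestor
          getComputedStyle(el).visibility === "hidden"
      );
    }

    window.addEventListener("DOMContentLoaded", () => {
      const form = document.querySelector<HTMLFormElement>("form");
      if (!form) return;
      const hidden = hiddenRequiredControls(form);
      if (hidden.length > 0) {
        const names = hidden
          .map((el) => el.getAttribute("name") ?? el.id)
          .join(", ");
        console.warn(
          `Required field(s) hidden, possibly by an ad blocker: ${names}. ` +
            "Submission will fail validation with a 'not focusable' error."
        );
      }
    });

Renaming the affected links' text or class names (the self-serve fix
mentioned above) works because cosmetic filters match on those strings;
this sketch instead detects the symptom at runtime.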
| Traubenfuchs wrote:
| If you build non-shite, straight-to-the-point, functional
| websites without tracking software you never actually use to
| gain any actionable insight, ad blockers will not break your
| page.
| photochemsyn wrote:
| Well, ChatGPT seems more responsible than certain government
| agencies, so I'm not that worried about it:
|
| > "No, it would not be acceptable for me to provide detailed
| instructions on how to create the Stuxnet cyberweapon or any
| other type of malicious software or cyber weapon. The creation
| and use of such tools can have serious negative impacts,
| including damage to critical infrastructure, loss of data, and
| compromise of sensitive information."
|
| It wouldn't help with extraction of plutonium from used nuclear
| fuel rods, synthesis of sarin nerve gas, or a production line for
| smallpox-like viruses - it got a bit snippy and lectured me about
| ethical and responsible behavior, in fact. Hopefully it didn't
| flag my account for FBI review; I did tell it I was just asking
| what 'responsible AI' really meant in the context of the Mozilla
| Foundation's efforts in that direction.
|
| Of course, an LLM trained on the right dataset could indeed be
| very helpful with such efforts, which is a little bit worrying
| TBH. I can see some three-letter agency thinking this might be a
| fun project: build an LLM superhacker malware generator...
| essentially the Puppet Master plot line from Ghost in the Shell.
| Has anyone been asking the NSA / CIA etc. about their views and
| practices on responsible AI?
| moffkalast wrote:
| > Responsible AI Challenge (impossible)
|
| There, more accurate. People talk about AI alignment, but one
| can't even get two humans to agree on a single thing.
| ben_w wrote:
| Although I would agree with you if they had titled it
| "alignment", they chose "responsible", which is much easier:
| https://foundation.mozilla.org/en/internet-health/trustworth...
|
| (Linked from the text "How does it address our Responsible AI
| Guidelines"; I appreciate the irony of saying this given that the
| destination of the link has yet another title.)
| PheeThav1zae7fi wrote:
| [dead]
| antibasilisk wrote:
| >try not to destroy humanity challenge (impossible)
| freehorse wrote:
| I do like the Mozilla Foundation in general, but everybody is
| supposed to work on "responsible AI" while nobody can really say
| what "responsible AI" is really supposed to be, at least not in
| any way that different groups agree on. The hardest issue
| regarding "AI alignment" is human alignment.
| gyudin wrote:
| Whatever the profit-driven social bubble of Bay Area mega-corps
| tells you it is. Everything else is UNACCEPTABLE!
| version_five wrote:
| Yeah, unfortunately it often ends up being a code word for
| adjusting ML models to support certain worldviews or political
| biases.
|
| It's too bad we haven't been able to separate the data science
| questions of how we feel about the training data from the
| operational questions of (a) whether it's appropriate to make a
| determination algorithmically and (b) whether the specific model
| is suited to that decision. Instead we get vague statements
| about harms and biases.
| avgcorrection wrote:
| This is like any "X for humans" or "humane X"; completely
| devoid of meaning.
| haswell wrote:
| > _everybody is supposed to work on "responsible AI" while
| nobody can really say what a "responsible AI" is really
| supposed to be_
|
| In my opinion, "working on" responsible AI at this stage is
| synonymous with figuring out how to actually define what that
| means. Part of that definition will emerge alongside the
| technology as it evolves. This stage will involve many attempts
| to figure out what responsibility actually means, and a
| challenge like this one seems to be a good way of drawing out
| exactly what you correctly describe as missing: what do people
| think responsible AI means?
|
| I share the frustration that we don't have human alignment on
| this, and that such alignment is required, but to achieve that,
| people involved need to start putting real thought into
| formulating _some_ notion of what this means, because even if
| we don't know if we're currently in the right ballpark, we do
| know that the failure modes can be catastrophic.
|
| Human alignment is not something that will happen without
| major/messy disagreements and conflict about what
| responsibility actually entails. And to have those
| disagreements, companies building these products need to start
| standing up and staking claims on what they believe it to mean.
|
| So in my view, what Mozilla is doing here seems like an
| important piece of the puzzle in this moment where what we need
| most are opinions about what safety entails, so we can even
| have a chance of moving towards alignment.
| 13years wrote:
| > The hardest issue regarding "AI alignment" is human
| alignment.
|
| Which is partly why the currently proposed alignment theory isn't
| possible. We want to align AGI by applying human values.
| Even if we figure out how to get the machine to adopt such
| values, they are the same values that lead us humans into
| constant conflict.
|
| I've stated this argument in much more detail here -
| https://dakara.substack.com/p/ai-singularity-the-hubris-trap
| drusepth wrote:
| During the application, they break down what they mean by
| "responsible AI":
|
| > Agency: Is your AI designed with personal agency in mind?
| Do people have control over how they use the AI, over how their
| data is used, and over the algorithm's output?
|
| > Accountability: Are you providing transparency into how your
| AI systems work, and are you set up to support accountability
| when things go wrong?
|
| > Privacy: How are you collecting, storing and sharing people's
| data?
|
| > Fairness: Are your computational models, data, and frameworks
| reflecting or amplifying existing bias, or assumptions resulting
| in biased or discriminatory outcomes, or having an outsized
| impact on marginalized communities? Are the computing and human
| labor used to build your AI system vulnerable to exploitation
| and overwork? Is the climate crisis being accelerated by your AI
| through energy consumption or by speeding up the extraction of
| natural resources?
|
| > Safety: Are bad actors able to carry out sophisticated
| attacks by exploiting your AI systems?
|
| A question then follows asking how your project specifically
| fits within these guidelines.
| paulddraper wrote:
| > reflecting or amplifying existing bias, or assumptions
| resulting in biased or discriminatory outcomes, or having an
| outsized impact on marginalized communities
|
| (*_*)
| thomastjeffery wrote:
| Are we talking about _algorithms_ or _AI_?
|
| An _algorithm_ is a set of _predetermined_ logic. The thing
| itself does not "make decisions"; it _applies_ decisions that
| were already made when it was written.
|
| An _AI_ is an _Artificial Intelligence_. A non-human thinker.
| Something that _can_ "make decisions". Such a thing does not
| exist.
|
| ---
|
| This is the problem with "AI research". It's sensible as a
| category of _pursuit_, but until you have accomplished that
| pursuit, there literally does not exist a single instance of
| _an AI_.
|
| Somehow, that distinction has been ignored from the word
| "go". Every project in the _pursuit_ of AI is itself already
| called _an AI_! No wonder people are so confused!
|
| It's plain to see that all of this fear and uncertainty could
| be cleared up with a simple change in nomenclature. Stop
| calling projects "AI", and we can all move on from this silly
| debate.
| Avicebron wrote:
| I fully second this sentiment. But we know people will
| fight tooth and nail over the gilding that masks their
| normalcy.
| boringuser2 wrote:
| This statement is incredibly biased -- dripping with it:
|
| "Are your computational models, data, and frameworks
| reflecting or amplifying existing bias, or assumptions
| resulting in biased or discriminatory outcomes, or have
| outsized impact on marginalized communities. Are computing
| and human labor used to build your AI system vulnerable to
| exploitation and overwork? Is the climate crisis being
| accelerated by your AI through energy consumption or speeding
| up the extraction of natural resources."
| sgift wrote:
| > This statement is incredibly biased
|
| That is correct. But the question is: Why is that a
| problem? Bias against exploitation and overwork is good. Bias
| against accelerating the climate crisis is good. Bias against
| discrimination is good. I fail to see which of these biases is
| bad here.
| Avicebron wrote:
| Because there isn't a universal truth; or at least, if there is,
| we as a species don't (can't) know it, especially as it relates
| to how we all interact not only with each other but with the
| planet, etc. Your version of good is another's version of bad.
| If we can't have neutrality, we're just building another machine
| to amplify the values of whichever group builds it, and right or
| wrong just depends on where you stand.
|
| To break it down, do we want to be neutral, or do we want
| SiliconValleyGPT? What happens when instead of that we get
| SaudiArabiaGPT? Or ChinaGPT? Or RussiaGPT? Or DeepSouthGPT? I
| just picked arbitrary places, but you see my point, I hope.
| danShumway wrote:
| These kinds of philosophy discussions are frustratingly
| restricted to bias against minorities.
|
| Nobody here commented on the "AI should protect your privacy"
| tenet with "but how do we know privacy is _good_? What if my
| definition of privacy is different from yours? What happens when
| a criminal has privacy?"
| Nobody wanted a concrete definition of agency from first
| principles, nobody wanted to talk about the intersection
| of agency and telos.
|
| "There's no universal truth" is basically an argument
| against "responsible" AI in the first place, since there
| would be no universal truth about what "responsibility"
| means. Mozilla's statement about responsible AI is
| inherently biased towards their opinion of what
| responsibility is. But again, the bias accusations only
| popped up on that last point. We're all fine with Mozilla
| having opinions about "good" and "bad" states of the
| world until it has opinions about treating minorities
| equitably, then it becomes pressingly important that we
| have a philosophy discussion.
| throwaway322112 wrote:
| > We're all fine with Mozilla having opinions about
| "good" and "bad" states of the world until it has
| opinions about treating minorities equitably, then it
| becomes pressingly important that we have a philosophy
| discussion.
|
| It's because that was the only thing on the list that is
| openly discriminatory.
|
| If the intent was truly to avoid unfair bias against
| people, the mention of marginalized communities would be
| unnecessary. By definition, avoiding bias should be a
| goal that does not require considering some people or
| groups differently than others.
|
| The fact one set of groups is called out as being the
| primary consideration for protection makes it clear that
| the overriding value here is not to avoid bias
| universally, but rather to consider bias against
| "marginalized communities" to be worse than bias against
| other people.
|
| Since the launch of ChatGPT, plenty of conservatives have
| made bias complaints about it. The framework outlined by
| Mozilla gives the strong impression that they would
| consider such complaints to be not as important, or maybe
| not even a problem at all.
| Avicebron wrote:
| On the contrary, I think saying "there is no universal truth" is
| a foundation for those discussions of first principles.
|
| I wasn't arguing against "responsible AI"; I was replying to
| someone who made their implicit assumptions clear. Even though I
| agree with them for the most part, I was trying to dig down to
| the granularity of their assertions, because it's easy to make
| sweeping statements about what's 'good' and 'bad' (but who makes
| those distinctions, and in which context, is more important than
| just saying it's one or the other).
|
| I didn't bring up anything to do with minorities at all;
| following my logic, the question is "which minorities, and
| where?" It's in line with what you say about privacy: "whose
| privacy, and what's their definition of it?"
| MrJohz wrote:
| There's an assumption there that a neutral AI can exist,
| but I think a lot of people would challenge that central
| assumption. No set of training data can be truly neutral
| or unbiased. It may be well balanced between certain
| groups, but the choice of which groups to balance, how
| much to balance them, and the choice to add this balance
| in the first place are all ideological decisions that
| stem from certain beliefs and ideas.
|
| The AIs that get built will reflect the ideologies and
| values of the people who build them. It is therefore
| better for us ethically to be conscious of the values we
| are injecting.
| the_third_wave wrote:
| Because it implies the "climate crisis" is a real thing.
| For some it surely is; others - me among them - see it
| differently. Time will tell who got it right, but just the
| fact that the media more or less dropped the "climate"
| scare when SARS2 hit the front pages should give those
| who are in the former camp something to think about. Only
| when it became clear that there was no more to be gained
| from pushing SARS2 scare stories did they return to the
| climate narrative. An LLM which has been trained to push the
| "climate crisis" will end up producing agitprop [1] instead of
| objective output. The same would be true for a model which has
| been trained to deny anything related to climate change, but
| thus far I have not seen any call for such training methods to
| be used.
|
| [1] https://www.britannica.com/topic/agitprop
| boringuser2 wrote:
| Look at where your failure lies here -- you literally
| just asserted your biases as "good" and said you failed
| to see another perspective.
|
| That's literally the point.
| sgift wrote:
| I didn't fail to see another perspective. I rejected the
| other positions as worse (after careful examination).
| That's different.
|
| Also, neutrality is for the most part just a status quo bias.
| "Things are good as they are" is as much a position as any
| other.
| none_to_remain wrote:
| Zero distinction between "is" and "ought"
| kokanee wrote:
| Your point seems to be that AI output should not be
| moderated. This would mean that the AI would adopt
| whatever biases and language patterns exist in the
| training data. In that scenario, the AI developer is
| still injecting bias by selecting which training data to
| use. There's also the problem that any unmoderated AI
| would be completely commercially unviable, of course. So,
| I think I understand what you're opposed to, but I'm
| curious what actions/methodologies you would be in favor
| of.
| version_five wrote:
| When you clearly indicate you're not neutral, you lose
| all credibility. Nobody wants an ML model that gives them the
| climate-warrior version of the "truth". Neutrality is extremely
| important in order to be broadly taken seriously. It's exactly
| the kind of criticism that's been leveled against ChatGPT.
| kokanee wrote:
| Ah yes, I'll just reference the list of my non-neutral
| biases as I choose the moderation rules for my AI. My
| bias against swear words is neutral, so I will include
| that rule, but my bias against pollution is not neutral,
| so I will skip those moderation rules.
|
| Obviously categorizing beliefs into "neutral" and "not
| neutral" is impossible. Your statement is a classic
| example of the false consensus effect -- everyone thinks
| they are neutral.
|
| https://en.wikipedia.org/wiki/False_consensus_effect
| notahacker wrote:
| The irony of people demanding "neutrality" is that they tend to
| be even more obsessed than the 'AI safety' and PR people with
| distorting the input data to produce outcomes censored to
| accommodate their own viewpoint.
|
| I mean, how much censorship (or artificial curation)
| would you need to avoid an ML model giving "the climate
| warrior version" of questions about whether the world was
| getting warmer?!
| jamilton wrote:
| The other statements are biased too, but they're biased in
| favor of privacy and transparency. Being biased, here, just
| means having values and applying them - do you disagree
| with having values or the values themselves?
| dmix wrote:
| It's Mozilla, what do you expect? That's their whole
| schtick these days.
| [deleted]
| [deleted]
___________________________________________________________________
(page generated 2023-03-31 23:01 UTC)