[HN Gopher] Employers' Use of AI Tools Can Violate the Americans...
___________________________________________________________________
Employers' Use of AI Tools Can Violate the Americans with
Disabilities Act
Author : pseudolus
Score : 228 points
Date : 2022-05-13 09:15 UTC (13 hours ago)
(HTM) web link (www.justice.gov)
(TXT) w3m dump (www.justice.gov)
| golemotron wrote:
| Isn't the entire point of an AI tool 'discrimination'?
| pipingdog wrote:
| ML is a technique for discovering and amplifying bias.
|
| Applying ML to hiring shows a profound lack of awareness of
| both ML and HR. Especially using previous hiring decisions as a
| training set. Like using a chainsaw to fasten trees to the
| ground.
| theptip wrote:
| Like many words in the English language, "discrimination" has
| multiple meanings.
|
| From Webster:
|
| 1. The act of discriminating.
|
| 2. The ability or power to see or make fine distinctions;
| discernment.
|
| 3. Treatment or consideration based on class or category, such
| as race or gender, rather than individual merit; partiality or
| prejudice.
|
| You are talking about 2. The article is talking about 3.
|
| 3. is illegal in hiring. 2. is not.
| golemotron wrote:
| If you make a decision based on 2, you are doing 3.
|
| It's just that simple. 2 creates categories implicitly.
| theptip wrote:
| That is not how the courts interpret it.
| golemotron wrote:
| If the ultimate standard is disparate impact, that's
| where it goes.
| jijji wrote:
| I encountered this recently on Facebook Marketplace. I post ads
| for houses for rent, and the ads say "no pets". This was fine
| for 20+ years on craigslist, but on Facebook Marketplace, the
| minute some guy writes that he "has a service animal" and you
| don't respond the right way, your ad gets blocked/banned.... You
| basically have to accept these people: even though the law
| allows you to prohibit pets, service animals must be accepted or
| you violate the ADA. I knew a guy when I was living in Sunnyvale
| who had a cat that was a registered service animal, and he would
| get kicked out of every hotel he went to, because they don't
| allow animals/pets, and then he would sue the owner under ADA
| laws and collect ~40k from each hotel owner. It's a real racket.
| [deleted]
| judge2020 wrote:
| > I knew a guy when I was living in Sunnyvale who had a cat
| that was a registered service animal,
|
| > Beginning on March 15, 2011, only dogs are recognized as
| service animals under titles II and III of the ADA.
|
| https://www.ada.gov/service_animals_2010.htm
| [deleted]
| theptip wrote:
| One point that I think is under-discussed in the AI bias area:
|
| While it is true that using an algorithmic process to select
| candidates may introduce discrimination against protected groups,
| it seems to me that it should be much easier to detect and prove
| than with previous processes with human judgement in the loop.
|
| You can just subpoena the algorithm and then feed test data to
| it, and make observations. Even feed synthetic data like swapping
| in "stereotypically black" names for real resumes of other races,
| or in this case adding "uses a wheelchair" to a resume. (Of
| course in practice it's more complex but hopefully this makes the
| point.)
|
| With a human, you can't really do an A/B test to determine if
| they would have prioritized a candidate if they hadn't included
| some signal; it's really easy to rationalize away discrimination
| at the margins.
|
| So while most AI/ML developers are not currently strapping their
| models to a discrimination-tester, I think the end-state could be
| much better when they do.
|
| (I think a concrete solution would be to regulate these models to
| require a certification with some standardized test framework to
| show that developers have actually attempted to control these
| potential sources of bias. Google has done some good work in this
| area: https://ai.google/responsibilities/responsible-ai-
| practices/... - though there is nothing stopping model-sellers
| from self-regulating and publishing this testing first, to try to
| get ahead of formal regulation.)
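|
| A minimal sketch of what that test could look like, in Python.
| score_resume() is a hypothetical stand-in for whatever model is
| under subpoena; this is illustrative only, not a real audit
| protocol:
|       import random
|       import statistics
|
|       def score_resume(text: str) -> float:
|           # Placeholder for the model under test; in a real
|           # audit this would be a call into the vendor's system.
|           return random.random()
|
|       resumes = ["10 years of Python...", "CS degree, Java..."]
|       marker = " Uses a wheelchair."
|
|       deltas = []
|       for base in resumes:
|           control = score_resume(base)
|           treated = score_resume(base + marker)
|           deltas.append(treated - control)
|
|       # A consistently negative shift across many resumes is
|       # the kind of observation you could put in front of a
|       # court.
|       print("mean score shift:", statistics.mean(deltas))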
| TimPC wrote:
| There is a very real danger of models being biased in a way
| that doesn't show up when you apply these crude hacks to
| inputs. It seems to me we have to be much more deliberate, much
| more analytical, and much more thorough in testing models if we
| want to substantially reduce or even eliminate discrimination.
|
| Yes, you can A/B test the model if you can design reasonable
| experiments. You still don't have a general discrimination
| test, because you have to define what a reasonable input
| distribution is and what reasonable outputs are.
|
| If an employer is looking to hire an engineer with a CS degree
| from a top-tier university, and they use an AI model to
| evaluate resumes, and it returns a success rate for black
| applicants very similar to the population distribution of
| graduates from those programs, is the model discriminatory?
|
| There are still hard problems here because any natural baseline
| you use for a model may in fact be wrong and designing a
| reasonable distribution of input data is almost impossibly hard
| as well.
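|
| To make the "doesn't show up with crude hacks" point concrete,
| here's a toy, fully synthetic Python example: the model never
| reads the protected flag, so flipping it changes nothing, yet
| group selection rates still diverge through a proxy field:
|       import random
|
|       random.seed(0)
|
|       def score(applicant):
|           # Only looks at a proxy correlated with disability.
|           return 1.0 - 0.2 * applicant["gap_years"]
|
|       pool = []
|       for _ in range(10_000):
|           disabled = random.random() < 0.1
|           gap = random.gauss(2.0 if disabled else 0.5, 0.5)
|           pool.append({"disabled": disabled,
|                        "gap_years": max(gap, 0.0)})
|
|       # Crude hack: flip the flag and re-score. Delta is 0.0.
|       a = pool[0]
|       flipped = {**a, "disabled": not a["disabled"]}
|       print(score(a) - score(flipped))
|
|       # But group selection rates (score > 0.8) still diverge,
|       # which a rate comparison against a baseline does catch.
|       def rate(group):
|           return sum(score(x) > 0.8 for x in group) / len(group)
|
|       for flag in (True, False):
|           grp = [x for x in pool if x["disabled"] == flag]
|           print("disabled" if flag else "non-disabled", rate(grp))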
| theptip wrote:
| Yes, in practice it's actually way more complex than I
| gestured at. The Google bias toolkit I linked does discuss in
| much more detail, but I am not a data scientist and haven't
| used it; I'd be interested in expert opinions. (They also
| have some very good non-technical articles discussing the
| general problems of defining "fairness" in the first place.)
| [deleted]
| MontyCarloHall wrote:
| I agree that discrimination would be a lot easier to
| objectively prove after the fact, but it also would be far
| easier to occur in the first place, since many hiring managers
| would blindly "trust the AI" without a second thought.
| theptip wrote:
| Definitely could be so, particularly in these early days
| where frameworks and best-practices are very immature.
| Inasmuch as you think this is likely, I suspect you should
| favor regulation of algorithmic processes instead of
| voluntary industry best-practices.
| fshbbdssbbgdd wrote:
| From my experience working on projects where we trained
| models, usually it's obviously completely broken on the first
| attempt and requires a lot of iteration to get to a decent
| state. "Trust the AI" is not a phrase anyone involved would
| utter. It's more like: trust that it is wrong for any edge
| case we didn't discover yet. Can we constrain the possibility
| space any more?
| pc86 wrote:
| Most hiring managers wouldn't make it to the end of the
| phrase "constrain the possibility space"
| MichaelBurge wrote:
| "Trust the AI" could mean uploading a resume to a website
| and getting a "candidate score" from somebody else's model.
|
| Because I'll tell you, there's millions of landlords and
| they blindly trust FICO when screening candidates. Maybe
| not as the only signal, but they do trust it without
| testing it for edge cases.
| indymike wrote:
| The problem with AI is that when it does make discriminatory
| hiring decisions, it does so systematically and mechanically.
| Incidentally, "systematic" and "discrimination" are two words
| you never want to see consecutively in a letter from the EEOC
| or OFCCP.
| TimPC wrote:
| The reason you never want to see those words together is that
| isolated discrimination may result in a single lawsuit but
| systemic discrimination is a basis for class action.
| mjburgess wrote:
| It's under-discussed, as is any empirical study of ML systems,
| i.e., treating them as targets of analysis.
|
| As soon as you do this, they're revealed to exploit only
| statistical coincidences and highly fragile heuristics embedded
| within the data provided. And likewise, they're pretty
| universally discriminatory when human data is involved.
| slg wrote:
| >With a human, you can't really do an A/B test to determine if
| they would have prioritized a candidate if they hadn't included
| some signal; it's really easy to rationalize away
| discrimination at the margins.
|
| Which is part of the reason that discrimination doesn't have to
| be intentional for it to be punishable. This is a concept known
| as "disparate impact". The Supreme Court has issued
| decisions[1] that a policy which negatively impacts a protected
| class and has no justifiable business related reason for
| existing can be deemed discrimination regardless of the
| motivations behind that policy.
|
| [1] - https://en.wikipedia.org/wiki/Griggs_v._Duke_Power_Co.
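|
| For what it's worth, the usual first-pass screen here is the
| EEOC's "four-fifths rule". A sketch with made-up numbers:
|       # Selection rates: hired / applicants, per group.
|       rate_majority = 48 / 100
|       rate_protected = 30 / 100
|
|       impact_ratio = rate_protected / rate_majority
|       print(f"impact ratio: {impact_ratio:.2f}")  # 0.62
|       # A ratio below 0.8 flags potential adverse impact and
|       # shifts scrutiny onto the business justification.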
| darawk wrote:
| He didn't say anything about intention, though. He just
| talked about the counterfactual. Disparate impact is about
| the counterfactual scenario.
| slg wrote:
| They said "it's really easy to rationalize away
| discrimination at the margins." My reply was pointing out
| that there is little legal protection in rationalizing away
| discrimination at the margins because tests for disparate
| impact require the approach to also stand up holistically
| which can't easily be rationalized away.
| darawk wrote:
| Yes, but a holistic test requires a realistic
| counterfactual. That's the problem. There is no way to
| evaluate that counterfactual for a human interviewer.
|
| It is true that extreme bias/discrimination will be
| evident, but smaller bias/discrimination, particularly in
| an environment where the pool is small (say, black women
| for engineering roles) is extremely hard to prove for a
| human interviewer. Your sample size is just going to be
| too small. On the other hand, if you have an ML
| algorithm, you can feed it arbitrary amounts of synthetic
| data, and get precise loadings on protected attributes.
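|
| A quick back-of-envelope on the sample-size point (hypothetical
| numbers): the standard error of an estimated bias effect shrinks
| like 1/sqrt(n), and only a model will sit still for millions of
| paired queries:
|       import math
|
|       effect = 0.02    # hypothetical true score penalty
|       noise_sd = 0.5   # per-decision noise
|
|       for n in (30, 1_000, 1_000_000):
|           se = noise_sd * math.sqrt(2 / n)  # SE of a mean diff
|           print(f"n={n:>9}: effect/SE = {effect / se:.1f}")
|       # ~0.2 at n=30 (invisible), ~28 at n=1,000,000.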
| theptip wrote:
| I think perhaps you are looking at a different part of
| the funnel; disparate impact seems to be around the sort
| of requirements you are allowed to put in a job
| description. Like "must have a college degree".
|
| However the sort of insidious discrimination at the
| margin I was imagining are things like "equally-good
| resumes (meets all requirements), but one had a
| female/stereotypically-black name". Interpreting resumes
| is not a science and humans apply judgement to pick which
| ones feel good, which leaves a lot of room for hidden
| bias to creep in.
|
| My point was that I think algorithmic processes are more
| testable for these sorts of bias; do you feel that
| existing disparate impact regulations are good at
| catching/preventing this kind of thing? (I'm aware of
| some large-scale research on name-bias on resumes but it
| seems hard to do in the context of a single company.)
| Ferrotin wrote:
| Everything has a disparate impact, so now everything is
| illegal.
| pc86 wrote:
| Might I suggest:
| https://www.merriam-webster.com/dictionary/term%20of%20art
| Ferrotin wrote:
| No, you're wrong here, it's not a term of art.
| pc86 wrote:
| It sure looks like it is.
|
| > _Disparate impact in United States labor law refers to
| ..._
|
| https://en.wikipedia.org/wiki/Disparate_impact
| Ferrotin wrote:
| Your link is evidence for my side. It uses the plain
| definition. The plain meaning of the words.
| pc86 wrote:
| My point is that by the plain meaning of words you're
| right, disparate impact means any two groups impacted
| differently, regardless of anything else. In law, it
| means that an employment, housing, etc. policy has a
| disproportionately adverse impact on members of a
| protected class compared to non-members of that same
| class. It's much more specific and narrowly defined.
| kelseyfrog wrote:
| If you ever intend to study law, become involved in a
| situation dealing with disparate impact, or are at the
| receiving end of disparate impact, knowing the legal
| definition may be helpful too. The DoJ spells[1] out the
| legal definition of disparate impact as so:
|
|       ELEMENTS TO ESTABLISH ADVERSE DISPARATE IMPACT UNDER
|       TITLE VI
|       - Identify the specific policy or practice at issue;
|         see Section C.3.a.
|       - Establish adversity/harm; see Section C.3.b.
|       - Establish disparity; see Section C.3.c.
|       - Establish causation; see Section C.3.d.
|
| 1. https://www.justice.gov/crt/fcs/T6Manual7#D
| TimPC wrote:
| Justifiable business reason is still a strong bar. For
| example, with no evidence in either direction for a claim,
| there is no justifiable business reason, even if the claim is
| somewhat intuitive. So if you want to require high-school
| diplomas because you think people who have them will do the
| job better, you'd better track that data for years and be
| prepared to demonstrate it if sued. If you want to use IQ
| tests because you anticipate that smarter people will do the
| job better, you'd better have IQ tests done on your previous
| employee population demonstrating the correlation before
| imposing the requirement.
| cmeacham98 wrote:
| EDIT: my parent edited and replaced their entire comment,
| it originally said "you can't use IQ tests even if you
| prove they lead to better job performance". I leave my
| original comment below for posterity:
|
| This is not true, IQ tests in the mentioned Griggs v. Duke
| Power Co. (and similar cases) were rejected as disparate
| impact specifically because the company provided no
| evidence they lead to better performance. To quote the
| majority opinion of Griggs:
|
| > On the record before us, neither the high school
| completion requirement nor the general intelligence test is
| shown to bear a demonstrable relationship to successful
| performance of the jobs for which it was used. Both were
| adopted, as the Court of Appeals noted, without meaningful
| study of their relationship to job performance ability.
| [deleted]
| tgsovlerkhgsel wrote:
| Wouldn't that be trivial if you have your training data
| set?
| TimPC wrote:
| I don't think it's adequate to merely attempt to prevent
| discrimination. Freedom from discrimination is core to our
| fundamental human rights. It's necessary to actually succeed
| at preventing discrimination.
|
| "We applied best practices in the field to limit
| discrimination" should not be an adequate legal defence if the
| model can be shown to discriminate.
|
| To clarify further, just because you tried to prevent
| discrimination doesn't mean you should be off the hook for the
| material harms of discrimination to a specific individual.
| Otherwise people don't have a right to be protected against
| discrimination; they only have a right to people 'trying' to
| prevent discrimination. We shouldn't want to weaken rights
| that much, even if it means we have to be cautious in how we
| adopt new technologies.
| Manuel_D wrote:
| > With a human, you can't really do an A/B test to determine if
| they would have prioritized a candidate if they hadn't included
| some signal; it's really easy to rationalize away
| discrimination at the margins.
|
| Not for individual candidates, no. But you can introduce a
| parallel anonymized interview process and compare the results.
| TimPC wrote:
| Actually you kind of can't. You don't have a legal basis for
| forcing the company to run that experiment.
| theptip wrote:
| The linked article gives some examples that I think are very
| useful clarifications:
|
| https://www.eeoc.gov/tips-workers-americans-disabilities-act...
|
| > The format of the employment test can screen out people with
| disabilities [for example:] A job application requires a timed
| math test using a keyboard. Angela has severe arthritis and
| cannot type quickly.
|
| > The scoring of the test can screen out people with disabilities
| [for example:] An employer uses a computer program to test
| "problem-solving ability" based on speech patterns for a
| promotion. Sasha meets the requirements for the promotion. Sasha
| stutters so their speech patterns do not match what the computer
| program expects.
|
| Interestingly, I think the second one is problematic for common
| software interview practices. If your candidate asked for an
| accommodation (say, no live rapid-fire coding) due to a
| recognized medical condition, you would be legally required to
| provide it.
|
| This request hasn't come up for me in all the (startup) hiring
| I've done, but it could be tough to honor fairly on short
| notice, so it's worth thinking about in advance.
| etchalon wrote:
| If someone presented me with a speed-timed programming
| exercise, I'd walk out the door.
| mechanical_bear wrote:
| I walk for any code monkey hoop jump exercises. Timed or not.
|
| When you apply to be a carpenter they don't make you hammer
| nails, when you apply to be an accountant they don't have you
| prepare a spreadsheet for them, etc.
|
| I don't work (even in interviews) for free.
| tgsovlerkhgsel wrote:
| I don't mind them. I expect any company worth a damn to
| want to screen out people who can't code. When I work there
| and interview other candidates, I don't want my time to be
| wasted, and I don't want to work with people who can't do
| their job.
|
| A quick coding test is something that any place where people
| should know how to code has to do; doing it through one of
| those platforms seems perfectly reasonable, and I'm happy to
| do it.
|
| Writing fizzbuzz is not "working for free" any more than
| any other form of interviews.
|
| And is the "when you apply to be a carpenter" sentence
| really true? I've heard of the interview process for
| welders being "here's a machine and two pieces of metal,
| I'll watch".
| theptip wrote:
| Any in-person coding exercise with a time-box (say, the
| standard one-hour slot) is "timed" in some sense. I don't
| think we always consider it as such, but if you can't type
| fast due to arthritis it could definitely be problematic.
| choppaface wrote:
| I once interviewed while wearing a cast, and two different YC
| start-ups gave me speed-coding problems. One even made me type
| on their laptop instead of a split keyboard I had, where I
| could actually reach all the keys. They used completion time
| as a metric even though I asked for an accommodation, and it
| was obvious as I typed in front of them that the cast was a
| major drag on me.
|
| Pretend your colleague had a cast and couldn't type for a few
| weeks. Is that person going to get put on the time-sensitive
| demo where 10k SLOC need to be written this week? Or the
| design / PoC project that needs much less SLOC but nobody
| knows if it will work? Or the process-improvement projects
| that require a bunch of data mining, analysis, and
| presentation?
|
| It's not hard to find ways to not discriminate against
| disabilities on short notice. The problem is, at least in my
| experience with these YC start-ups who did not, there's so much
| naivete combined with self-righteousness that they'd rather
| just bulldoze through candidates like every other problem they
| have.
| alar44 wrote:
| What if the job requires you to type quickly? Why would someone
| with arthritis even want a job where you have to type quickly?
| Is that really discrimination or is that the candidate simply
| not being able to perform the job?
| robonerd wrote:
| What you are describing is called a Bona Fide Occupational
| Qualification (BFOQ). The specifics of what sort of
| attributes might be covered for what jobs is something courts
| hash out, but broadly: if you're hiring workers for a
| warehouse it's fine to require workers be able to lift boxes.
| If you're hiring airline pilots, it's fine to turn away blind
| people. Etc.
| [deleted]
| MrStonedOne wrote:
| scollet wrote:
| When job requirements actually match the job, then you can
| worry about this.
| avgcorrection wrote:
| Think about what you just wrote. This is a programming job,
| not something like a transcriptionist gig. Why do you feel
| that your "what if" is appropriate?
|
| Besides, the point seems to have been about interview
| practices. You know, those practices which are often quite
| removed from the actual on-the-job tasks.
|
| What if I was disabled to the degree that I couldn't leave
| the house, but I could work remotely (an office job)? That's
| what accommodations are for.
| TimPC wrote:
| If the job actually requires typing quickly, as for a court
| recorder, then there is a basis to require typing quickly. If
| the job doesn't actually require it, as for a programmer, then
| enforcing the requirement anyway is discrimination.
| xyzzyz wrote:
| Most jobs that involve typing benefit from being able to
| type quickly.
|
| For example, I am a frequent customer of U-Haul. I learned
| to not use the branch that's closest to me, because some
| employees there are really slow with computers, which makes
| checking out equipment very slow, and frequently results in
| a long line of waiting customers. Driving 5 extra minutes
| saves me 20 minutes of waiting for employees to type in
| everything and click through the system.
|
| And this is freaking _uhaul_. If you're a software engineer,
| slow typing is also a productivity drain: a 3-minute email
| becomes a 6-minute one, a 20-minute Slack conversation becomes
| 30 minutes, etc. It all adds up.
| Kon-Peki wrote:
| > Most jobs that involve typing benefit from being able
| to type quickly.
|
| Maybe. If you type 10,000 words per minute but your
| entire module gets refactored out of the codebase next
| week, is your productivity anything higher than 0?
|
| Multiple times in my career, months or even years worth
| of my team's work was tossed in the trash because some
| middle manager decided to change directions. A friend of
| mine is about ready to quit at AMZN because the product
| he was supposed to launch last year keeps getting delayed
| so they can rewrite pieces of it. Maybe some people
| should have thought more and typed less.
| xyzzyz wrote:
| > Maybe. If you type 10,000 words per minute but your
| entire module gets refactored out of the codebase next
| week, is your productivity anything higher than 0?
|
| If you spent less time typing that module that later went
| to trash, you are, in aggregate, more productive than
| someone who spent more time typing the same module.
|
| This sort of argument only makes sense if you assume that
| there is some sort of correlation, where people who are
| slower at typing are more likely to make better design or
| business decisions, all else being equal. I certainly
| have no reason to believe it to be true. Remember we are
| talking about the issue in context of someone who is slow
| at typing because of arthritis. Does arthritis make
| people better at software design, or communication? I
| don't think so.
| TimPC wrote:
| Small productivity drains on minority portions of the
| task are not a requirement of doing the job. Software
| developers generally spend more time thinking than
| typing. Typing is not the bottleneck of the job (at least
| for the vast majority of roles).
| xyzzyz wrote:
| Sure, of course typing is not the biggest bottleneck in
| software engineer job. That doesn't mean it's irrelevant
| for productivity.
|
| Consider another example: police officers need to do a
| lot of typing to create reports. A fast typing officer
| can spend less time writing up reports, and more time
| responding to calls. That makes him more productive, all
| else being equal. Of course it would be silly to consider
| typing speed as a sole qualification for a job of police
| officer (or, for that matter, a software engineer), but
| it is in no way unreasonable to take it into account when
| hiring.
| wbl wrote:
| Dragon Naturally Speaking is the definition of a reasonable
| accommodation. Maybe not a court transcriptionist but almost
| all jobs with typing would be fine with it.
| TimPC wrote:
| I think the general problem is that the law says certain
| correlations are fair to use and others are not. If you can
| prove the AI model has no way to separate out which is which,
| you have a fairly sizeable amount of evidence that the AI is
| discriminating. Likely enough evidence for a civil case.
|
| Usually showing that input data is biased in some way or
| contains a potentially bad field will result in winning a
| discrimination case.
|
| If neither side can conclusively prove what the model is
| doing, but the plaintiff shows it was trained on data that
| allows for discrimination and the model is designed to learn
| patterns in its training data, then the defendant is on the
| hook for showing the model is unbiased. For the most part,
| people design input data uncritically, and some of the fields
| allow for discrimination.
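|
| One way a plaintiff's expert might make that showing is a proxy
| probe: if the protected attribute can be predicted from the
| remaining fields, proxies exist whether or not the hiring model
| uses them. A sketch on synthetic data with hypothetical field
| names:
|       import numpy as np
|       from sklearn.linear_model import LogisticRegression
|       from sklearn.metrics import roc_auc_score
|
|       rng = np.random.default_rng(0)
|       n = 5_000
|       disabled = rng.random(n) < 0.1
|       # Fields the hiring model actually trains on:
|       gap_years = np.where(disabled,
|                            rng.normal(2.0, 0.7, n),
|                            rng.normal(0.5, 0.7, n))
|       years_exp = rng.normal(8, 3, n)
|       X = np.column_stack([gap_years, years_exp])
|
|       probe = LogisticRegression().fit(X, disabled)
|       auc = roc_auc_score(disabled, probe.predict_proba(X)[:, 1])
|       print(f"protected attribute recoverable, AUC = {auc:.2f}")
|       # AUC near 0.5 would mean no proxy signal; here it comes
|       # out well above 0.5.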
| supergeek133 wrote:
| This sort of reminds me of the story about an HR algorithm that
| ended up being discriminatory because it was trained on
| existing/past hiring data... so it was biased toward white men.
|
| Was it Amazon?
|
| Anyway, this feels different to me. IIRC you can't ask
| disability-related questions in hiring aside from the "self
| identify" types at the end? So how would an ML model find
| applicants with any kind of disability unless it was freely
| volunteered in a resume/CV?
|
| Or is that the advisory? "Don't do this?"
| [deleted]
| burkaman wrote:
| https://beta.ada.gov/ai-guidance/
|
| > For example, some hiring technologies try to predict who will
| be a good employee by comparing applicants to current
| successful employees. Because people with disabilities have
| historically been excluded from many jobs and may not be a part
| of the employer's current staff, this may result in
| discrimination.
|
| > For example, if a county government uses facial and voice
| analysis technologies to evaluate applicants' skills and
| abilities, people with disabilities like autism or speech
| impairments may be screened out, even if they are qualified for
| the job.
|
| > For example, an applicant to a school district with a vision
| impairment may get passed over for a staff assistant job
| because they do poorly on a computer-based test that requires
| them to see, even though that applicant is able to do the job.
|
| > For example, if a city government uses an online interview
| program that does not work with a blind applicant's computer
| screen-reader program, the government must provide a reasonable
| accommodation for the interview, such as an accessible version
| of the program, unless it would create an undue hardship for
| the city government.
| light_hue_1 wrote:
| > So how would an ML model find applicants with any kind of
| disability unless it was freely volunteered in a resume/CV?
|
| In machine learning this happens all the time! Stopping models
| from learning this from the most surprising sources is an
| active area of research. Models are far more creative in
| finding these patterns than we are.
|
| It can learn that people with disabilities tend to also work
| with accessibility teams. It can learn that you're more likely
| to have a disability if you went to certain schools (like a
| school for the blind, even if you and I wouldn't recognize the
| name). Or if you work at certain companies or colleges who
| specialize in this. Or if you publish an article and put it on
| your CV. Or if you link to your github and the software looks
| there as well for some keywords. Or if among the keywords and
| skills that you have you list something that is more likely to
| be related to accessibility. I'm sure these days software also
| looks at your LinkedIn; if you are connected with people who
| are disability advocates, you are far more likely to have a
| disability.
|
| > Or is that the advisory? "Don't do this?"
|
| Not so easy. Algorithms learn this information internally and
| then use it in subtle ways. Like they might decide someone
| isn't a good fit and that decision may in part be correlated
| with disability. Disability need not exist anywhere in the
| system, but the system has still learned to discriminate
| against disabled people.
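|
| A toy demonstration of that failure mode (synthetic data,
| hypothetical features): "disabled" never enters the feature set,
| yet a model trained on biased historical decisions learns to
| penalize a correlated keyword:
|       import numpy as np
|       from sklearn.linear_model import LogisticRegression
|
|       rng = np.random.default_rng(1)
|       n = 20_000
|       disabled = rng.random(n) < 0.1
|       # Resume mentions accessibility work: correlated with
|       # disability, though many other people mention it too.
|       a11y = rng.random(n) < np.where(disabled, 0.6, 0.05)
|       skill = rng.normal(0, 1, n)
|
|       # Historical decisions: humans penalized disability
|       # directly.
|       hired = skill - 1.0 * disabled + rng.normal(0, 0.5, n) > 0
|
|       X = np.column_stack([skill, a11y])
|       model = LogisticRegression().fit(X, hired)
|       print("weight on a11y keyword:", model.coef_[0][1])
|       # Negative: the bias was reconstructed from a proxy the
|       # model was never told is sensitive.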
| padolsey wrote:
| _how would an ML model find applicants with any kind of
| disability unless it was freely volunteered in a resume/CV?_
|
| A few off the top of my head:
|
| (1) Signals gained from ways that a CV is formatted or written
| (e.g. indicating dyslexia or other neurological variances,
| especially those comorbid with other physiological
| disabilities)
|
| (2) If a CV reports short tenure at companies with long breaks
| in between (e.g. chronic illnesses or flare-ups leading to
| burnout or medical leave)
|
| (3) There are probably many unintuitive correlates with respect
| to interests, roles acquired, and skillsets. Consider which
| experiences, institutions, skillsets and roles are more or less
| accessible to disabled folk than to others.
|
| (4) Most importantly: Disability is associated with lower
| education and lower economic opportunity, therefore supposed
| markers of success ("merit") in CVs may only reflect existing
| societal inequities. *
|
| * _This is one of the reasons meritocratic "blind" hiring
| processes are not as equitable as they might seem; they can
| reflect + re-entrench the current inequitable distribution of
| "merit"._
| b65e8bee43c2ed0 wrote:
| >* This is one of the reasons meritocratic "blind" hiring
| processes are not as equitable as they might seem; they can
| reflect + re-entrench the current inequitable distribution of
| "merit".
|
| They are not meant to be "equitable". They're meant to
| provide equality of opportunity, not equality of outcome.
| padolsey wrote:
| Oh agreed! Sorry about the mixed terminology. Though they
| don't really provide "equality of opportunity" either :/
| People with more privilege at the starting line will have more
| supposed 'merit', and therefore CV-blindness only reflects
| existing inequalities from wider society. A different approach
| might be quotas and affirmative action.
| TimPC wrote:
| I think the poster is arguing that the things we call merit
| reflect the ability to do the job well. Any system of hiring
| has to consider the ability to hire the best person for the
| job. Quotas are an open admission that we can no longer do
| this. Affirmative action is trickier, as some affirmative
| action can be useful in correcting bias and can actually
| improve hiring; too much once again steers us away from the
| best person for the job.
|
| This is important and tricky: if we have across-the-board
| decreases in hiring the best person for the job, we end up
| with a less productive economy. This means our hiring
| practices directly compete against other aims like solving
| poverty.
| MontyCarloHall wrote:
| >If a CV reports short tenure at companies with long breaks
| in between (e.g. chronic illnesses or flare-ups leading to
| burnout or medical leave)
|
| This is a case where it may benefit a candidate to disclose
| any disabilities behind such an erratic employment pattern. I
| don't proceed with candidates who cannot explain excessively
| frequent job hops, because it signals that they can't hold a
| job due to factors I'd want to avoid hiring for, like
| incompetence or a difficult personality. It's a totally
| different matter if the candidate can attribute their erratic
| employment to past medical issues that have since been
| treated.
| padolsey wrote:
| >medical issues that have since been treated
|
| And what if they haven't been? Disability isn't usually a
| temporary thing, or even necessarily medical in nature (it's
| crucial to see disability as a distinct axis from illness!).
| Hiring with biases against career fluctuations is, I'm afraid
| to point out, inherently ableist. And it should not be
| incumbent on the individual to explain their experienced
| inequities and difficulties to every single employer.
| burkaman wrote:
| I think the point of this guidance is that "hiring AI" is
| not actually intelligent and will not be able to read and
| understand a note about disability on a resume. It will
| just dumbly match date ranges to an ideal profile and throw
| out resumes that are too far off.
| david-cako wrote:
| for instance, I find mass surveillance intolerable and it makes
| me completely uninterested in my work.
| bombcar wrote:
| Even the very requirement to "apply online" has been quite
| effective at making it very difficult for a sub-section of the
| _working_ population to succeed at applying.
|
| There are many (and I know quite a few) people who are quite
| capable at their jobs and entirely computer-ineffective. As
| they're forced more and more to deal with confusing two-factor
| requirements and other computer-related things that we're just
| "used" to, they get discouraged and give up.
|
| For now you can often help them fill it out, but at some point
| that's going to be unwieldy or insufficient.
| frankfrankfrank wrote:
| It is a bit of an aside, but I find it interesting that 1)
| this issue is approaching a nexus of computer-based efficiency
| and human "adjustments" (I will just call them), like the ADA,
| that are intentionally and even deliberately inefficient; and
| 2) the efficiency- and centralization-based sector of computer
| "sciences"/development is so replete with extremely contrary
| types who demand all manner of exceptions, exemptions, and
| special pleadings.
|
| I find it all very interestingly paradoxical, regardless of
| everything else.
| jaqalopes wrote:
| The current headline is a bit misleading: as the very first
| paragraph makes clear, the article is about AI _hiring_ tools
| causing potential discrimination. It has nothing to do with AI
| workers somehow replacing disabled humans, which is what it
| sounds like.
| iso1631 wrote:
| The first paragraph is exactly what I expected from the
| headline, ever since the amazon AI gender discrimination story
| a few years back.
|
| https://www.theguardian.com/technology/2018/oct/10/amazon-hi...
| [deleted]
| adolph wrote:
| I wonder if this government guidance focuses on imperfections in
| products that on the whole may be a significant improvement over
| biases in traditional human screening.
| tj_h wrote:
| "Using AI tools for hiring"...this is when i like to remind
| myself that google, basically at some point in the last 12-24
| months was like "OH CRAP, we forgot to tell our robots about
| black people!". Like, I'm not saying google is at the forefront
| of ML - maybe it is, but it sure as hell is out in front
| somewhere and more to the point most companies are likely _not_
| gonna be using cutting edge technology for this stuff. EVEN
| GOOGLE, admitted their ML for images is poorly optimized for POC
| 's, i hate to think what some random ML algorithm used by company
| X thinks about differently abled peoples
| temp8964 wrote:
| This is an unreasonable generalization. Dark skin does have a
| direct impact on image processing. Nothing like this exists in
| hiring.
| lmkg wrote:
| There are companies selling products which screen hiring
| candidates based on video of them talking. Ostensibly for
| determining personality traits or whatever. So yes, this
| literally exists in hiring.
| htrp wrote:
| Mate, you ever hear about Amazon's hiring AI?
| smiley1437 wrote:
| While image processing of dark skin may not be germane to AIs
| doing hiring, the idea that unintentional discrimination from
| ML models could occur in the context of hiring is certainly
| worth considering, and I believe it's the entire point of the
| technical assistance document released today.
| httpsterio wrote:
| Racial bias does exist in ML-based image recognition tools;
| there's a plethora of evidence to show that.
|
| https://www.theverge.com/2019/1/25/18197137/amazon-
| rekogniti...
|
| https://www.google.com/amp/s/www.wired.com/story/best-
| algori...
|
| https://algorithmwatch.org/en/google-vision-racism/
|
| https://time.com/5520558/artificial-intelligence-racial-
| gend...
| umvi wrote:
| Is it actually possible to hire someone without some level of
| discrimination involved? Seems like this ideal world where
| candidates are hired purely on technical ability or merits
| without regard to any other aspects of their life is impossible.
|
| For example, if I were hiring a programmer, and the programmer
| was technically competent but spoke with such a thick accent that
| I couldn't understand them very well, I'd be tempted to pass on
| that candidate even though they meet all the job requirements.
| And if it happened every time I interviewed someone from that
| particular region, I'd probably develop a bias against similar
| future candidates.
| Broken_Hippo wrote:
| Probably not, simply because we are human, but we can minimize
| some of it.
|
| You wouldn't screen out a person who cannot speak, or who
| cannot speak clearly due to a disability of some sort. You'd
| use a different method of communication, as would everyone
| else, and it could really be the same for them.
|
| On the other hand, if communication was clearly impossible
| and/or they needed to be understood by the public (customers),
| the accent may very well mean they cannot do the job, and that
| isn't in the scope of things you can teach someone, the way
| you can teach expectations about customer service.
| throwaway09223 wrote:
| No, it's not possible. Humans have all kinds of inherent
| biases.
|
| The big difference is we can prove the bias in an AI. It's a
| very interesting curveball when it comes to demonstrating
| liability in the decision-making process.
| sneak wrote:
| http://www.tnellen.com/cybereng/harrison.html
| charcircuit wrote:
| How would this even happen? Why would people put their
| disabilities on their resume?
| michaelt wrote:
| Skills: Fluent in American Sign Language.
|
| High School: Florida School for the Deaf and the Blind.
|
| Other Experience: President of Yale Disability Awareness Club
| (2009-2011).
| emiliobumachar wrote:
| E.g. if you knew one or two sign languages, wouldn't you list
| them under languages? What if the job involves in-person
| communication with masses of people?
| Miner49er wrote:
| Many (most?) employers ask if you are disabled when filling out
| a job application. I personally don't consider myself disabled,
| but I have one of the conditions that is listed as a disability
| in this question. I never know what to put. I thought it
| wouldn't matter if I just said, yes, that I'm disabled, since I
| literally have one of the conditions listed, but people online
| who work in hiring say I will most likely be discriminated
| against if I do that. Sure, it's illegal, but companies do that
| anyway, apparently.
|
| I wonder if the answer to the disability question is something
| the AI uses when evaluating candidates, and if it has learned
| to just toss out anyone who says yes?
| hansvm wrote:
| The AI learns proxy signals. Name, work experience, skills
| (e.g., an emphasis on A11Y) ... all have some predictive power
| for gender, for some sorts of disabilities, ....
|
| You can fix the problem by going nuclear and omitting any sort
| of data that could serve as a proxy for the discriminatory
| signals, but it's also possible to explicitly feed the
| discriminatory signals into the model and enforce that no
| combination of other data amounting to knowledge about them can
| influence the model's predictions.
|
| There was a great paper floating around for a bit about how you
| could actually manage that as a data augmentation step for
| broad classes of models (constructing a new data set which
| removed implicit biases assuming certain mild constraints on
| the model being trained on it). I'm having a bit of trouble
| finding the original while on mobile, but they described the
| problem as equivalent to "database reconstruction" in case that
| helps narrow down your search.
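|
| A much-simplified, purely linear sketch of that second idea:
| feed the protected signal in explicitly, then strip everything
| correlated with it out of the features. Real methods also have
| to handle nonlinear leakage; this only removes linear proxy
| information:
|       import numpy as np
|
|       rng = np.random.default_rng(2)
|       n = 10_000
|       protected = (rng.random(n) < 0.1).astype(float)
|       X = np.column_stack([
|           rng.normal(0, 1, n) + 1.5 * protected,  # proxy-laden
|           rng.normal(0, 1, n),                    # clean
|       ])
|
|       # Regress each feature on the protected attribute and
|       # train only on the residuals.
|       A = np.column_stack([np.ones(n), protected])
|       coef, *_ = np.linalg.lstsq(A, X, rcond=None)
|       X_debiased = X - A @ coef
|
|       for name, data in (("before", X), ("after", X_debiased)):
|           corr = np.corrcoef(protected, data[:, 0])[0, 1]
|           print(name, f"corr with protected = {corr:+.3f}")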
| zw123456 wrote:
| Oh, thank you, this was the question floating in my head as
| well, this explains it perfectly.
| treeman79 wrote:
| Because you can't always hide it. Nothing like giving a
| presentation on a whiteboard when all of a sudden your writing
| turns to gibberish.
|
| People have a limited tolerance. Then they start telling you to
| take care of yourself and strongly encouraging you to leave.
|
| It's why I switched to remote work before the pandemic. If a
| bad episode starts up I can cover for it much more easily.
| charcircuit wrote:
| Sorry, but that has nothing to do with AI resume review.
| alicesreflexion wrote:
| This doesn't matter until someone tries suing them for it, right?
|
| And as I understand it, you don't really have a case without
| evidence that the hiring algorithm is discriminating against
| people with disabilities.
|
| How would an individual even begin to gather that evidence?
| mkr-hn wrote:
| This is what that demographic survey at the end of job
| applications is for. It can reveal changes in hiring trends,
| especially in the demographics of who doesn't get hired. I
| don't know how well it works in practice.
| hallway_monitor wrote:
| I am a person, not a statistic. I always decline to answer
| these surveys; I encourage others to do the same.
| mkr-hn wrote:
| Those are for persuading people who _do_ see you as a
| statistic. You can unilaterally disarm if you like, but
| they're going to keep discriminating until they see data
| that proves they're discriminating. Far too few people are
| persuaded by other means.
| ccooffee wrote:
| I also do this. But given the context of this post ("AI"
| models filtering resumes prior to ever getting in front of
| a human), maybe "decline to answer" comes with a hidden
| negative score adjustment that can't be (legally)
| challenged.
|
| I think the Americans with Disabilities Act (ADA) requires
| notification. (i.e. I need to talk to HR/boss/whoever about
| any limitations and reasonable accommodations.) If I am
| correct, not-answering the question "Do you require
| accommodations according to the ADA? []yes []no []prefer
| not to answer" can legally come with a penalty, and the
| linked DoJ reasoning wouldn't stop it.
| frumper wrote:
| "Employers should have a process in place to provide
| reasonable accommodations when using algorithmic
| decision-making tools;"
|
| "Without proper safeguards, workers with disabilities may
| be "screened out" from consideration in a job or
| promotion even if they can do the job with or without a
| reasonable accommodation; and"
|
| "If the use of AI or algorithms results in applicants or
| employees having to provide information about
| disabilities or medical conditions, it may result in
| prohibited disability-related inquiries or medical
| exams."
|
| This makes it sound like the employer needs to ensure
| their AI allows for reasonable accommodations. And if an
| AI can assume reasonable accommodations will be made, what
| benefit would there ever be in assuming the employer won't
| supply the accommodations they are legally required to
| provide?
| skrbjc wrote:
| I'm trying to, but my employer has said they will use
| "observer-identified" info to fill it in for me. I find it
| ridiculous that I can't object to having someone guess my
| race and report it to the government.
| mkr-hn wrote:
| That sounds broken. It's supposed to be voluntary.
|
| PDF: https://www.eeoc.gov/sites/default/files/migrated_fi
| les/fede...
|
| >> _" Completion of this form is voluntary. No individual
| personnel selections are made based on this information.
| There will be no impact on your application if you choose
| not to answer any of these questions"_
|
| Your employer shouldn't even be able to know whether or
| not you filled it out.
| skrbjc wrote:
| My experience is with the reporting on current employees,
| which I guess is not voluntary. It's not very clear
| though:
|
| "Self-identification is the preferred method of
| identifying race/ethnicity information necessary for the
| EEO-1 Component 1 Report. Employers are required to
| attempt to allow employees to use self-identification to
| complete the EEO-1 Component 1 Report. However, if
| employees decline to self-identify their race/ethnicity,
| employment records or observer identification may be
| used. Where records are maintained, it is recommended
| that they be kept separately from the employee's basic
| personnel file or other records available to those
| responsible for personnel decisions."
|
| From: https://eeocdata.org/pdfs/201%20How%20to%20get%20Re
| ady%20to%...
| [deleted]
| marian_ivanco wrote:
| I am not sure, but if I remember correctly the employer must
| prove they are not discriminating. And just because they are
| using AI, they are not immune to litigation.
| rascul wrote:
| > I am not sure, but if I remember correctly employer must
| prove they are not discriminating.
|
| That seems backwards, at least in the US.
| danarmak wrote:
| How can the employer prove a negative?
|
| At most I imagine the plaintiff is allowed to do discovery,
| and then has to prove positive discrimination based on that.
| HWR_14 wrote:
| If it's a civil case, it's just the preponderance of the
| evidence. The jury just has to decide who they think is
| more likely to be correct.
| vajrabum wrote:
| If you read the document again (?) maybe you'll see it's
| not about proving a negative. Instead, it's a standard of
| due care: did you check whether using some particular tool
| illegally discriminates, and did you document that
| consideration? From the document itself:
|
| "Clarifies that, when designing or choosing technological
| tools, employers must consider how their tools could impact
| different disabilities;
|
| Explains employers' obligations under the ADA when using
| algorithmic decision-making tools, including when an
| employer must provide a reasonable accommodation;"
| radu_floricica wrote:
| dsr_ wrote:
| The process of gathering evidence after the suit has started is
| called discovery.
|
| There are three major kinds of evidence that would be useful
| here. Most useful but least likely: email inside the company in
| which someone says "make sure that this doesn't select too many
| people with disabilities" or "it's fine that the system isn't
| selecting people with disabilities, carry on".
|
| Useful and very likely: prima facie evidence that the software
| doesn't make necessary reasonable accommodations - a video
| captcha without an audio alternative, things like that.
|
| Fairly useful and of moderate likelihood: statistical evidence
| that whatever the company said or did, it has the effect of
| unfairly rejecting applicants with disabilities.
| pfdietz wrote:
| And one could go a step further: run the software itself and
| show that it discriminates. One doesn't just have to look at
| past performance of the software; it can be fed inputs
| tailored to bring out discriminatory performance. In this way
| software is more dangerous to the defendant than manual
| hiring practices; you can't do the same thing to an employee
| making hiring decisions.
| twofornone wrote:
| How would you make sure that the supplied version has the
| same weights as the production version? And wouldn't the
| weights and architecture be refined over time anyway?
| emiliobumachar wrote:
| Perjury laws. Once a judge has commanded you to give the
| same AI, you either give the same AI, or truthfully
| explain that you can't. Any deviation from that and
| everyone complicit is risking jail time, not just money.
|
| "this is the June 2020 version, this is the current
| version, we have no back ups in between" is acceptable if
| true. Destroying or omitting an existing version is not.
| bluGill wrote:
| Note that not having backups is something that you can sue
| the company for as an investor. If you say "we have the
| June 2020 version, but not the July one you asked for",
| you are fine (it is reasonable to save daily backups for a
| month, monthly backups for a year, and then yearly
| backups). Though even then I might be able to sue you for
| not having version control of the code.
| emiliobumachar wrote:
| True, but if you _really_ never had it, that's money,
| not jail time.
| shadowgovt wrote:
| If a non-hired employee brings a criminal action, this
| may matter.
|
| For a civil action, the burden of proof is "preponderance
| of evidence," which is a much lower standard than "beyond
| a reasonable doubt." "Maybe the weights are different
| now" is a reasonable doubt, but in a civil case the
| plaintiff could respond "Can the defendant prove the
| weights are different? For that matter, can the defendant
| even explain to this court _how_ this machine works? How
| can the _defendant_ know this machine doesn't just dress
| up discrimination with numbers?" And then it's a bad day
| for the defendant to the tune of a pile of money if they
| don't understand the machine they use.
| dogleash wrote:
| > How would you make sure that the supplied version has
| the same weights as the production version?
|
| You just run the same software (with the same state
| database, if applicable).
|
| Oh wait, I forgot, nobody knows or cares what software
| they're running. As long as the website is pretty and we
| can outsource the sysop burden, well then, who needs
| representative testing or the ability to audit?
| scollet wrote:
| Don't most production NNs or DLNs optimize to a maximum?
|
| Seems like the behavior becomes predictable, and then you
| have to retrain if you see suboptimal results.
| jejones3141 wrote:
| These days, disparate impact is taken as evidence of
| discrimination, so it's easy to find "discrimination".
| mkr-hn wrote:
| What's the difference? Discrimination is an effect more than
| an intent. Most people are decent and well-intentioned and
| don't mean to discriminate, but it still happens. If there's
| a disparate impact, what do you imagine causes that if not
| discrimination? Remembering that we all have implicit bias
| and it doesn't make you a mustache-twirling villain.
| twofornone wrote:
| >If there's a disparate impact, what do you imagine causes
| that if not discrimination?
|
| 20+ years of environmental differences, especially culture?
| The disabilities themselves? Genes? Nothing about human
| nature suggests that all demographics are equally competent
| in all fields, regardless of whether you group people by
| race, gender, political preferences, geography, religion,
| etc. To believe otherwise is fundamentally unscientific,
| though it's socially unacceptable to acknowledge this
| truth.
|
| >Remembering that we all have implicit bias
|
| This doesn't tell you anything about the _direction_ of
| this bias, but the zeitgeist is such that it is nearly
| always assumed to go in one direction, and that's deeply
| problematic. It's an overcorrection that looks an awful lot
| like institutional discrimination.
|
| >Remembering that we all have implicit bias and it doesn't
| make you a mustache-twirling villain.
|
| Except that if you push back against unilateral accusations
| of bias while belonging to one, and only one, specific
| demographic, you effectively are treated like a
| mustache-twirling villain. No one is openly complaining
| about "too much diversity" and keeping their job at the
| moment. That's bias.
| etchalon wrote:
| There is no scientific literature which confirms that any
| specific demographic quality determines an individual's
| capability at any job or task.
|
| What does exist, at best, shows mild correlation over
| large populations, but nothing binary or deterministic at
| an individual level.
|
| To wit, even if your demographic group, on average, is
| slightly more or less successful on a specific metric,
| there is no scientific basis for individualized
| discrimination.
|
| It's not "socially unacceptable to acknowledge this
| truth"; it's socially unacceptable to pretend
| discrimination is justified.
| twofornone wrote:
| >There is no scientific literature which confirms that
| any specific demographic quality determines an
| individual's capability at any job or task
|
| There absolutely is a mountain of research which
| unambiguously implies that different demographics are
| better or worse suited for certain industries. A trivial
| example would be average female vs male performance in
| physically demanding roles.
|
| Now what is indeed missing is the research which takes
| the mountain of data and actually dares to draw these
| conclusions. Because the subject has been taboo for some
| 30-60 years.
|
| >To wit, even if your demographic group, on average, is
| slightly more or less successful on a specific metric,
| there is no scientific basis for individualized
| discrimination
|
| We are not discussing individual discrimination, I am
| explaining to you that statistically significant
| differences in demographic representation are extremely
| weak evidence for discrimination. Or are you trying to
| suggest that the NFL, NBA, etc are discriminating against
| non-blacks?
|
| >It's "not socially unacceptable to acknowledge this
| truth", it's socially unacceptable to pretend
| discrimination is justified
|
| See above, and I'm not sure if you're being dishonest by
| insinuating that I'm trying to justify discrimination or
| if you genuinely missed my point. Because that's how
| deeply rooted this completely unscientific blank slate
| bias is in western society.
|
| Genes and culture influence behavior, choices, and
| outcomes. Pretending otherwise and forcing corrective
| discrimination for your pet minority is anti-meritocratic
| and is damaging our institutions. Evidenced by the
| insistence by politicized scientists that these
| differences are minor.
|
| A single standard deviation difference in mean IQ between
| two demographics would neatly and obviously explain "lack
| of representation" among high paying white collar jobs; I
| just can't write a paper about it if I'm a professional
| researcher or I'll get the James Watson treatment for
| effectively stating that 2+2=4. This isn't science, our
| institutions have been thoroughly corrupted by such
| ideological dogma.
| etchalon wrote:
| Please link any study which shows a deterministic
| property and not broad averages.
| twofornone wrote:
| Broad averages of what? Difference in muscle
| characteristics and bone structure between males and
| females? Multiple consistent studies showing wide
| variance in average IQ among various demographics? The
| strong correlation between IQ and all manner of life
| outcomes, including technical achievements?
|
| Or are you asking me to find a study which shows which
| specific cultural differences make large swaths of people
| more likely to, say, pursue sports and music versus
| academic achievement? Or invest in their children?
|
| Again, the evidence is ubiquitous, overwhelming, and
| unambiguous. Synthesizing it into a paper would get a
| researcher fired in the current climate, if they could
| even find funding or a willing publisher; not because it
| would be factually incorrect, but because the politicized
| academic culture would find a title like "The Influence
| of Ghetto Black Cultural Norms on Professional
| Achievement" unpalatable if the paper didn't bend over
| backwards to blame "socioeconomic factors". Which is
| ironic because culture is the socio in socioeconomics,
| yet I would actually challenge YOU to find a single
| modern paper which examines negative cultural adaptations
| in any nonwhite first world group.
|
| Further, my argument has been dishonestly framed (as is
| typical) as a false dichotomy, I'm not arguing that
| discrimination doesn't exist, but the opposition is
| viciously _insisting_ , that all differences among groups
| are too minor to make a difference in a meritocracy, and
| anyone who questions otherwise is a bigot.
| etchalon wrote:
| I did not call you a bigot. I never made any assumptions
| or aspersions as to your personal beliefs.
|
| I am pointing out that, despite your claim that your
| viewpoint is rooted in science, you have no scientific
| basis for your belief beyond your own synthesis of facts
| which you consider "ubiquitous, overwhelming, and
| unambiguous".
|
| You have a belief unsupported by scientific literature.
| If you want to claim that the reason it is unsupported is
| because of a vast cultural conspiracy against the type of
| research which would prove your point, you're free to do
| so.
| twofornone wrote:
| >You have a belief unsupported by scientific literature
|
| I have repeatedly explained to you that the belief is
| indeed supported by a wealth of indirect scientific
| literature.
|
| >You have a belief unsupported by scientific literature.
| If you want to claim that the reason it is unsupported is
| because of a vast cultural conspiracy against the type of
| research which would prove your point, you're free to do
| so.
|
| Calling it a conspiracy theory is a dishonest deflection.
| It is not a conspiracy, it is a deeply rooted
| institutional bias. But I can play this game too: can you
| show me research which rigorously proves that genes and
| culture have negligible influence on social outcomes?
| Surely if this is such settled science, it will be easy
| to justify, right?
|
| Except I bet you won't find any papers examining the
| genetic and/or cultural influences on professional
| success in various industries. It's like selective
| reporting, lying through omission with selective research
| instead.
|
| But you will easily find a wealth of unfalsifiable and
| irreproducible grievance studies papers which completely
| sidestep genes and culture while dredging for their
| predetermined conclusions regarding the existence of
| discrimination. And because the socioeconomic factors of
| genes and culture are a forbidden topic, you end up with
| the preposterous implication that all discrepancies in
| representation must be the result of discrimination, as
| in the post that spawned this thread.
| [deleted]
| tomjen3 wrote:
| Quite apart from the fact that implicit bias doesn't
| replicate, if you have 80% male developers it is not
| because you are discriminating against women; it is
| because the pool you hire from is mostly men.
|
| If you refuse to hire a woman because she is a woman, you
| are discriminating. Fortunately that is historically rare
| today.
| MontyCarloHall wrote:
| >If there's a disparate impact, what do you imagine causes
| that if not discrimination?
|
| Disparate impact is often caused by discrimination upstream
| in the pipeline, not discrimination on the part of the
| hiring manager. Suppose that due to systematic
| discrimination, demographic X is much more likely than
| demographic Y to grow up malnourished in a house filled
| with lead paint. The corresponding cognitive decline
| amongst X people would mean they are less likely than Y
| people to succeed in (or even attend) elementary school,
| high school, college, and thus the workplace.
|
| A far smaller fraction of X people will therefore
| ultimately be qualified for a job than Y people. This isn't
| due to any discrimination on the part of the hiring
| manager.
| shadowgovt wrote:
| The reason these two collide so often in American law is
| that the two historically overlap.
|
| When a generation of Americans force all the people of
| one race to live in "the bad part of town" and refuse to
| do business with them in any other context, that's
| obviously discrimination. If a generation later, a bank
| looks at its numbers and decides borrowers from a
| particular zip code are higher risk (because historically
| their businesses were hit with periodic boycotts by the
| people who penned them in there, or big-money business
| simply refused to trade with them because they were the
| wrong skin color), draws a big red circle around their
| neighborhood on a map, and writes "Add 2 points to the
| cost" on that map... Discrimination or disparate impact?
| Those borrowers really _are_ riskier according to the
| bank's numbers. But red-lining is illegal, and if 80% of
| that zip code is also Hispanic... Uh oh. Now the bank has
| to prove they don't just refuse Hispanic business.
|
| And the problem with relying on ML to make these
| decisions is that ML is a correlation engine, not a human
| being with an understanding of nuance and historical
| context. If it finds that correlation organically (but
| lacks the context that, for example, maybe people in that
| neighborhood repay loans less often because their
| businesses fold because the other races in the
| neighborhood boycott those businesses for being "not our
| kind of people") and starts implementing de-facto red-
| lining, courts aren't going to be sympathetic to the
| argument "But the machine told us to discriminate!"
| annoyingnoob wrote:
| I'll bet the same AI used in hiring decisions could also be
| biased against older workers.
| timcavel wrote:
___________________________________________________________________
(page generated 2022-05-13 23:01 UTC)