[HN Gopher] Minimum wage 'ghosts' keep AI arms race from becomin...
___________________________________________________________________
Minimum wage 'ghosts' keep AI arms race from becoming a nightmare
Author : miles
Score : 56 points
Date : 2023-02-18 18:09 UTC (4 hours ago)
(HTM) web link (www.latimes.com)
(TXT) w3m dump (www.latimes.com)
| catiopatio wrote:
| Why is this work worth more money than Google is currently
| paying?
|
| This is not a job that requires rare or difficult to acquire
| skills, nor is there a shortage of people willing to do the work.
|
| The main subject -- who appears to work from home and set his own
| hours -- complains that his hourly wage is $2 less than his
| daughter working in fast food.
|
| I'd argue she has a much more demanding job.
| febeling wrote:
| Interesting to learn who the single source of truth about
| propaganda classification is. This uneducated "rater" on minimum
| wage, training a search engine, is probably maximally dependent on
| the job. What could go wrong?
| [deleted]
| hourago wrote:
| > "We make some of the lowest wages in the U.S.," Stackhouse, who
| has been a rater for nearly a decade, says. "I personally make $3
| less per hour than my daughter working in fast food."
|
| > Stackhouse has a serious heart condition requiring medical
| management, but his employer, Appen -- whose sole client is
| Google -- caps his hours at 26 per week, keeping him part-time,
| and ineligible for benefits.
|
| What a cruel system.
| MonkeyMalarky wrote:
| Unfortunately the cruel system will continue to persist as long
| as people refuse to acknowledge its existence.
| catach wrote:
| After acknowledgement come the rationalization layers.
| Necessary but not sufficient.
| krapp wrote:
| Not only do people acknowledge its existence, they believe
| its cruelty is just and fair.
| [deleted]
| tbrownaw wrote:
| > _caps his hours at 26 per week, keeping him part-time, and
| ineligible for benefits._
|
| Benefits should not be yes/no at some hours-per-week cutoff.
|
| Or really, they shouldn't be a thing. Fix any tax bs that makes
| it cheaper for employers to pay for them rather than employees
| and remove any rules that say employers have to provide them.
| ... err, I guess time off might count as a benefit, so make
| sure any rules scale linearly all the way to zero.
|
| How much an employee costs shouldn't have sudden jumps at
| particular numbers of hours. Employees shouldn't depend on
| their employers for anything more than a (fungible, because
| it's just money) paycheck.
| kevviiinn wrote:
| Almost like there should be a system of healthcare not tied
| to employment or income
| crooked-v wrote:
| If we can ever get Medicare For All in place in the US, the
| biggest benefits anchors will evaporate all at once.
| throw009 wrote:
| If you don't like a job find one that pays better. I was
| working something similar at university and rather enjoyed
| being paid to read the news and summarise them. I didn't want
| it to be a career and neither did the company I worked for.
| flangola7 wrote:
| Ah yes, the mythical land of jobbies. Unfortunately we need
| solutions that work in real life and not an Ayn Rand fantasy.
| DoctorOW wrote:
| ...and what happens when you can't find one? The narrative
| that there are all these jobs being turned down is false. The
| jobs that would hire someone like Stackhouse are all pretty
| much the same as the one he has. This wouldn't matter so much
| if it weren't a matter of life and death, but they won't even
| let someone survive anymore.
| throw009 wrote:
| Why don't we all go to your house and you can start paying us
| to do jobs that you don't value? This is the new safety net,
| and it's an improvement on expecting a corporation to do it.
| charcircuit wrote:
| You make yourself more attractive to employers by acquiring
| or improving your skills or learning to market yourself
| better.
| surement wrote:
| > and what happens when you can't find one?
|
| sounds like what you're saying is that without this
| employer, this employee would have a worse or no job; if
| you start from that and then give them this job then it
| sounds pretty positive
|
| > they won't even let someone survive anymore.
|
| who's they?
| throwbadubadu wrote:
| You have been downvoted, but I also wonder a bit what the
| real story (or debate) is here.
|
| Some jobs will pay the lowest wage... is it that these are
| those jobs? I would actually expect these jobs to pay less
| than the typical lowest-pay jobs which require physical labor,
| like fast food or building cleaning.
|
| Or is it the specific amount, given current circumstances?
| True.. on the other hand it is still well above the legal
| minimum wage in most European countries.. maybe not directly
| comparable, but still not far out of range.
|
| That rich companies pay low wages? Yeah, absurd and unfair,
| but that has unfortunately never held a rich company back
| when it could just pay more (I'd like different regulation here).
|
| So the problem is really independent of these specific jobs
| or companies? And the answer to this is better regulation and
| worker protection laws, which many frown upon?
| lovich wrote:
| When companies pay so little that their workers can't
| support themselves (which the US minimum wage does not pay
| enough for), then the companies are not working for
| society's benefit. They are being subsidized by whatever
| welfare their employees receive, such as the famous
| example of Walmart teaching its employees to sign up for
| food stamps. They are also refusing to engage in the kind
| of innovation that capitalism is supposed to generate in
| the face of scarce goods, because they aren't paying for
| this externality.
|
| Much in the same way that you can't underpay an industry
| supplying you with critical parts forever, because then the
| parts will eventually stop being built, companies shouldn't
| be allowed to pay less than the amount necessary to
| support their workers' living. If that makes the job they are
| doing not valuable enough, then the companies should either
| not be engaged in work that can't be profitable without
| subsidies, ask the government for specific earmarked
| subsidies if it generates a positive externality, or
| actually do some innovation and figure out a way to make
| the work profitable.
| AndrewKemendo wrote:
| This is because the data is fundamentally and inextricably
| embedded with anti-social sentiments[1]: language reflecting
| competition and fear rather than cooperation and mutual aid.
|
| All that nightmare data (which we generate, look I'm doing it
| now), is a reflection of the joys and traumas of people writing
| it, and if the majority of the NLP chat data is anti-social then
| there is no possible other outcome.
|
| You cannot fix this with more "filtering" as the data is embedded
| in almost all written text. You have to change the sentiment.
|
| [1] https://snap.stanford.edu/class/cs224w-2016/projects/cs224w-...
| startupsfail wrote:
| I once saw a business co-founder join a startup from Google,
| around a decade back, leaving his 500k Googler salary to be a
| founder at a hot startup. He was of Indian origin and had an
| Ivy League MBA.
|
| His first contribution, besides joining with his credentials,
| was to cut a few cents per hour from what we'd been paying to a
| sweatshop someplace in India that was annotating images for us.
| From somewhat strange biases in the annotated data, it was
| likely that this sweatshop was employing children.
|
| Incidentally, Google had acquihired that startup.
| Waterluvian wrote:
| "Ivy League MBA."
|
| Do other countries label a caste of "prestigious" universities
| like the U.S. does?
| pvg wrote:
| 'Oxbridge'. Most countries just aren't as big and/or don't
| have as many prestige-branded universities. Many have one
| or two that serve similar signaling purposes.
| Waterluvian wrote:
| Thanks. I had never heard of this. That gives me plenty to
| google.
| robertlagrant wrote:
| It's also just that there's no convenient portmanteau. Prarvale?
| morelisp wrote:
| The USA elite wish they had something as conveniently
| entrenched as Oxbridge.
| 6LLvveMx2koXfwn wrote:
| Yes, we do too in the UK, the Russell Group [1]
|
| 1. https://en.wikipedia.org/wiki/Russell_Group
| DoctorNick wrote:
| "Three years in jail is a good corrective for three years at
| Harvard" - Alger Hiss
| [deleted]
| IIAOPSW wrote:
| [flagged]
| CatWChainsaw wrote:
| Am I going to get flagged for pointing out what sort of
| horrible images need to be "annotated"? Actual terrorist
| material - beheadings and the like. Actual CSAM. Street
| violence like Tyre Nichols. Poring over images like that for
| hours at a time teaching a machine about all the horrors of
| what people are capable of is supposed to be "better than
| school"? What a twisted idea.
| Entinel wrote:
| Explain how this is "arguably better than school"
| lovich wrote:
| Well, if you already value these people at pennies per hour,
| he probably doesn't see the value in educating them.
| Frummy wrote:
| I remember working for Appen as a teenager. I saw it as a video
| game and became really really fast at labeling and quality
| assurance. Since it's linear work without hard thinking it became
| this sort of muscle memory thing. I remember some different
| projects, voice assistant for some car company that gave
| directions and took commands to change settings in the car,
| neural net for dog pictures, helpfulness and accuracy of search
| results, map stuff, a big Excel sheet that I translated to
| Swedish last minute late at night for them and got like a bonus
| for, and probably other stuff as well. It wasn't worth it for
| the time spent getting up to context to work on a task and the
| really low pay, but it was fun to infer company secrets from
| the tasks I worked on.
| profstasiak wrote:
| What a dystopia. Was it ever predicted in any science fiction
| book? Maybe the economists are right - AI won't make us all
| unemployed. The only problem is, our only jobs will be ones as
| abhorrent as those these poor people have, improving the next
| version of AI.
| visarga wrote:
| It is hard for me to compute this. AI engineers know very well
| the value of good training data. They all get to label some,
| they get close and personal with it. But when it comes to job
| titles and perks, it's like the labelling people are inferior.
|
| I have worked as an ML engineer closely with internal labelling
| teams, and let me tell you, not everyone could do it. It is a
| hard job, and it often requires deep thinking. Many ML
| engineers are themselves actually bad at labelling.
|
| And the contribution of the labelling team can be as large as
| or more important than that of the ML engineers. In normal
| conditions you need at least one labelling person per engineer,
| all projects need labelling, there's never a gap, labelling is
| never finished. Finding a good labeller is also hard - not
| everyone on the street could do it, contrary to public
| expectations.
| lifeisstillgood wrote:
| Like a really good tester ?
| layer8 wrote:
| How do you assess labelling quality? What I'm wondering is,
| how would the feedback loop work, in the longer term, between
| quality of AI output and paying/selecting labellers?
| BitJockey wrote:
| > How do you assess labelling quality?
|
| You have multiple people label the same piece of data. This
| increases the accuracy of labeling and helps to spot bad
| labelers.
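|
| A minimal sketch of that idea in Python (the rater names and
| labels below are made up for illustration):
|
|     from collections import Counter
|
|     # Hypothetical labels from three raters for the same five items.
|     labels = {
|         "rater_a": ["cat", "dog", "dog", "cat", "bird"],
|         "rater_b": ["cat", "dog", "cat", "cat", "bird"],
|         "rater_c": ["cat", "dog", "dog", "cat", "cat"],
|     }
|
|     n_items = len(next(iter(labels.values())))
|     for i in range(n_items):
|         votes = Counter(r[i] for r in labels.values())
|         consensus, count = votes.most_common(1)[0]
|         # Items without unanimous agreement get flagged for review;
|         # raters who often disagree with the consensus stand out.
|         if count < len(labels):
|             print(f"item {i}: consensus={consensus}, votes={dict(votes)}")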
| layer8 wrote:
| It only spots labellers that are not in the middle of the
| curve of those who you employ. My point was, if factors
| like salary affect labelling quality, how do you steer
| that? You don't know whether if you'd pay double, you'd
| get much better fact-checkers, for example.
| kelseyfrog wrote:
| Interater and intrarater reliability are two general
| approaches. You could even regress reliability with
| salary and make a case that reliability would improve by
| a certain amount(within a prediction interval), but
| that's a business decision to act on, not a statistics
| problem.
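|
| A rough sketch of the inter-rater side, assuming scikit-learn
| is available (the two rater label lists below are hypothetical):
|
|     from sklearn.metrics import cohen_kappa_score
|
|     # Hypothetical labels from two raters over the same ten items.
|     rater_1 = ["spam", "ok", "ok", "spam", "ok",
|                "spam", "ok", "ok", "spam", "ok"]
|     rater_2 = ["spam", "ok", "spam", "spam", "ok",
|                "spam", "ok", "ok", "ok", "ok"]
|
|     # Cohen's kappa corrects raw agreement for chance agreement;
|     # values near 1 suggest reliable raters, near 0 suggest noise.
|     print(f"inter-rater kappa: {cohen_kappa_score(rater_1, rater_2):.2f}")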
| rvba wrote:
| The main protagonist of the novel "Nineteen Eighty-Four" by
| Orwell worked in the Ministry of Truth, and his job was to
| rewrite historical records to conform to the state's ever-
| changing version of history. Basically he was a censor, who
| would remove inconvenient people or facts from newspapers or
| books. (I don't think the Internet existed in the situation
| described in the book.)
|
| 1984 was a big critique of communism, where facts don't matter
| and history can be rewritten. Are we really far from that?
|
| * Those poorly paid raters label things - and a lot of things
| depend on the quality of their work. Do those companies even
| cross-check their work, when the idea is to get the lowest
| bidder? (On a side note, why don't FAANG companies understand
| the "garbage in, garbage out" concept? They can pay 300k USD to
| some programmer, yet they skimp on getting raters from the USA
| or Europe who know the cultural context.)
|
| * Then there are probably censors, who can censor things based
| on their own ideas. Probably censorship can happen at two
| levels: first by using labels, then by deciding what to show
| based on those labels.
|
| * Some websites have "fact checkers", who are necessary (given
| the amount of lies written by troll farms and soon by AI), but
| you can't really appeal their decisions in any way - and those
| are some faceless people from third-world countries.
|
| * Often you cannot ever contact a real person; Google and
| Facebook are the worst here. Microsoft is a bit better, because
| if you pay them money for support, you can get human support.
|
| * Wikipedia is a constant war not just with vandals, but also
| with actors that are much smarter (some seem state-sponsored,
| even). When we speak about Wikipedia, it is incredibly sad that
| it never introduced any systems to review the work of its admins
| (I saw a situation where some articles are "guarded" by 2-3
| admins who share their own >version of facts< and nobody can do
| anything about that). This could be solved by setting up
| panels of judges that judge edits anonymously, or by setting
| up a panel of judges who judge the actions of admins (yes,
| there is that "Arbitration Committee" that barely works). Of
| course then there is the question "who watches the watchmen",
| but still - the Wikimedia Foundation has millions of dollars
| that it wastes on various things, instead of spending them to
| improve its core product.
|
| On a side note, I always thought about starting such a data
| labeling business; I think a lot of work done by current
| labelers is still low quality. I saw some start-ups that were
| checking things (e.g. medical data) and they would pay peanuts
| to the people who labeled their data. Then they would get bad
| labels.. and build wrong conclusions on that with their
| classifiers.
| startupsfail wrote:
| There are big upsides as well. At the moment you've got ChatGPT
| interacting with millions of people and in general exhibiting
| better-than-median human reasoning, kindness, rationality,
| positivity, listening and understanding skills. These
| interactions are to some degree equivalent to having access to
| a good, engaging teacher. Think Sesame Street, only an adult-
| level one. The results might be very positive.
| kevviiinn wrote:
| A good engaging teacher... that gives you incorrect
| information?
| peyton wrote:
| I haven't had this problem in my own use. But I don't try
| to trip it up.
| miles wrote:
| There's no need to try; ChatGPT is often wrong right out
| of the gate. I asked which movie a quote was from as my
| first question, which ChatGPT identified precisely. The
| problem: it was precisely wrong. When I said I just
| searched the script and could find no mention of the
| quote, ChatGPT replied, "I apologize for the error in my
| previous response. Upon reviewing my sources, I couldn't
| find any reference to the line... I'm sorry for any
| confusion that my previous response may have caused."
| startupsfail wrote:
| I wonder if this is actually an alignment with the
| responses that an exhausted worker in a sweatshop could
| give. What you put in is what you get...
| kevviiinn wrote:
| Every time I ask one of these models any sort of semi-detailed
| question about a science/bio topic, it either gets some info
| blatantly wrong, misuses terms, or a combination of both.
| startupsfail wrote:
| Certainly. But the observation that I was making is that
| it is likely above median. You are likely far above the
| median yourself and an expert in the particular area.
| kevviiinn wrote:
| I think the difference between chatgpt and a good teacher
| is that a good teacher knows when to say "I don't know"
| and not feed their student BS that makes no sense. That's
| where the real damage happens
| startupsfail wrote:
| Yes, it does behave a bit like that exhausted outsourced
| call center that would give the customer an easy answer,
| hoping the customer would not notice that a lazy answer was
| in fact given, instead of giving the best insight possible
| along with clarity about the limitations of that insight.
|
| This is maybe caused precisely by the nature and the
| origins of the training data.
| 1123581321 wrote:
| That seems to be what the OpenAI administrators believe, but
| it's more likely that the monotonous rhetorical style of LLMs
| will go out of style or prompt a backlash by humans against
| polished, reasonable-sounding, slightly wordy conversation.
| dTal wrote:
| LLMs don't inherently have a monotonous style - that's
| simply a function of the prompt, which more often than not
| instructs them to _pretend to be an AI_.
| [deleted]
| mortenjorck wrote:
| I don't know if any serious authors have, but my finish-it-one-
| of-these-days novel is premised on a perpetually almost-there
| state of the art in AI that is constantly hand-held by an
| invisible underclass.
|
| Although I'm starting to think I ought to finish it soon just
| in case this current pace of advancement in LLMs doesn't stall
| out for once.
| kevviiinn wrote:
| Just have chatgpt write it for you
| AndrewKemendo wrote:
| "The march toward perfect alienation and exploitation will make
| any job unthinkable at any pay rate." [1]
|
| [1] https://kemendo.com/Myth.pdf
| sonofhans wrote:
| Not as literal a take as you might like, but Fritz Lang's
| _Metropolis_, of course -- an invisible, downtrodden underclass
| breaks their backs on the machines so that a mediocre elite can
| live in the clouds. That was nearly 100 years ago.
| huevosabio wrote:
| I never understand this type of hate-porn.
|
| These jobs are not horrible, not by humanity's historical
| standards. They can be done from home, without a commute,
| without physically demanding labor. A medieval peasant, an
| 1800s factory worker, and probably many of the 1900s low-
| skilled laborers would take this over the alternatives.
|
| Moreover, these people have every right and option not to do
| it; in fact, right now the labor market is such that they have
| valuable alternatives!
|
| We keep thinking about how this or that job is not acceptable
| by _our personal standards_, but until we get Fully Automated
| Luxury Communism we have to be honest that progress looks like
| this: slightly less shitty jobs in exchange for a higher
| quality of life.
| AndrewKemendo wrote:
| They aren't less shitty; they are shittier in different ways.
| But the key point is that they are progressively more
| alienated from the value they provide and from the community
| structure.
| catiopatio wrote:
| The value they individually provide is minuscule; it only
| matters in aggregate, and when leveraged by others who
| bring a great deal more to the table.
|
| Anyone of average IQ and education could do this work, and
| there's no shortage of people willing to do it.
| hypothesis wrote:
| Using RTO to punish disabled folks for demanding better wages is
| just something that AI would come up with, right?
| wizzwizz4 wrote:
| If you consider existing AI computer systems, no: they're not
| powerful enough. If you mean non-human human-created
| intelligences, then our institutions could count: in which
| case, yes it is.
| neonate wrote:
| https://archive.ph/5IiPc
___________________________________________________________________
(page generated 2023-02-18 23:02 UTC)