[HN Gopher] Algorithmic bias bounty challenge
___________________________________________________________________
Algorithmic bias bounty challenge
Author : alexrustic
Score : 79 points
Date : 2021-07-30 17:10 UTC (5 hours ago)
(HTM) web link (blog.twitter.com)
(TXT) w3m dump (blog.twitter.com)
| thepangolino wrote:
| I would have given way more than that to find out what trick the
| 2-3 top anti-Trump accounts used to always show up at the top of
| the replies to each of Trump's tweets.
| only_as_i_fall wrote:
| Does Twitter not order tweets based on number of likes?
| noxer wrote:
| No, that would be unbiased^^
| bequanna wrote:
| Wouldn't a better solution be to optimize the photo cropping for
| the specific viewer? Or, for whatever 'group' the viewer falls
| in?
| acituan wrote:
| Here's my qualitative submission; bounty is not necessary.
|
| Image cropping is not representative of the most important AI
| biases within Twitter, and making only that salient is the
| biggest meta-bias.
|
| Differential impact and harm analyses are required for the
| engagement-maximizing recommendation of the _tweets_ themselves.
|
| Also, no one cares about your cropping algorithms; open the data
| that trained the model if you dare. Without it you don't get to
| posture about transparency or ethics.
| pipthepixie wrote:
| > We want to take this work a step further by inviting and
| incentivizing the community to help identify potential harms of
| this algorithm beyond what we identified ourselves.
|
| What _more_ needs to be done here? Twitter got caught out with
| bias in their image cropping algorithm, and now they want people
| to further elaborate on how/why this is?
|
| > For this challenge, we are re-sharing our saliency model and
| the code used to generate a crop of an image given a predicted
| maximally salient point and asking participants to build their
| own assessment.
|
| Seems like a very exhaustive endeavour just to point out
| algorithmic bias and reiterate what went wrong, but in more
| detail.
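|
| For reference, "generate a crop given a predicted maximally
| salient point" is conceptually simple. A minimal sketch (my
| illustration, not Twitter's released code; assumes the target
| crop fits inside the image):
|
|   from PIL import Image
|
|   def crop_around_point(img, point, target_w, target_h):
|       # Center a target_w x target_h window on the predicted
|       # maximally salient point, clamped to the image bounds.
|       px, py = point
|       left = min(max(px - target_w // 2, 0), img.width - target_w)
|       top = min(max(py - target_h // 2, 0), img.height - target_h)
|       return img.crop((left, top, left + target_w, top + target_h))
|
|   img = Image.open("photo.jpg")
|   crop = crop_around_point(img, point=(420, 180),
|                            target_w=600, target_h=335)
|
| The contested part is entirely in where the salient point comes
| from, not in the cropping itself.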
| SpicyLemonZest wrote:
| They want to develop a fuller understanding of what kinds of
| biases they need to be aware of. "Caught out with bias" isn't a
| good way to understand the situation - this isn't the kind of
| problem that can be solved by just debugging to find the biased
| functions and using unbiased ones instead.
| vineyardmike wrote:
| > Seems like a very exhaustive endeavour just to point out
| algorithmic bias and reiterate what went wrong, but in more
| detail.
|
| They want everyone to help them find _additional_ bias, and
| other points where it could go wrong, with the presumed purpose
| of fixing it.
|
| They want to find bias they can solve and remove, not just get
| more people to tell them why the bias they found is wrong!
| slg wrote:
| I never really pay attention to these bounty challenges. Are
| those rewards reasonable? They seem incredibly low compared to
| the work involved. I have seen locally sponsored hackathons with
| higher total prizes.
| noxer wrote:
| It's PR nonsense. They don't want anyone not working for them to
| work for them.
| potatoman22 wrote:
| 3 grand doesn't seem like nonsense to me.
| daenz wrote:
| For a week's worth of work, for someone skilled enough to
| be capable of winning this contest, and with no guarantee
| of winning? It is nonsense.
| codyb wrote:
| I figure it's kind of like a musician busking on the street
| corner.
|
| It's a no-risk way to add a few constraints to the practicing
| you're probably already doing, and if you're lucky maybe you'll
| earn a few bucks too.
| jb775 wrote:
| Seems insanely low considering it's really just a for-profit
| business asking the general public to figure out a very
| difficult problem they aren't able to figure out on their own.
| ryankupyn wrote:
| I think the challenge is that if the rewards were high, Twitter
| employees (with the advantage of inside information) might be
| tempted to "tip off" an outsider in exchange for a cut of the
| reward, rather than just reporting the issue internally.
|
| At the same time, there isn't much of an outside market for
| algorithmic bias info in the way there is for security
| vulnerabilities. Probably the biggest effect of this reward
| will be to pull some grad students who were going to study
| algorithmic bias anyways towards studying Twitter specifically
| - after all, there aren't any rewards for studying the
| algorithmic bias of other companies!
| version_five wrote:
| If they hired a contractor to work on this, the $3,500 prize
| would get them 1-2 days of work. At the same time, I can easily
| see someone interested in this area investing time in it for
| the challenge. Doing a hackathon as a source of income is
| probably not too common.
| debrice wrote:
| I have an ethical issue with these kinds of challenges and
| rewards. When a company spends millions of dollars, probably
| tens of millions, on a project... a $3,500 reward for, in a way,
| successfully contributing to that project feels off.
|
| Don't get me wrong, the subject and goals of the work are
| definitely good, but not entirely philanthropic. I feel like
| helping to find and fix broken parts of a billion-dollar for-
| profit industry should generate significantly more wealth than a
| few grand.
| tayo42 wrote:
| $3,500 is a week's worth of pay if you make $180k a year --
| about a week of a paid engineer's time. How much time would you
| spend on this?
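| ($180,000 / 52 weeks ~= $3,462, so the $3,500 top prize is
| roughly one week of gross pay at that salary.)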
| daenz wrote:
| Now apply expected value to that figure.
| debrice wrote:
| This being a challenge, if you gather only 20 engineers to
| participate in it, we're talking about a pretty sweet deal
| that can open many avenues of research for the organizer of
| the event.
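|
| (With those hypothetical numbers: 20 entrants, a week of work
| each, and one $3,500 winner, the expected payout is $3,500 / 20
| = $175 per entrant for the week.)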
| bitbckt wrote:
| This reminds me of the time at Twitter when all Japanese users
| were (accidentally) inferred to be female after a model changed.
| stolenmerch wrote:
| "Although we find no evidence the saliency model explicitly
| encodes male gaze, in cases where the model crops out a woman's
| head in favor of their body due to a jersey or lettering, the
| chosen crop still runs the risk of representational harm for
| women. In these instances, it is important to remember that users
| are unaware of the details of the saliency model and the cropping
| algorithm. Regardless of the underlying mechanism, when an image
| cropped to a woman's body area is viewed on social media, due to
| the historical hyper-sexualization and objectification of women's
| bodies, the image runs the risk of being instilled with the
| notion of male gaze."
|
| "Point multipliers are applied for those harms that particularly
| affect marginalized communities since Twitter's goal is to
| responsibly and equitably serve the public conversation."
|
| I'm sure Twitter is just responding to the controversy from Fall
| of 2020 and doing due diligence to address the problem. However,
| how do you award a bounty for addressing "risk of
| representational harm" due to historical biases not inherent in
| the model? Genuine question here and one I'm always curious
| about. Seems difficult if not impossible.
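|
| (On the mechanics, the quoted "point multipliers" suggest a
| rubric along these lines -- a hypothetical sketch, not the
| actual HackerOne formula:
|
|   def score_submission(base_points, multipliers):
|       # Toy rubric: a harm's base points scaled by per-criterion
|       # multipliers, e.g. an extra factor for harms that
|       # particularly affect marginalized communities.
|       score = base_points
|       for m in multipliers:
|           score *= m
|       return score
|
|   score_submission(20, [1.2])  # -> 24.0
|
| The hard question above -- what counts as representational harm
| at all -- sits outside any such formula.)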
| basseq wrote:
| For anyone else wondering about "the controversy from Fall
| 2020":
| https://www.theguardian.com/technology/2020/sep/21/twitter-a...
| [deleted]
| IfOnlyYouKnew wrote:
| I believe the principle is fairly easy to understand: imagine
| you're missing two fingers. Because kids are cruel, your middle
| school experience involved lots of shaming and name-calling,
| and that really got to you at the time.
|
| Everything is fine now, because nobody actually cares. But then
| along comes this algorithm and, for some reason, it crops your
| profile picture to be just that hand of yours. Of course one or
| two "friends" from middle school are following you on twitter,
| and start reminiscing about the times they called you three-
| finger-bob.
|
| Is it conceivable/realistic/justified that this would, at least
| for a while, hurt?
|
| Meanwhile, for someone who didn't go through that experience,
| the algorithm doing the very same thing is just... a curious
| bug?
|
| As to how they are going to score this: it cannot be scored
| only quantitatively, as they specifically say in the
| description. They are going to read your hypothesis (something
| like the above, but maybe a little more serious) and score it,
| probably along a few categories and by a few different people,
| etc.
| stolenmerch wrote:
| A better comparison would be if there were some salient
| feature surrounding my hands, such as me holding an object.
| The object would be the equivalent of the jersey number. The
| model clearly isn't cropping on the thing that makes me feel
| bad, but the crop hurt anyway. To avoid this, it has to be
| trained to take into account all possible human reactions.
| Basically, it has to know that these pixel values activate
| some social controversy and are best avoided. Not saying this
| isn't possible, but it seems absolutely fraught with peril and
| potentially more harmful than the original saliency model,
| especially for social constructs with no clear consensus.
| IfOnlyYouKnew wrote:
| Yes, I didn't mean to suggest any specific reason for the
| algorithm's output, because it doesn't matter.
|
| This issue isn't about anyone's "guilt", least of all the
| algorithm's. It's about harm. Harm is to be avoided, even
| if it is the result of a completely benign process with no
| ill intentions.
|
| And you aren't going to be able to explain away some
| decision that causes harm by explaining the algorithm. Even
| knowing everything about the algorithm, I would prefer it
| doesn't focus on whatever my weak spot is. And even knowing
| how GPT-3 works, I still tend to anthropomorphize it.
|
| To some approximation, exactly nobody in the real world is
| going to give the company the benefit of the doubt and
| study its algorithm, nor should they be expected to.
|
| It's like that escaped lion that keeps eating the
| schoolchildren: it's what lions do, an expression of its
| basic lion-ness! It's not evil, guilty, or even culpable: it
| just can't help itself, but to help itself to an early
| dinner.
|
| And yet, we are going to shoot the lion. Not as punishment,
| but as a simple, mechanistic, prevention of harm to people,
| who, in our system of values, rank higher.
|
| Algorithms rank far below lions. An algorithm that causes
| harm, no matter how or why, goes the way of php (taken out
| back and shot, unless it's run by Facebook). And anything
| that happens is considered to be caused by the algorithm,
| even if some humans happen to provoke it by somehow being
| different than other humans, or our expectation of
| rationality. Because we cannot change humans, and because
| nobody should be expected to change to accommodate
| technology, especially if they were never asked if they
| want that technology.
| naasking wrote:
| > Is it conceivable/realistic/justified that this would, at
| least for a while, hurt?
|
| The people reminiscing about hurtful teasing would hurt. The
| algorithm that did the cropping would not. Algorithms don't
| have intent and intentions matter.
| IfOnlyYouKnew wrote:
| A person was hurt. There are multiple steps in the chain of
| causality, but only one you can change. So you do.
| bastawhiz wrote:
| > However, how do you award a bounty for addressing "risk of
| representational harm" due to historical biases not inherent in
| the model?
|
| I think it's pretty straightforward. Most sensible, considerate
| humans would avoid cropping an image of a woman to her boobs
| simply because it's insensitive to do so. Just because the
| machine is trained to highlight text or other visual features
| doesn't preclude it from ALSO understanding human concepts
| which are difficult to express to the computer in a
| straightforward way.
|
| There are plenty of ways the model can be improved (e.g., not
| preferring white faces over Black faces), and they're certainly
| difficult. If they weren't difficult, Twitter would have simply
| fixed the problem and there wouldn't be a competition.
| Arguably, though, if the job can reasonably be accomplished
| manually by a human then a computer should be able to do a
| similarly effective job. Figuring out how is why it's a
| challenge. And if we can't figure out how, that's another
| interesting point in the ongoing conversation about ML.
| stolenmerch wrote:
| Training an ML model to understand how "most sensible,
| considerate humans" would act is anything but
| straightforward. I'm not sure most people would even
| consider cropping an athlete at the jersey number, regardless
| of race or gender - it just doesn't make sense, yet the
| machine seemed to do it at the same rate for male vs. female.
| Retrofitting discrimination onto this result only once you
| learn the label of the data isn't particularly useful. We
| want to know how to make good non-harmful predictions in the
| future.
| bastawhiz wrote:
| > Training a ML model to understand how "most sensible,
| considerate humans" would act is anything but
| straightforward.
|
| This is exactly why it's a challenge! The point is that the
| goal should be to do it the way a human would find
| satisfactory and not the way that's easy.
|
| Even if the machine wasn't trained to be biased, the
| machine should still produce results which people see as
| good. We didn't invent machines to do jobs just so people
| could say "but that's a bad result" and reply with "yes but
| the machine wouldn't know that". We should strive for
| better.
| naasking wrote:
| > Just because the machine is trained to highlight text or
| other visual features doesn't preclude it from ALSO
| understanding human concepts which are difficult to express
| to the computer in a straightforward way.
|
| I honestly don't know how you reached this conclusion. You're
| skirting the line of contradicting yourself.
|
| > Arguably, though, if the job can reasonably be accomplished
| manually by a human then a computer should be able to do a
| similarly effective job.
|
| I also don't see how this could possibly be true. Certainly
| the sphere of tasks that computers competently handle
| compared to humans is growing, but it's nowhere near "any job
| a human can reasonably do".
| bastawhiz wrote:
| > I also don't see how this could possibly be true.
| Certainly the sphere of tasks that computers competently
| handle compared to humans is growing, but it's nowhere near
| "any job a human can reasonably do".
|
| If we admit it's a job that a computer cannot reasonably
| do, then why do we have a computer doing it in the first
| place? We shouldn't accept the status quo with "well it's
| okay-enough" if it has limitations that are significant
| (and frequent!) enough to cause a large controversy.
|
| In fact, Twitter's response was _to remove the algorithm
| from production_. The whole point of this challenge is to
| find out if there's a way to automate this task well. It
| doesn't have to be perfect, it has to be good enough that
| the times where it's wrong are not blatant and
| reproducible, like when this was initially made apparent:
|
| https://www.theguardian.com/technology/2020/sep/21/twitter-
| a...
| acituan wrote:
| > Most sensible, considerate humans would avoid cropping an
| image of a woman to her boobs simply because it's insensitive
| to do so
|
| The manner of sexualization of the human form, even nudity,
| is not a human universal, not even a Western universal (e.g.
| the Nordics and Germany). Even if only through omission, this
| move still overemphasizes the sexuality of breasts, which is
| basically pushing American cultural sensitivities onto the
| world.
| bastawhiz wrote:
| You're very much missing the point. The machine obviously
| isn't intentionally sexualizing anyone, but it's producing
| a bad result, and not only is it bad, it can be perceived
| as sexualization (regardless of whether there's bias or
| not). The machine lacks understanding, producing a bad
| result, and the bad result is Extra Bad for some people.
|
| Let's say I started a service and designed a machine to
| produce nice textile patterns for my customers based on
| their perceived preferences. If the machine started
| producing some ugly textiles with patterns that could be
| perceived as swastikas, the answer is not to say "well
| there are many cultures where the swastika is not a harmful
| symbol and we never trained the machine on nazi data". The
| answer is to look at why the machine went in that direction
| in the first place and change it to not make ugly patterns,
| and maybe teach it "there are some people who don't like
| swastikas, maybe avoid making those". It's a machine built
| to serve humans, and if it's not serving the humans in a way
| that the humans say is good, it should be changed. There's
| no business loss to having a no-swastika policy, just as
| there's no business loss that says "don't zoom in on boobs
| for photos where the boobs aren't the point of the photo".
|
| This problem has _nothing_ to do with sensitivities, it's
| about teaching the machine to crop images in an intelligent
| way. Even if you weren't offended by the result of a
| machine cropping an image in a sexualized way, most folks
| would agree that cropping the image to the text on a jersey
| is not the right output of that model. Being offensive to
| women with American sensibilities (a huge portion of
| Twitter's users, I might add[0]) is a side effect of the
| machine doing a crappy job in the first place.
|
| [0] https://www.statista.com/statistics/242606/number-of-
| active-...
| acituan wrote:
| "Badness" is not a property of the object, it is created
| by the perceiving subject. What AI does is an attempt at
| scaling the prevention a particular notion of "badness",
| that suits its masters. In other words Twitter is just
| pushing another value judgement to the entire world.
|
| Even the value of "no one should get offended" is
| subjective, and in my opinion makes a dull, stupid world.
| Ultimately it is a cultural power play, which is what it
| is, just don't try to dress it in ethics.
| bastawhiz wrote:
| Badness is indeed a property of the output of this
| algorithm. A good image crop frames the subject of the
| photo being cropped to fit nicely in the provided space.
| A bad image crop zooms in on boobs for no obvious reason,
| or always prefers showing white faces to Black faces.
|
| You're attempting to suggest that the quality of an image
| crop cannot be objectively measured. If the cropping
| algorithm changes the focus or purpose of a photo
| entirely, it has objectively failed to do its job. It's
| as simple as that: the algorithm needs to fit a photo in
| a rectangle, and in doing so its work cannot be perceived
| as changing the purpose of the photo. Changing a photo
| from "picture of woman on sports field" to "boobs" is an
| obvious failure. Changing a photo from "two politicians"
| to "one white politician" is an obvious failure. The
| existence of gray area doesn't mean there is not a
| "correct" or "incorrect".
|
| > Even the value of "no one should get offended" is
| subjective, and in my opinion makes a dull, stupid world.
|
| You'd agree with the statement "I don't care if my code
| does something that is by definition racist"?
| acituan wrote:
| > If the cropping algorithm changes the focus or purpose
| of a photo entirely, it has objectively failed to do its
| job.
|
| You just changed the problem formulation to an objective
| definition of "purpose" and a delta of deviation that is
| tolerable. That's just kicking the can.
|
| > You'd agree with the statement "I don't care if my code
| does something that is by definition racist"?
|
| As a principle I don't respond to poorly framed
| questions.
| hncurious wrote:
| These puritan activists larping as tech-savvy Victorian
| moralists should be ignored. While the "male gaze" jibe sounds
| enlightened and a la mode, it implicitly denies that women
| (especially the lesbian ones) have a gaze of their own, or
| asserts that it has been colonised by the omnipotent male gaze.
|
| Don't take my word for it; here are noted feminist professor
| Camille Paglia and Christina Hoff Sommers on the absurdity of
| the male gaze theory:
|
| https://www.youtube.com/watch?v=a3pjGd1v5xY
| oh_sigh wrote:
| Doesn't "the male gaze" imply there are other gazes?
| Otherwise, wouldn't it just be "the gaze" (as Sartre
| originally theorized)?
|
| Also, feminism is a big enough field that you can certainly
| find a feminist who supports your viewpoints, whatever they
| may be. Pointing to two people who support your viewpoint
| doesn't mean that the matter is settled - I could point to
| probably a hundred feminist authors who firmly believe in the
| male gaze.
|
| Personally, "the male gaze" seems more of a Western female
| problem than a Western male problem. Studies have shown that
| even anticipating that someone may look at you can increase
| feelings of objectification and negative mental states. I
| don't really know what can be done from a male's perspective
| if a woman may make herself feel bad by merely knowing that a
| man may possibly eventually look at her.
| dragonwriter wrote:
| > While the "male gaze" jibe sounds enlightened and a la
| mode, it implicitly denies that women (especially the lesbian
| ones) have a gaze of their own,
|
| No, the modifier implicitly _acknowledges_ that there are
| other kinds of gaze.
| jauer wrote:
| Yet somehow lesbians aren't the ones eyefucking my breasts
| when I'm at the grocery store.
| nomdep wrote:
| That you IMAGINE are eyefucking your breasts. Not the same
| jsjsbdkj wrote:
| Yeah, somehow I never get catcalled or followed around by
| lesbians, it's almost like being a creep and being
| attracted to women aren't inextricably linked?
| djoldman wrote:
| I think this is an interesting comment and I immediately
| thought to ask:
|
| What percentage of the people offending you by looking at
| you like that have you confirmed are male and what
| percentage have you confirmed are not gay?
|
| I'm asking without snark, I'm genuinely curious if you've
| asked people.
| SlowRobotAhead wrote:
| Is that a thing that happens to you? That sucks.
|
| Aside: Was the entry on your blog about being hugged by a
| significant other while in line at a hardware store a fan
| fiction? Seems like you have to be on guard all the time as
| a woman in 2021, again, that sucks.
| oh_sigh wrote:
| Well, men do outnumber lesbians by something like 30:1.
| Also not sure if you have les-dar or how you know if a
| woman checking you out is a lesbian or is just critiquing
| your outfit in her mind.
| jb775 wrote:
| If you didn't get eyefucked, you'd probably find something
| about that to complain about.
| _jal wrote:
| > it implicitly denies that women
|
| Nonsense on stilts. You might as well claim that anti-smoking
| activists are wrong because nonsmokers sometimes get lung
| cancer.
|
| I'd highly suggest some reading on the topic if you want to
| address it; this isn't even a basic misunderstanding.
| shreyshnaccount wrote:
| Are you living in society??? I'm tired of the HN crowd that
| wants to find BS counterarguments to EVERYTHING, as if it's
| all a huge engineering problem that can be addressed by
| thinking from first principles. They need to understand that
| life as a human isn't all black and white; there's a lot of
| grey! And no, I'm not saying that questioning things is bad.
| Being obnoxious and ignorant of society is.
| truthwhisperer wrote:
| Hi, yes, you are right, but we are winning :-)
| nitwit005 wrote:
| Fixing the AI to avoid such issues is indeed likely impossible,
| but adding a feature to correct the AI's decision would
| probably take a week or two of engineering effort.
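|
| (Something like letting the uploader override the crop -- a
| hypothetical sketch, with model_crop standing in for the
| existing saliency-based cropper:
|
|   def choose_crop(image, user_crop=None):
|       # Honor a user-supplied crop box when present; fall back
|       # to the model's automatic crop otherwise.
|       # model_crop: assumed helper wrapping the existing model.
|       if user_crop is not None:
|           return user_crop
|       return model_crop(image)
|
| The logic is trivial; the week or two would go into the UI and
| storage plumbing around it.)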
| cryptoz wrote:
| The quote you cited says "no evidence the saliency model
| _explicitly encodes_ male gaze" (emphasis mine). Your
| rephrasing, however, changes the meaning in your question:
| "_historical biases not inherent in the model_".
|
| Explicitly encodes != inherent in. The difference between the
| two might answer your question. You can solve for something
| that is inherent in a system (i.e., this) but was not
| explicitly encoded. They are trying to say it isn't their
| fault specifically: they find no evidence this was done on
| purpose or in a targeted way, but rather that it happens as an
| accident / lack of testing on the current model.
| stolenmerch wrote:
| They are very clear about 'intentional' vs. 'unintentional'
| harm in their judging metric[1], so I get that, I think.
| However, there is no possible way a saliency model could crop
| an image without someone reading bias into it. E.g. cropping
| only a jersey, cropping only a face, not cropping at all,
| etc. can all be read as signs of historical bias. At some
| point it just becomes a Rorschach test for the reviewer. Not
| saying discrimination isn't happening or can't happen, just
| saying this very tenuous and weak "historical bias"
| interpretation lacks a certain rigor for a bug bounty program
| or model optimization.
|
| [1] https://hackerone.com/twitter-algorithmic-bias
| crummybowley wrote:
| It seems like we should just stop using ML for feeds.
|
| My twitter feed is always hate. I don't actually comment, or
| engage other than by reading. And I am continuously being
| suggested posts that are clearly biased toward somebody's
| agenda.
|
| I also follow CNN breaking news. And the comments are complete
| trash. What is worse is that some small-town thing happens, and
| now you have millions across the US and the world upset and
| outraged about it. While the issue was tragic, I am not sure it
| is healthy for society as a whole to be so connected to things
| that play nearly zero role in their lives.
|
| I know, I know. Somebody is going to say "but they do play a
| role in their life". And I am going to reply: the only role they
| play is that they saw a tweet or a reddit post or a 12-second
| clip with a loaded title that tries to convey that this bad
| thing happening is the "status quo", and it is so outrageous
| that it happens that we all need to stand up and do something
| about it.
|
| When in reality we are dealing with a world with ~8 billion
| people in it, where if 0.0000001% (8 people) of the population
| experience something bad on a given day, we have systems in
| place to ensure that the vast majority know about it and get
| outraged, and are made to feel as if this is some huge problem
| that everybody needs to be involved in.
|
| Listen folks, shitty things are going to happen, and they are
| going to keep happening, and it is going to seem like the problem
| is going to get worse and worse as the population grows. But we
| will never have zero bad things happening, and being outraged
| over and over about something that simply won't ever be corrected
| is a huge burden on society as a whole.
|
| I challenge folks, especially the ones who get outraged at
| twitter posts, reddit posts, or stupid things said on HN, to
| take a month off from social media. You will find the world is
| a much better place than you currently think it is.
| hatchnyc wrote:
| I am increasingly convinced that _all_ of the harms of social
| media can be traced back to the ML models used to sort and
| filter user content. The world of pure deterministic feeds
| ordered by date
| descending never had these problems--things would still "go
| viral" I guess, but it was humans sharing them deliberately to
| people they knew.
|
| I do not believe this problem can be fixed, nor do the companies
| involved have the incentive to do so beyond whatever minimal
| amount is necessary for PR purposes.
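|
| (For contrast, the deterministic feed described above is one
| line of logic -- a sketch, assuming each tweet carries a
| created_at timestamp:
|
|   def chronological_feed(tweets):
|       # Newest first, no engagement model, fully reproducible.
|       return sorted(tweets, key=lambda t: t.created_at,
|                     reverse=True)
|
| Everything contentious lives in whatever replaces that sort
| key.)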
| blopker wrote:
| I agree that these companies cannot fix the harm they've
| caused. However, I have an alternative viewpoint on why. I
| don't think the ML has much to do with it, although it doesn't
| help.
|
| These companies are just trying to increase their engagement
| metrics, like how long users are on the site, or how many ads
| they see on average. This was true before they started using
| ML, but maybe less obvious. What these companies found was that
| human nature, in general, gets addicted to outrage and gossip.
| They are just supplying that to us.
|
| It's not very different from tobacco companies. They found an
| addictive substance that is not good for us, and are trying to
| push it out as much as possible. The problem is people 'want'
| this product, despite its negative effects.
|
| That's why these companies can't fix their product. It's like
| asking "How can we make tobacco healthy?". We can't.
| arikrak wrote:
| What about conspiracy theories that spread through WhatsApp? A
| lot of the harm of social media is because some people spread
| harmful things and now it's easier for them to do so.
| leppr wrote:
| Indeed. I think algorithmic social feeds are such a popular
| scapegoat simply because people don't understand them (since
| they're black-boxes and the tech is new).
|
| The common rhetoric that "harmful messages which trigger
| engagement are amplified by social media algorithms" is
| probably correct, but to think this is anything close to the
| root cause of the spread of "harmful ideas" on social media
| is ignoring the reality of social networks before algorithmic
| feeds.
|
| Notably, the most popular current examples of places where
| harmful ideas are spread (4chan, Gab, Parler, ...) are
| platforms that eschew algorithmic curation entirely. I don't
| believe hyperlinks to those platforms are being promoted by
| the mainstream platforms' algorithmic feeds, people find
| these places by themselves.
|
| Social media can be harmful mostly because it facilitates mob
| mentality once again. Centralized idea distribution platforms
| (newspapers, television) gave society a rest from undesired
| mob mentality, but now we have more freedom of association
| than ever. I suspect the best way to avoid it is to add more
| artificial curation in between human interactions, not less.
| earthboundkid wrote:
| BRB, inventing a time machine to go back and stop Reddit from
| inventing the up vote button.
| travoc wrote:
| Upvotes are a long way from ML content engagement targeting.
| airstrike wrote:
| I'm willing to bet $100 reddit didn't invent upvotes.
| bigbonch wrote:
| To add to this, I believe the fundamental issue is intent.
| Deterministic feeds and "organic" virality mean trusting purely
| human intent. Modern algorithms muddy the network of intent and
| my tin foil hat take is that we're hard-wired to perceive
| everything as intent from some other human.
| tshaddox wrote:
| I suspect that the business model (advertising) is a more
| fundamental cause.
| lemoncucumber wrote:
| Put another way, the problem boils down to companies that only
| optimize for one variable (shareholder value) relying on
| algorithms that only optimize for one variable (engagement).
|
| Taken together, it results in a lot of negative externalities
| that get ignored at every level.
| leppr wrote:
| Put yet another way, the problem boils down to whole
| continents worth of people relying on one or two companies'
| proprietary services for their online social life.
|
| The web has a platform diversity problem.
| johnsmith4739 wrote:
| I have a sad realisation that "engagement" is more likely
| "enragement." Humans are activated best by anger. Joy? That
| is a big deactivator...
|
| The algorithmic approach to the feed reminds me of a model I
| work with every day: human perception.
|
| Because there is an "algorithm" sitting between sensory input
| and perception/awareness - and because the signal is highly
| processed, and the way the processing is done is not
| available to our conscious awareness - you get all kinds of
| strange behaviours.
|
| Where you cannot control how stimuli are processed and
| perception formed, you are for all intents and purposes
| manipulated, or at least denied control.
|
| Unfortunately, any SM algo has to fit the same purpose -
| alter our perceptions and influence our behaviours. Guess I
| should delete my FB now...
| gotostatement wrote:
| > I have a sad realisation that "engagement" is more likely
| "enragement." Humans are activated best by anger. Joy? That
| is a big deactivator...
|
| This is interesting. Because if you have joy, you might be
| like "maybe I should just put this FB down and continue
| enjoying my life." But if something angers you, it
| activates you to stay on and hold on, instead of taking it
| easy which can lead to logging off.
| marcinjachymiak wrote:
| Personally, I am convinced that all of the harms of social
| media can be traced back to the Garden of Eden.
|
| People just decide to be bad sometimes. The perfect feed
| algorithm doesn't exist.
| rumblerock wrote:
| To take it one step further I'm convinced at this point that
| the social conundrum presented by Twitter and other social
| media platforms has extended far beyond the algorithm and the
| platforms themselves, and into how people interact in general.
|
| The reductionist, reactionary mode of thinking now rears its
| head in my real-world interactions. I wouldn't care as much if
| people kept that stuff on these social platforms that I spend
| next to zero time on, but I can't help but roll my eyes when my
| friends speak in "hot takes" as if I'm handing out certificates
| for internet points.
| jancsika wrote:
| > To take it one step further I'm convinced at this point
| that the social conundrum presented by Twitter and other
| social media platforms has extended far beyond the algorithm
| and the platforms themselves, and into how people interact in
| general.
|
| A cryptographer once mentioned attacking the PGP WoT by
| uploading a shadow WoT that has different keys with the same
| emails/names and signing-graph structure. Similar to what you
| note-- the real problems come when some bona fide WoT user
| fucks up and accidentally signs a key from the shadow graph.
| Even if it's just a single accidental signature before the
| shadow WoT is spotted and cleaned up, the admin would
| probably want to _quickly_ go out of band to actually fix the
| original WoT at that point-- otherwise perhaps the shadow
| admin got to the user first. :)
|
| (And of course if the shadow graph were uploaded
| incrementally over a wider period of time, the problems
| probably become a bigger pain in the ass.)
|
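| (A toy version of that shadow-graph construction, purely
| illustrative:
|
|   import secrets
|
|   def shadow_wot(real_wot):
|       # real_wot: {key_id: {"name": str, "signs": [key_id, ..]}}
|       # Clone the identity strings and the signing-graph shape,
|       # but under attacker-generated key ids.
|       fresh = {kid: secrets.token_hex(8) for kid in real_wot}
|       return {fresh[kid]: {"name": node["name"],
|                            "signs": [fresh[s]
|                                      for s in node["signs"]]}
|               for kid, node in real_wot.items()}
|
| One stray signature bridging the real and shadow graphs is what
| makes the out-of-band cleanup urgent.)
|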
| The problem you mention is in the same category. But rather
| than being hypothetical, it is real and probably as bad an
| instance as someone could design given current technology. To
| make the WoT attack above comparable, I'm imagining a Debian
| user accidentally signing a shadow key, then receiving
| messages from that key that convince them the Debian keyring
| is in fact the shadow keyring and that they should use their
| permissions to destroy Debian.
|
| Edit: and I guess in keeping with the analogy, the purpose of
| the propaganda emails is to generate as much outrage in the
| recipient as possible, so that they may spend more time
| reading the entire email on the off chance they also spend
| one or two seconds extra looking at the ad in the email
| signature about a special on a nice pair of designer pants.
| omgitsabird wrote:
| > I am increasingly convinced that all of the harms of social
| media can be traced back to the ML models used to sort and
| filter user content.
|
| I don't think so. I think most harms of social media can be
| traced to how the people who actually pay for or make money off
| the service perceive value.
| easterncalculus wrote:
| I agree with this view, because it is ultimately these models
| which are behind the hyperviral nature of posts now, where they
| are recommended to people that _engage_ in it, positively or
| negatively. That whole idea of negative feedback loops is
| completely independent from explicit human design (these sites
| were never created to send you things you don 't like), and is
| arguably behind most of what people are complaining about on
| social media. There are separate features that people debate on
| (post deletion policies/capabilities, etc) but it is not nearly
| as widespread as the damage often caused by putting engagement
| over everything else.
| uyt wrote:
| I only know of one site that uses the model you proposed (pure
| deterministic feeds ordered by date descending) but it's the
| cesspool of the internet: 4chan
| austincheney wrote:
| That is certainly part of the problem, but a bigger earlier
| problem is voting mechanisms. Voting mechanisms put those
| algorithms on steroids and increase the potential for echo
| chambers.
|
| Worse than that, voting mechanisms reward bad behavior. They
| discourage replies for fear of lost karma, and serve as a
| convenience to those who can't be bothered to use words,
| whether out of cowardice or ignorance.
| gotoeleven wrote:
| I applaud twitter's efforts to solve the problem of men wanting
| to look at boobs.
___________________________________________________________________
(page generated 2021-07-30 23:00 UTC)