[HN Gopher] Facebook to target harmful coordination by real acco...
       ___________________________________________________________________
        
       Facebook to target harmful coordination by real accounts
        
       Author : hobbesthompson
       Score  : 95 points
       Date   : 2021-09-16 16:10 UTC (6 hours ago)
        
 (HTM) web link (www.reuters.com)
 (TXT) w3m dump (www.reuters.com)
        
       | eplanit wrote:
       | Glenn Greenwald did a piece on exactly this topic just yesterday:
       | https://rumble.com/vmkc0j-democratic-officials-continue-to-t...
        
       | ballenf wrote:
       | I think we also need transparency around coordination of these
       | efforts with any government actors. At least to the extent the
       | efforts affect citizens.
       | 
       | Kind of like transparency reports with regard to data requests.
        
         | ManBlanket wrote:
          | Right, so those with the privilege of deciding who the good
          | guys are and what constitutes misinformation are the current
          | political parties and financial interests. Which is great,
          | because governments have never been responsible for
          | orchestrating their own attacks and campaigns of
          | misinformation. I'm so glad we can trust Facebook to have our
          | best interests at heart.
        
           | noptd wrote:
            | At least there is some potential for accountability and
            | transparency with elected officials. I don't see how
           | we would be better off by having a small number of private
           | individuals making these decisions for us.
        
         | roywiggins wrote:
         | Governments have set up departments to forward requests for
         | takedowns directly to Facebook, sidestepping whatever due
         | process they usually would require to get a court order:
         | 
         | "Watchdogs say that this coordination between governments and
         | online platforms lacks transparency and operates through an
         | alternative enforcement mechanism that denies due process of
         | law. In the vast majority of cases, the Cyber Unit doesn't file
         | a court order based on Israeli criminal law and go through the
         | traditional legal process to take down online posts. Instead,
         | the Unit makes appeals to the platform's content moderation
         | policies and community standards. The enforcement of these
         | policies though can be selective, devoid of different cultural
         | contexts, and flexible to the interests of the powerful."
         | 
         | https://prospect.org/world/how-secretive-cyber-unit-censors-...
        
           | pyronik19 wrote:
            | This coordination is being challenged in First Amendment
            | lawsuits as a de facto free-speech violation, on the theory
            | that FB and other big tech companies are acting as agents of
            | the state.
        
       | neonate wrote:
       | http://web.archive.org/web/20210916192204/https://www.reuter...
       | 
       | https://archive.is/lJoWA
        
       | ggggtez wrote:
       | Only about a decade too late.
        
       | phkahler wrote:
       | >> as it announced a takedown of the German anti-COVID
       | restrictions Querdenken movement.
       | 
       | I don't know if that group used any particularly bad tactics, but
       | on the surface that statement sounds like squashing free speech.
        | IDK about Germany, but I'm sure FB will do that in the US too.
       | All because "it's our private platform and we are not the
       | government so no first amendment here".
        
         | mc32 wrote:
          | Nicki Minaj of all people was suspended by Twitter because she
         | mentioned friends of hers had a bad reaction to a particular
         | vaccine.
         | 
         | Like what the eff. People can't even freely mention an anecdote
         | anymore if it is contrary to the set narrative?
         | 
         | What the eff happened to us in the last two years?
         | 
          | It's frankly disgusting and very concerning.
        
           | cjensen wrote:
           | That's not what happened. Minaj claimed "a friend of a
           | cousin" had a simply impossible side effect. This is pretty
           | clearly a simple case of an urban legend being shortened from
           | "friend of a friend of a friend" and so the rumor always gets
           | passed on as "friend of a friend" no matter how many links
           | there are in the chain.
           | 
           | Did you notice that you shortened it from "Minaj claimed a
           | friend of a cousin..." to "Minaj claimed a friend...?" Hey,
           | you're human too just like all of us. We suck at passing
           | along rumors.
        
             | mc32 wrote:
              | Let's say it's an unsubstantiated rumor (she said it was
              | her (not 'a') cousin's friend). What, every medical rumor
              | gets knocked out? Only the ones they don't like? What
              | makes them the right ones to decide?
              | 
              | It's no different from the Wuhan lab escape theory, which
              | was banned because... I dunno, some Republican happened to
              | like it? Meanwhile lots of virologists believed it should
              | at least be investigated. But no, originally only racists
              | could consider it as a possibility.
             | 
             | No, Twitter can eff off.
        
           | SSilver2k2 wrote:
           | Twitter can suspend anyone and anything for any reason. You
            | play on someone else's server, you play by their terms.
           | 
           | Nothing is stopping Nikki from hosting her own website.
        
             | throw_m239339 wrote:
             | > Twitter can suspend anyone and anything for any reason.
              | You play on someone else's server, you play by their terms.
             | 
              | Yes, but then why should there be laws that shield these
              | services from any legal responsibility for what their
              | users post? The government shouldn't need to protect these
              | businesses and let them off the hook because of the scale
              | of the moderation. It goes both ways: private companies
              | can accept whatever they want, but the government doesn't
              | have to protect them either; they are private companies,
              | after all. They should bear all the risk.
        
               | [deleted]
        
               | renewiltord wrote:
               | Why does it go both ways? The symmetry isn't obvious. I
               | can kick you out of my house for whatever reason but I'm
               | not automatically complicit if I let you visit and you
               | pull out a rifle and shoot someone out the window when I
               | wasn't looking. That is right. And it should be that way.
        
             | mc32 wrote:
             | Sure. An individual can't count on a message being carried,
             | yeah we know, it's a private company and all...
             | 
              | But... the concerted effort to censor anything, including
              | the truth, if it doesn't fit a particular narrative should
              | concern people.
              | 
              | What if Twitter decided, hey, they wanna be on the side of
              | the police, and now anyone reporting anything that goes
              | against the police narrative gets banned, true or made up.
              | Does that sound okay?
              | 
              | It's effed up if people think that that's okay because they
              | are a private company and it's their platform...
        
         | [deleted]
        
       | SteveGerencser wrote:
       | I'm surprised it's taken this long for them to get somewhat
       | serious about this. I've worked with clients who have hate
       | brigades that report everything they do and publish in an effort
       | to get their pages and ad accounts shut down. I've gone as far as
        | finding and infiltrating Facebook groups that openly talk about
       | this while planning their next attack. Reported the groups for
       | months and no one at FB cared.
        
         | Syonyk wrote:
         | > _Reported the groups for months and no one at FB cared._
         | 
         | Analysis: Engaged, active users. Encourage them to engage more!
        
         | dylan604 wrote:
          | Maybe try adding a letter to who you are contacting (FBI) to
          | see if it gets the response you are hoping for.
        
           | ceejayoz wrote:
           | If the desired response is "Huh?", sure.
           | 
           | The FBI is not going to intervene in Facebook brigading.
        
             | dylan604 wrote:
              | Yeah, I read too much into this, thinking that it wasn't
              | just misinformation campaigns but the other violent groups
              | posting their vitriol as well.
        
           | beebmam wrote:
           | The truth is that tech companies don't give a shit about
           | working with law enforcement and actively try to avoid ever
           | dealing with them at all.
           | 
            | The ONLY time a tech company works with police is when they
            | are obligated to by law. It's actually quite a shame; I
            | think it's actively harming our society. Laws without
            | enforcement are worthless.
        
             | JasonFruit wrote:
             | The only time a tech company should work with police is
             | when they're required to by law, and not always even then.
             | What are we coming to when we side with police before even
             | knowing what law they're enforcing?
        
         | btilly wrote:
         | Without knowing who your clients are, or why they are being
         | targeted, I have no idea whose side I would be on.
         | 
          | For example, what you are saying could be said by Cassie at
          | Crowdsurf against the #FreeBritney crowd. But given that she's
          | helping the public image of an unconstitutional and abusive
          | conservatorship, I don't WANT FB to do what she wants.
         | 
         | However it could also be said by someone representing the
         | latest person who has been canceled for failing to be
         | sufficiently politically correct. And in that case I'd be
         | sympathetic.
        
           | dudus wrote:
            | That's the problem with corporations deciding what is right
            | and what is wrong, or what is considered hate speech.
            | 
            | Most people would agree Nazis are bad (except for the Nazis
            | themselves); then you have anti-vax, in my opinion bad, but
            | clearly not an opinion shared by a large group of people.
            | 
            | We'll live in a world where free speech is controlled by tech
            | corps, which is all fine and dandy when their views align
            | with yours but horrific when they don't. The other option
            | would be to not have anyone policing speech, which has shown
            | itself to be problematic as well.
            | 
            | The legal system needs to step up and own online speech
            | monitoring and removals.
        
             | _jal wrote:
              | > The legal system needs to step up and own online speech
              | monitoring and removals.
             | 
             | (I'm guessing you're US-based.) So instead of many poor
             | systems trying to do better, we'll end up with one uniform
             | system working poorly, that just became a nuclear-hot
             | political target implicating the 1st Amendment.
             | 
             | The only arguably positive thing that does is relieve
             | pressure on the platforms, while not allowing any
             | behavioral diversity and making any future moderation
             | adjustments a massive culture-war fight.
        
               | dudus wrote:
                | It puts the burden of judging what is or isn't protected
                | free speech or hate speech in the hands of actual judges
                | who are appointed for exactly that purpose.
        
             | JasonFruit wrote:
              | You were doing okay up to that last disastrous couple of
              | sentences. The legal system most emphatically should not
             | engage in monitoring and removal of speech. It's bad enough
             | when private companies do it, and that should be opposed as
             | strongly as possible -- but government shouldn't begin to
             | touch the decision of what opinions are and are not
             | allowable.
        
               | dudus wrote:
                | What's a good solution, then? Who should be the judge of
                | right vs. wrong, if not actual judges?
                | 
                | Or do you think no monitoring is the best scenario, even
                | with evidence of how speech has been weaponized on social
                | media to push for conflict?
        
           | mc32 wrote:
            | I think not knowing is a better framework from which to work
            | out a neutral way to address misdeeds. We should not take
            | sides and should be impartial when making these decisions
            | (goose/gander).
        
             | vladTheInhaler wrote:
              | That's not true at all. Context is the _only_ way to make
              | any useful judgements. A couple of examples:
             | 
             | Person A tackles person B to the ground and holds them
             | there against their will. Is that morally acceptable?
             | There's no way to tell.
             | 
             | If A is B's estranged ex-husband and is upset that she
             | hasn't returned his calls, most people would say it's
             | unacceptable behavior. If A is a bystander to a knife
             | attack by B on London bridge, most people would (and did)
             | say that it is justified.
        
               | TeMPOraL wrote:
                | In your first case: it is later revealed that the reason
                | for A's action was B making threats of violence towards
                | their kid and being about to carry them out.
               | 
               | The problem with third parties passing judgement is that
               | they often lack _full_ context.
        
               | vladTheInhaler wrote:
               | I completely agree. Outside of these toy examples, there
               | really is no way to know the complete truth. But I don't
               | think we should give up on doing _the best we can_ to
               | recover as much context as possible, and we certainly
                | shouldn't fall into some sort of epistemic learned
               | helplessness and try to make all judgements from a
               | position of zero knowledge.
        
               | mc32 wrote:
                | Yes, we can look at particulars when applying the law to
                | a particular case.
                | 
                | When we make our laws they should be impartial; we should
                | not put a thumb on the scale when writing them.
        
               | vladTheInhaler wrote:
               | I'm not sure what you think a law would look like that is
               | completely impartial. The very _existence_ of a law
               | represents a thumb on the scales of otherwise
               | unconstrained human behavior. What specifically should we
               | make laws impartial _to_?
        
               | mc32 wrote:
               | We don't have to consider ridiculous extremes. We need to
               | consider our philosophy, mores and ethics to inform laws.
               | 
               | We don't say, it's illegal to commit theft, well, unless
               | you're the government or the judge, then it's okay
               | because we know you must have good intentions.
        
               | vladTheInhaler wrote:
               | Okay, so we should not give exemptions to specific
               | categories of people who are a priori assumed to be good.
               | I think that's fair. Are you concerned that that's the
               | situation in the original context? Or would be, if we
               | somehow knew who the original commenter was talking
               | about?
        
               | mc32 wrote:
               | It would be healthier to not know the identity to avoid
               | introducing unnecessary bias in the decision.
               | 
               | It shouldn't be like: oh it was Joe the grocer, yeah he's
               | okay, let 'im go. Vs, oh it was Ernie the latrine digger,
               | he always makes my skin crawl; throw the book at him!
        
               | vladTheInhaler wrote:
               | So if the hate brigades were being launched by a group
               | with a long track record of bad-faith and abusive
               | behavior, you don't think that should inform your
               | decision making?
        
               | mc32 wrote:
                | Why not make a rule to address all brigading? Why target
                | it only against groups you like or dislike? The groups
                | you like and dislike are not going to coincide with other
                | people's, so keep it consistent.
        
           | jjcon wrote:
            | > But given that she's helping the public image of an
            | unconstitutional and abusive conservatorship, I don't WANT
            | FB to do what she wants.
           | 
            | IMO if Facebook wants to be less toxic (no pun intended?)
            | they would want to quell these as well. As long as we are
            | algorithmically deciding what gets views, I'd prefer the
            | bias skew positive and not negative, no matter how justified.
        
         | elliekelly wrote:
         | When you say "planning their next attack" do you mean a literal
         | attack on your client's security or do you mean a coordinated
         | facebook brigading type of "attack"?
        
       | mschuster91 wrote:
       | Finally. The first ones that seem to be hit were German anti-
       | vaxxers ("Querdenken", per
       | https://www.sueddeutsche.de/service/internet-facebook-loesch...).
        
       | scohesc wrote:
       | People should not expect pleasant discourse on social media. You
       | have every single statement, opinion, or reasoning scrutinized by
        | robots or vastly underpaid contractors - none of whom understand
        | the nuances of conversation, like context, sarcasm, etc.
       | 
       | Social media is a cancer that's growing on society. We need to go
       | back to smaller online presences instead of a "global village" as
       | it were. Too many things to worry about in the world, and too
       | many companies want to sell you fear with advertising.
        
         | boplicity wrote:
         | One of the things about a small community is that there are
         | real social consequences for not staying within the confines of
         | polite behavior. People can get embarrassed, shunned, ignored,
         | etc. Their reputations are harmed, often leading to
         | consequences in terms of work, friends, family, etc.
         | 
         | On Facebook, people who are rude & impolite are rewarded with
         | more engagement. Real consequences are very rare.
         | 
         | It's a real problem.
        
         | mjr00 wrote:
         | > Social media is a cancer that's growing on society. We need
         | to go back to smaller online presences instead of a "global
         | village" as it were. Too many things to worry about in the
         | world, and too many companies want to sell you fear with
         | advertising.
         | 
         | I feel like this is a big part of why Discord has gotten hugely
         | popular. Communities are isolated and only semi-publicly
         | accessible; most critically, they're not indexed by Google. On
         | most servers, people are using aliases which are not (directly)
         | linked to their real-life identities, but they're still people
         | you get to know and befriend, unlike Reddit-likes where the
         | people commenting are mostly interchangeable. These things make
         | the internet feel a lot closer to how it was in the 90s and
         | early 00s, where you could talk freely with your friends
         | without worrying if someone with a grudge would take what you
         | wrote and turn it into a mob-driven character assassination.
        
       | TigeriusKirk wrote:
       | Facebook never should have started down this road. There's no
       | winning endgame for them here.
        
         | mcguire wrote:
         | What would a winning endgame look like?
        
           | Applejinx wrote:
           | Facebook getting to disavow everything they've been paid to
           | do, and accepting payment from a new set of bosses who turned
           | out to have more authority over them than their previous
           | paying customers had.
           | 
           | Basically, entering the witness protection program and
           | selling out those they've worked for in the past. Getting a
           | huge cash-out, reputation laundering, and going away to
           | preside over a dwindling number of hapless and heavily
           | surveilled 'users', reporting on their doings to the
           | authorities.
           | 
           | That is very much the winning endgame for Facebook. They
           | turned out not to be bigger than governments.
        
           | ManBlanket wrote:
           | Something like 1984, probably.
        
           | 10000truths wrote:
           | Depends on how you define 'winning', really. If 'growth at
           | all costs' is your metric, then sure, Facebook is winning. If
           | 'adherence to original vision' is your metric, then you could
           | argue that Facebook went astray a long time ago.
        
       | NoGravitas wrote:
       | "Harmful coordination by real accounts" appears to mean mass-
        | reporting and brigading. The truth is that because Facebook's
        | moderation is so unreliable, with innocent comments frequently
        | leading to bans while actual death threats and incitement to
        | violence are deemed "not in violation of our community
        | standards", mass-reporting and brigading are among the only
        | recourses real users have to get actually harmful content
        | removed. They're not reliable, of course, and they can be used
        | by all sides, but if Facebook is effective at this, we're
        | looking at a situation where Facebook only does something when
        | Zuckerberg is hauled in front of Congress. I'm glad I haven't
       | been on Facebook for many years now, but opting out isn't a
       | solution for everyone.
        
         | Y_Y wrote:
         | Is there anything to be said for leaving the harmful content
          | there? If the crapflood made people like facebook less, or get
          | more skeptical, or made the company get serious about not
          | shoveling manipulative garbage down people's psyches, then I'd
          | be pleased.
          | 
          | I know "deplatforming" seems like a good idea, and is effective
          | in the short term, but it strikes me as ultimately the wrong
          | level at which to attack anti-social behaviour on "social
          | networks".
        
           | colinmhayes wrote:
            | The crapflood just caused people to invade the Capitol
            | building and refuse to get vaccinated. If anything it made
            | them more engaged and less skeptical.
        
           | mcguire wrote:
           | " _If the crapflood made people like facebook less, or get
           | more skeptical, or made the company get serious about not
           | shoveling manipulative garbage down people psyches then I 'd
           | be pleased._"
           | 
           | I believe that experiment has already been run.
        
       | cblconfederate wrote:
        | So not even 'real identity' can save us? Heaven forbid, could it
       | be that the serious facade that facebook sold to advertisers is
       | collapsing?
        
       | boplicity wrote:
       | Facebook can't even handle extremely basic moderation and spam
       | fighting.
       | 
       | For example, I reported a post today that was a blatant attempt
       | to steal my log-in information. Facebook's response: "it doesn't
       | go against one of our specific Community Standards."
       | 
       | This was for a post that was impersonating Facebook.
       | 
       | I've heard very similar stories from many other people.
       | 
       | Obviously, Facebook just can't handle even the most basic
       | problems to do with moderation. There are so many problems on
       | their platform that go beyond the most obvious attempts at fraud
       | and scams. Yet, if they can't properly handle the most obvious
       | scams, how can we trust them to properly moderate anything at
       | all?
        
         | agolliver wrote:
         | At 2 am last night a bot impersonating a family member added
         | every one of their friends on facebook & sent them messages. I
         | reported it as a fake account, a ticket which was instantly
         | closed.
         | 
         | Facebook sent a notification later to them that the account was
         | not impersonating anyone, no action would be taken, and there
         | was no way to appeal.
         | 
         | Interestingly, messenger did splat a bunch of pretty good
         | warnings on top of the DM they sent me:
         | https://i.imgur.com/gigUA7G.png
        
         | georgeecollins wrote:
         | When Facebook talks about "real accounts" they mean only
         | accounts that are not very obviously fake in a way an algorithm
         | can detect. It's unquestionable that it is easy to set up a
         | fake account in FB and it always has been. If they tried
         | harder-- and they almost certainly will have to due to
         | political pressure-- it will still be pretty easy to find more
         | sophisticated ways to fake an account.
         | 
         | With an advertising model there are always going to be
         | incentives to fake accounts, and disincentives to FB to close
         | any account. The simple way to stop fake accounts would be to
         | introduce even a small cost to have an account which would make
         | faking accounts not cost effective. But that's not their model.
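          | 
          | A toy sketch of that incentive math (all numbers and names
          | here are invented, purely to illustrate the break-even point;
          | nothing reflects real spam economics):
          | 
          |   # Toy model: a fake account is worth creating only if its
          |   # expected return exceeds its creation cost.
          |   def fake_account_roi(revenue, creation_cost, ban_rate):
          |       # Expected value of one fake account that survives
          |       # moderation with probability (1 - ban_rate).
          |       return revenue * (1 - ban_rate) - creation_cost
          |   
          |   # Free accounts: any positive revenue makes spam profitable.
          |   print(fake_account_roi(0.05, 0.00, 0.5))  # 0.025: profitable
          |   # Even a $1 fee flips the math for low-value spam.
          |   print(fake_account_roi(0.05, 1.00, 0.5))  # -0.975: not worth it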
        
         | 2OEH8eoCRo0 wrote:
         | Are you kidding me? Of course they can but it's complicated.
         | They don't want to lose money. They don't want to be accused of
         | censorship for moderating. If they "moderate" certain people
         | too much they might bring regulation. Etc.
        
         | mftb wrote:
        | Regarding fake accounts, at the beginning of 2021 I saw this
        | story in a number of places -
        | https://finance.yahoo.com/news/facebook-disables-1-3-billion....
        | That in 2020 fb shut down 1.3 billion (with a b) fake accounts.
        | I can no longer find a good number for the total size of fb's
        | membership; I think around this time I saw it reported as 4
        | billion. Does anyone have such a number? If the four billion
        | number were accurate, then when this story ran fb was admitting
        | that something like 20-25% of the platform up until then was
        | fake (presumably 1.3B removed against roughly 4B remaining,
        | i.e. 1.3/5.3, or about 25%).
        
       | sayonaraman wrote:
       | This is great, Facebook should add "Two Minutes Hate" feature to
       | the user timeline to address the spread of misinformation by
       | these enemies of the people.
        
         | [deleted]
        
           | [deleted]
        
         | literallyaduck wrote:
         | We have always been at war with misinformation.
        
           | Applejinx wrote:
           | Facebook most certainly has not. You can get paid very
           | handsomely for facilitating misinformation that wouldn't
           | otherwise catch on without a lot of surreptitious advocacy,
           | preferably astroturfed. The cash value of this has everything
           | to do with how well you can get away with it. It's just
           | market dynamics in action.
           | 
           | They are simply running into externalities, that's all.
        
       | tomohawk wrote:
        | Perhaps they should target harmful coordination by themselves.
        
       | intended wrote:
       | From what I can tell, Facebook's internal research on mitigation
       | is leaning towards studying the connections people have to
        | determine the threat profile - not just individuals' posting
        | habits.
       | 
       | Can't say more, because it's a guess based on posts and job
       | requirements.
       | 
       | So front/standard moderation is outsourced and more advanced
       | threat detection is looking at large coordinated networks.
       | 
       | It's a huge pity that the state of the art in modern troll combat
       | is behind NDAs.
       | 
        | I'll also admit that, as poor as security through obscurity is,
        | it's a useful speed bump here.
       | 
       | It's kinda odd to contemplate how this space will evolve.
        
         | Applejinx wrote:
         | >From what I can tell, Facebook's internal research on
         | mitigation is leaning towards studying the connections people
          | have to determine the threat profile - not just individuals'
          | posting habits.
         | 
         | This is not unreasonable. I don't think for a second it's just
         | organic trolling that's causing them problems. They have taken
         | money from literal nation-states attempting to wage war by
         | informational means and crush their enemies. Genocides in small
         | countries doesn't begin to sum up the real threat profile here.
         | Facebook could bear responsibility for actively aiding efforts
         | to plunge the globe into hot war, if their paying customers got
         | ambitious and foolish enough.
         | 
         | Small wonder they're trying to get things under control, but
         | under control THEIR way. If they don't mitigate, they're liable
         | to end up in the Hague.
        
         | cstoner wrote:
         | > It's a huge pity that the state of the art in modern troll
         | combat is behind NDAs.
         | 
         | I mean... do _you_ have any ideas about how to fix this sort of
         | thing that would survive being published for the world to see?
         | 
         | I don't mean to offend you or anything, I just want to point
         | out that the second you go public with your rules about how
         | these things are detected, the people you are targeting will
         | adjust their behavior to evade them.
         | 
         | I think that the secrecy is unfortunately a hard requirement
         | until perhaps we could all coordinate into a global "behavioral
         | score" system. And to be honest, that sort of shit is
         | terrifying, so we should probably never do it.
        
           | intended wrote:
            | No, I fully agree, which is why I mentioned security through
            | obscurity.
           | 
           | However, I will make the case that something of this
           | magnitude should be available to the public.
        
           | 13415 wrote:
           | > _do _you_ have any ideas about how to fix this sort of
           | thing that would survive being published for the world to
           | see?_
           | 
           | I would try to come up with a rule-based system that
            | _actually_ detects the bad behavior they don't want to have.
           | Of course, people are then free to circumvent that system by
           | not behaving badly.
        
             | renewiltord wrote:
             | This is pretty much the equivalent of "I would try to make
             | a rocket that just lands on Mars instead of crashing". All
             | I gotta do is make it take off instead of not taking off.
        
           | mzs wrote:
           | Sure:
           | 
            |   Settings > Timeline
            |     X chronological
            |     X only what your friends share
        
             | DavidPeiffer wrote:
              | I'm curious how things look for people who are only friends
              | with people of their extreme political persuasion and only
              | like/follow similarly extreme pages. Does giving some
              | variety of relatively less extreme viewpoints cause a
              | reduction in extreme viewpoints?
        
         | ggggtez wrote:
         | There certainly is academic research that touches on these
         | topics. Honestly, I'm sure the academics are moving faster than
         | FB given that this has been a known problem for over a decade,
         | and has only gotten worse over time.
        
           | intended wrote:
           | I guarantee that academics have been fighting to get data
           | from FB, and are behind the curve.
           | 
           | I've read papers which specifically highlighted this lacuna.
        
           | mcguire wrote:
            | I've not been involved in this specific issue, but the
            | problem is that academic research tends to be limited unless
            | the researchers have access to, say, Facebook's internal
            | data. In which case they're under the NDAs.
        
         | bjt2n3904 wrote:
         | > [...] studying the connections people have to determine the
         | threat profile - not just the individuals posting habits.
         | 
         | Well that's just TERRIFYING. My account can get flagged... Not
         | because of what I say, but because of who I know?
         | 
         | Talk about a way to unperson someone. Make it so that even
         | associating with them causes the social graph to collapse.
        
           | GuB-42 wrote:
            | I think it is more about detecting highly connected groups
            | and finding what they have in common.
           | 
           | For example, if there is a group where each member connects
           | to most other members and nazism is the common denominator,
            | then it is identified as a nazi group. If you connect to a
           | large part of the group, then you are a potential nazi. If
           | you know someone in that group and most of your other
           | contacts are unrelated, you are probably not a nazi.
           | 
           | It is not that someone is marked as a supernazi and turns
           | everyone he touches into a nazi.
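            | 
            | A minimal sketch of that kind of group detection (purely
            | illustrative: the networkx-based approach, the graph, the
            | interest labels, and the thresholds are all assumptions,
            | not anything Facebook has described):
            | 
            |   # Flag dense friend groups that share one dominant theme.
            |   from collections import Counter
            |   import networkx as nx
            |   from networkx.algorithms.community import (
            |       greedy_modularity_communities,
            |   )
            |   
            |   def flag_dense_groups(G, interests, min_size=5,
            |                         min_density=0.5, min_share=0.8):
            |       """G: friendship graph; interests: user -> label."""
            |       flagged = []
            |       for group in greedy_modularity_communities(G):
            |           sub = G.subgraph(group)
            |           # Skip small or loosely connected communities.
            |           if len(sub) < min_size:
            |               continue
            |           if nx.density(sub) < min_density:
            |               continue
            |           # Find the community's dominant interest label.
            |           counts = Counter(interests[u] for u in sub)
            |           top, n = counts.most_common(1)[0]
            |           # Flag only if one theme dominates the group.
            |           if n / len(sub) >= min_share:
            |               flagged.append((set(sub), top))
            |       return flagged
            | 
            | Membership would then be a matter of degree: someone linked
            | to most of a flagged group scores high, while someone with
            | one contact there and an otherwise unrelated network scores
            | low, which matches the distinction above.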
        
           | daenz wrote:
           | User bjt2n3904 has expressed a strong negative reaction to
           | threat profiling. Increase threat profile score by .4
        
         | [deleted]
        
       | dudus wrote:
       | So Facebook will (in addition to policing free speech) police the
       | right of assembly. There's no way this can go wrong.
        
         | JasonFruit wrote:
         | So, it's true that the Constitution doesn't protect rights from
         | infringement by private businesses, but you're onto something
         | important. The rights of free speech and free assembly are
          | critical to a functioning republic, and their curtailment --
          | even legally, and by private corporations -- is doing great
         | harm to our tottering system.
         | 
         | I don't want government to force businesses to allow speech,
         | but we need to actively oppose private enterprises when they
         | limit speech, and support alternatives that embrace freedom.
        
         | void_mint wrote:
         | Websites have always moderated incoming content.
        
       | plumeria wrote:
       | The Google Play Store needs this as well...
        
       | yosito wrote:
       | On the one hand, I can see how using this to combat organized
       | misinformation could be a good thing. Combating organized vaccine
       | misinformation is the obvious current example. On the other hand,
       | it's a bit dystopian to have a platform used as a public square
       | that cracks down on organizers they don't like.
        
         | pyronik19 wrote:
          | I'm old enough to remember when stating COVID came out of a
          | Wuhan lab was "dangerous misinformation" that could get you
         | removed from social media. Not so much "misinformation"
         | anymore...
        
       | btilly wrote:
        | Here is one of the challenges with this space. What can be used
        | to suppress misinformation can also be used to suppress
        | information. Doing nothing has been weaponized. Doing something
        | will be as well. And we have plenty of examples of things which
        | have been ruled "harmful misinformation" but later turned out
        | not to be, with consequences for those who posted what later
        | turned out to be mainstream.
       | 
       | A few examples. There is a reasonable possibility that COVID-19
       | escaped from a lab. Vaccine passports for COVID-19 are likely to
       | become a thing. The conservatorship of Britney Spears is
       | unconstitutional, and has been a vehicle for grand larceny.
       | 
        | So how do we suppress harmful misinformation, such as that the
        | 2020 election was stolen through widespread fraud? And not
        | provide tools that can be misappropriated against truths which
        | are inconvenient to people with money and influence?
        
         | dredmorbius wrote:
         | It's almost as if algorithms and procedures aren't enough, that
         | values and judgements matter.
        
         | makomk wrote:
         | Any attempt to suppress "harmful misinformation" will be seen
         | as partisan because, well, it pretty much is. There was a huge
         | amount of utter bullshit pushed on social media about the 2016
         | election being stolen by fraud, including stuff that the author
         | obviously could not possibly have worked as described or been
         | used to steal the election, and a large proportion of the US
         | population literally believed Russians had hacked the voting
         | tallies. It obviously wasn't harmless either: someone
         | radicalized by Facebook went and shot a Republican senator, and
         | it was only through masses of luck and intrusive medical
         | interventions that he survived. Yet the only thing the
         | mainstream media showed an iota of concern for was the fact
         | that anyone objected to this.
         | 
         | Hell, even in the run-up to the 2020 elections the press were
         | pushing the narrative that it'd be impossible to know the
         | results weren't hacked and the election was valid - right up
         | until it became clear Trump had lost, at which point it became
         | absolutely certain they were valid and the audit chains and
         | processes so robust only a conspiracy nut would question them.
         | The same kinda happened in reverse in 2016.
        
           | btilly wrote:
           | I'm pretty sure that the "senator" you are naming was
           | Congressman Steve Scalise. I have not specifically heard that
           | the shooter, James Hodgkinson, believed in conspiracy
           | theories about a stolen 2016 election. But he may have.
           | 
           | However the big difference between partisan beliefs about the
           | two elections is this. After 2016, Democrats mostly believed
           | that they lost at the polls due to an effective
           | disinformation campaign run by the Russians. After 2020,
            | Republicans believed that they lost due to widespread vote
            | counting fraud. As
            | https://www.rollcall.com/2021/02/24/partisan-voters-claim-we...
            | says, Democrats and Republicans believe this by almost
           | exactly the same margins.
           | 
           | But _what_ is believed matters. Democrats may have been
           | furious, but they believed in rule of law. Radical
           | Republicans, by contrast, attempted to overturn the election
           | via an insurrection.
        
         | sixothree wrote:
         | Let's not pretend Facebook ever was or ever will be some
         | bastion for democratic values.
         | 
         | We shouldn't expect that not "suppressing information" is ever
         | in their best interest. Or that it would be something new for
         | them. It seems like fantasy to think in those terms.
        
         | pessimizer wrote:
          | > So how do we suppress harmful misinformation, such as that
          | the 2020 election was stolen through widespread fraud? And not
          | provide tools that can be misappropriated against truths which
          | are inconvenient to people with money and influence?
         | 
         | We recognize that the "we" you're referring to are the "people
         | with money and influence." "Harmful misinformation" from their
         | perspective is information that they dispute either factually
         | _or through implication or perspective_ that could harm them,
         | the people with money and influence.
         | 
          | > So how do we suppress harmful misinformation, such as that
          | the 2020 election was stolen through widespread fraud?
         | 
         | With clear and extensive explanations and transparency. The
         | people who insisted loudly that there was no widespread fraud
         | the moment the election ended in their favor were operating
         | with as little factual, auditable information as the people who
         | were insisting that there was. The people who were insisting on
         | a fraud narrative were of course consuming a lot more
         | misinformation, but both sides were pretending that they were
         | knowledgeable about something they weren't, at all.
        
           | btilly wrote:
           | First, the "we" that I was referring to was "we as a society"
           | and, more specifically, "we as technologists". As in, how can
            | we create technology and social norms that both encourage a
            | fact-based narrative and are resistant to political
            | manipulation.
           | 
           | Also to your specific example of the election, a lot of
            | people who spoke out fairly quickly against the fraud
            | claims were operating on the factual, auditable
            | information that both sides were presenting their data to
           | judges of various persuasions around the country, and the
           | judges were virtually unanimously concluding that there was
           | no case. And then various recounts began coming back,
           | likewise concluding that there was no fraud.
           | 
           | I don't know what standard of evidence you think people
           | should have spoken out at. But that seemed at the time to be
           | a reasonable level of evidence. And it still does. The judges
           | in this case were literally a collection of people chosen to
           | be trustworthy, with varying political alignments, who were
           | making informed decisions on the basis of more data than I
           | have, and consistently came to the same conclusion.
           | 
           | A similar kind of thing for fact checks would be a standard
           | that I could be comfortable with. But it has to be similar. A
           | collection of independent people. Chosen on the basis of
           | methodology. With different political alignments. And it is
           | only when they broadly agree that we are willing to impose
           | rules.
        
       | anderson1993 wrote:
        | We know all social media is hit with attempted manipulation; the
        | hard part is figuring out how and why. I doubt Facebook can
       | figure it out.
        
         | daenz wrote:
          | With the current logic that Facebook is using, "manipulation"
          | can mean any behavior that they don't endorse. For example,
          | someone is "manipulating" their network by posting anti-Biden
          | memes.
          | 
          | I recognize that I am extrapolating real group behavior to
          | real individual behavior, but I think that extrapolation is
          | warranted given that we've just made the jump from group bot
          | behavior to group people behavior.
        
         | elliekelly wrote:
         | I find it entirely implausible that facebook, with their vast
         | troves of data and an army of the very best computer scientists
         | money can buy, is incapable of reliably identifying posts and
         | users responsible for mass manipulation on their platform.
         | Unwilling? Sure. Unable? Not a chance.
        
         | yosito wrote:
         | It's a bit hard for them to figure it out when they're one of
         | the main culprits and depend on manipulation for profit.
        
       | drummer wrote:
       | Dumpsterfire is flaring up again.
        
       | slim wrote:
       | Facebook is threatening freedom of association
        
       | mwint wrote:
       | So they're going to use this to stop people spreading harmful
       | misinformation, like, say, the conspiracy theory that governments
       | would create vaccine passports, or that a virology lab in Wuhan
       | might have been involved in a virus that came out of Wuhan.
       | 
       | Or that masks are a good idea.
       | 
       | Dissent from the popular opinion is good and healthy, going both
       | ways.
        
         | shuntress wrote:
         | > Dissent from the popular opinion is good and healthy, going
         | both ways
         | 
         |  _" You gotta hear both sides"_ is extremely harmful when it is
         | used to equate massively unequal opinions.
         | 
          | Of course dissenting opinions should be heard and assessed,
          | but it is also obviously absurd to treat some opinion such as
          | "I think that we should vacuum the floors to prepare for
          | guests" with the same weight and validity as "I think we
          | should murder our guests when they arrive; that way we won't
          | have to worry about vacuuming"
        
           | [deleted]
        
           | silicon2401 wrote:
           | So is this counter-argument. Who decides what's absurd? Two
           | people may have completely different and legitimate views on
           | what's absurd.
        
           | colpabar wrote:
           | Make whatever strawman analogies you want, but the fact of
           | the matter is that massive online communication platforms
           | like facebook are increasingly censoring dissenting opinions.
           | One day, an opinion that you hold will become "dangerous",
           | and you'll have to choose between conformity or being
           | excluded from our increasingly digital society.
           | 
           | edit: why do we keep giving these companies the benefit of
           | the doubt when they continue to lie about _everything_?
        
             | shuntress wrote:
             | Don't try to change the subject. My comment is not about
             | facebook, censorship, corporate-run dystopias, or thought-
             | crime exile.
             | 
             | Move the goalposts all you want but the fact is that all
             | opinions are not equal in merit.
             | 
             | If you do actually want to have a good-faith discussion
             | about how to limit the reach of worse opinions while
             | increasing the reach of better opinions I would be happy to
             | hear your suggestions.
             | 
             | Or if you would prefer to discuss how to find agreement on
             | what we consider a "good" or a "bad" opinion, I would start
             | by offering that I think the opinion _" face masks do not
             | prevent the spread of airborne respiratory infections"_
             | should be considered significantly less reasonable than the
             | opinion _" face masks are effective at preventing the
             | spread of airborne respiratory infection"_.
        
               | vorpalhex wrote:
                | Our leaders at the start of 2020 explicitly said to not
                | buy or wear masks [1], in contrast to well-established
                | research supporting masks from the 2003 SARS epidemic.
               | 
               | People supporting masks were censored for misinformation.
               | 
               | So you would have been one of those censored at the time,
               | despite being backed by the evidence and correct.
               | 
               | What makes you think such mistakes won't happen again?
               | 
                | [1] - https://www.cnn.com/2020/02/29/health/face-masks-coronavirus...
        
               | ManBlanket wrote:
               | It's funny how we look at opinions as reasonable mostly
               | out of convenience to our own foregone conclusions these
               | days instead of statistics from which we derive evidence
               | that substantiates our conclusions. This guy likes masks,
               | he's one of the good ones. Not one of those baddies who
               | don't. Let's ignore the fact we have almost 2 years of
               | global data pertaining to mandates, transmission, and
               | death rates, and decide who we agree with based on which
               | tribe they hail from. Super reasonable.
        
               | colpabar wrote:
               | I don't think I'm changing the subject, I am asking you
               | to look at the bigger picture. Your comment may not have
               | been about facebook specifically, but we are in a thread
               | about a new facebook initiative regarding yet another
               | form of censorship. You list an opinion that you say has
               | more merit than another, and fine, let's say I agree. My
               | problem with looking at things through such a small lens
               | is that "merit" seems pretty subjective, and if we
               | continue to stand by as we let these tech companies
               | decide what merit means, one day they will go too far,
               | and it'll be too late.
               | 
               | Here's an example of two opinions that I think are
               | unequal. "the government has the right to confine people
               | who have not broken any laws to their homes" and "the
               | government does not have the right to confine people who
               | have not broken any laws to their homes." In Australia,
               | the government has decreed that the first opinion has
               | more merit than the second. Should facebook follow suit,
               | and censor anyone in australia who disagrees?
        
               | shuntress wrote:
                | > I am asking you to look at the bigger picture
                | 
                | > My problem with looking at things through such a small
                | lens is that "merit" seems pretty subjective
               | 
               | Ok, the bigger picture with a bigger lens is this: _How
               | do you slow the spread of harmful ideas?_
               | 
               | You agree that some ideas are "better" than others. I
               | think you would also agree that there is no simple
               | definition over what "better" exactly means. It's complex
               | and often elicits complex discussion.
               | 
               | My point, that you are trying again to skip over, is that
               | presenting any idea as if it is inherently equal in merit
               | to any other idea is fundamentally bad. To be specific, I
               | think this because I believe that good ideas will
               | eventually prove themselves out over time (even if they
               | spread very slowly) while bad ideas will tend to rely on
               | rapid spread to reach critical mass before they are
               | disproven.
        
               | jmaygarden wrote:
               | The post that is now flagged was referring to Dr. Fauci's
               | March 8, 2020 statement that "there's no reason to be
               | walking around with a mask." Dr. Fauci made that
               | statement in a context of trying to ensure that enough
               | protective equipment was available for frontline health
               | workers at a time when there were runs on toilet paper in
               | stores.
               | 
               | I believe you are mischaracterizing the argument that was
               | made. Unfortunately, we may no longer view the original
               | post because your opinion has apparently been deemed more
               | correct.
        
               | mcguire wrote:
               | Ok, apparently this is the hill I'm going to die on.
               | 
               | Up through roughly April-May 2020, many, if not most,
               | epidemiologists and virologists believed that masks would
               | not help the situation: they thought respiratory viruses
               | were spread through large droplets produced by
               | symptomatic individuals and that physical separation,
               | sanitation, and behavior would work as well as trying to
                | convince people to wear useful masks consistently and
               | correctly. (Earlier today, I walked past a woman wearing
               | a bandana tied around her head. Below her nose. Why!?)
               | 
               | After that time, reports began to appear showing
               | coronavirus could be spread asymptomatically, by normal
               | breathing and speech, in an aerosol form that could stay
               | airborne for long times. Under those situations, masks
               | are the only solution.
               | 
               | The "ensure that enough protective equipment was
               | available for frontline health workers" thing was mostly
               | a response to "but it couldn't hurt" thinking.
               | 
               | "Then there is the infamous mask issue. Epidemiologists
               | have taken a lot of heat on this question in particular.
               | Until well into March 2020, I was skeptical about the
               | benefit of everyone wearing face masks. That skepticism
               | was based on previous scientific research as well as
               | hypotheses about how covid was transmitted that turned
               | out to be wrong. Mask-wearing has been a common practice
               | in Asia for decades, to protect against air pollution and
               | to prevent transmitting infection to others when sick.
               | Mask-wearing for protection against catching an infection
               | became widespread in Asia following the 2003 SARS
               | outbreak, but scientific evidence on the effectiveness of
               | this strategy was limited.
               | 
               | "Before the coronavirus pandemic, most research on face
               | masks for respiratory diseases came from two types of
               | studies: clinical settings with very sick patients, and
               | community settings during normal flu seasons. In clinical
               | settings, it was clear that well-fitting, high-quality
               | face masks, such as the N95 variety, were important
               | protective equipment for doctors and nurses against
               | viruses that can be transmitted via droplets or smaller
               | aerosol particles. But these studies also suggested
               | careful training was required to ensure that masks didn't
               | get contaminated when surface transmission was possible,
               | as is the case with SARS. Community-level evidence about
               | mask-wearing was much less compelling. Most studies
               | showed little to no benefit to mask-wearing in the case
               | of the flu, for instance. Studies that have suggested a
               | benefit of mask-wearing were generally those in which
               | people with symptoms wore masks -- so that was the advice
               | I embraced for the coronavirus, too.
               | 
               | "I also, like many other epidemiologists, overestimated
               | how readily the novel coronavirus would spread on
               | surfaces -- and this affected our view of masks. Early
               | data showed that, like SARS, the coronavirus could
               | persist on surfaces for hours to days, and so I was
               | initially concerned that face masks, especially ill-
               | fitting, homemade or carelessly worn coverings could
               | become contaminated with transmissible virus. In fact, I
               | worried that this might mean wearing face masks could be
               | worse than not wearing them. This was wrong. Surface
               | transmission, it emerged, is not that big a problem for
               | covid, but transmission through air via aerosols is a big
               | source of transmission. And so it turns out that face
               | masks do work in this case.
               | 
               | "I changed my mind on masks in March 2020, as testing
               | capacity increased and it became clear how common
               | asymptomatic and pre-symptomatic infection were (since
               | aerosols were the likely vector). I wish that I and
               | others had caught on sooner -- and better testing early
               | on might have caused an earlier revision of views -- but
               | there was no bad faith involved."
               | 
               | "I'm an epidemiologist. Here's what I got wrong about cov
               | id."(https://www.washingtonpost.com/outlook/2021/04/20/ep
               | idemiolo...)
        
               | jmaygarden wrote:
               | Fauci himself told The Washington Post that mask supply
               | was a motive back in July 2020. So, it was a combination
               | of two factors as you rightly point out. Thank you for
               | correcting my omission.
               | 
               | "We didn't realize the extent of asymptotic spread...what
               | happened as the weeks and months came by, two things
               | became clear: one, that there wasn't a shortage of masks,
               | we had plenty of masks and coverings that you could put
               | on that's plain cloth...so that took care of that
               | problem. Secondly, we fully realized that there are a lot
               | of people who are asymptomatic who are spreading
               | infection. So it became clear that we absolutely should
               | be wearing masks consistently."
               | 
                | https://www.washingtonpost.com/video/washington-post-live/fa...
        
             | mcguire wrote:
             | I hold many dangerous ideas. I also am not terribly shy
             | about spreading them.
             | 
             | But if this:
             | 
              | https://i.guim.co.uk/img/media/e09ac35bd7596e18cb21562bcb4b0...
             | 
             | ever becomes my chosen way of doing so, I can only _hope_
             | someone censors me. And takes me to a nice, comfortable
             | assisted living facility where I cannot hurt myself or
             | others.
             | 
             | P.s. Ever heard of the Gish Gallop?
        
       ___________________________________________________________________
       (page generated 2021-09-16 23:00 UTC)