[HN Gopher] Birdwatch, a community-based approach to misinformation
       ___________________________________________________________________
        
       Birdwatch, a community-based approach to misinformation
        
       Author : razin
       Score  : 65 points
       Date   : 2021-01-25 18:20 UTC (4 hours ago)
        
 (HTM) web link (blog.twitter.com)
 (TXT) w3m dump (blog.twitter.com)
        
       | hezag wrote:
       | > _[...] we're designing Birdwatch to encourage contributions
       | from people with diverse perspectives, and to reward
       | contributions that are found helpful by a wide range of people._
       | 
       | > _For example, rather than ranking and selecting top notes by a
       | simple majority vote, Birdwatch can consider how diverse a note's
       | set of ratings is and determine whether additional inputs are
       | needed before a consensus is reached. Additionally, Birdwatch can
       | proactively seek ratings from contributors who are likely to
       | provide a different perspective based on their previous ratings._
       | 
       | > _Further, we plan for Birdwatch to have a reputation system in
       | which one earns reputation for contributions that people from a
       | wide range of perspectives find helpful._
       | 
       | https://twitter.github.io/birdwatch/about/challenges/
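        | 
        | Not Birdwatch's actual algorithm (their initial ranking system is
        | published separately), but a rough Python sketch of the idea in
        | that quote: score a note so that agreement across distinct
        | viewpoint clusters counts for more than volume from any single
        | cluster. The clustering and field names here are hypothetical.
        | 
        |     from collections import defaultdict
        | 
        |     def note_score(ratings):
        |         # ratings: list of (viewpoint_cluster, found_helpful) pairs
        |         by_cluster = defaultdict(list)
        |         for cluster, helpful in ratings:
        |             by_cluster[cluster].append(1.0 if helpful else 0.0)
        |         if not by_cluster:
        |             return 0.0
        |         # Average within each cluster, then across clusters, so no
        |         # single bloc can dominate by sheer numbers; a breadth
        |         # factor rewards notes rated by many clusters.
        |         per_cluster = [sum(v) / len(v) for v in by_cluster.values()]
        |         breadth = len(per_cluster) / (len(per_cluster) + 1)
        |         return breadth * sum(per_cluster) / len(per_cluster)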
        
       | [deleted]
        
       | Pfhreak wrote:
       | I'm curious what mechanisms will be in place, and how effective
       | they will be, at preventing dogpiling on people using this
       | system.
       | 
       | As a specific example, as a nonbinary person, I'm constantly
       | running into people online who tell me there are only two
       | genders, or that singular they is some sort of new concept. My
       | concern with a consensus system is that it will be used to shut
       | down people like me (or trans folk, or socialists, etc. etc.
       | etc.)
       | 
       | How do you build a consensus system that protects minority
       | persons and also weeds out misinformation?
       | 
       | Edit: Downvotes for... being non-binary I guess?
        
         | cwkoss wrote:
         | That's a great example of how this could go wrong. I hope the
         | folks at Twitter are keeping examples like yours in mind as
         | they develop this.
        
       | whymauri wrote:
       | Reminds me of the Overwatch moderation system used by Valve to
       | collect cheater data from human evaluators in Counter-Strike.
       | Eventually, they leveraged that data to improve their automated
       | cheat detection systems. [0]
       | 
       | Although Twitter's problem is way harder, IMO.
       | 
       | [0] https://www.youtube.com/watch?v=ObhK8lUfIlc
        
         | 5AMUR41 wrote:
         | This was actually a genius system on Valve's part. I think what
         | makes these two systems different are a few things:
         | 
          | -Overwatch at least had some method of selecting people who
          | "knew about the game." People who had never ranked
          | competitively in CSGO couldn't rule on whether someone was
          | cheating, whereas the hope for this new Twitter feature seems
          | to be that anyone can "fact-check" a tweet. Even if evidence is
          | required, it is beyond easy to find secondary sources that skew
          | facts or statistics into a different connotation.
         | 
          | -Whether or not someone is cheating is pretty binary. They
          | either are or they aren't. One of my biggest concerns for
          | Birdwatch is that it will likely be used on tweets that aren't
          | binary statements of "right/wrong" facts, and will likely be
          | used in mass reports against those of other political stances.
          | That, or shitposting.
        
       | thimkerbell wrote:
       | Muddying the discourse is next-generation trolling.
        
       | neartheplain wrote:
       | How will Twitter prevent the sort of organized brigading and
        | domination by the always-online crowd which plague such
       | crowdsourced review programs?
        
         | kyleblarson wrote:
         | They won't prevent it. This is clearly by design on Twitter's
         | part given that the always-online bluecheck crowd leans so far
         | left.
        
         | manojlds wrote:
          | I liked some blockchain / crypto-based ideas people were having
          | - like you have to use crypto to even say something is fake or
          | not. A journalist has to "pledge" crypto that a story is not
          | fake, and the smart contract releases the amount back based on
          | certain evaluations, like whether the post was voted fake or
          | not. (I am sure as described there will be holes, but I like
          | the direction it was heading.)
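          | 
          | To make the mechanism concrete, here is a toy sketch in Python
          | (the class, threshold, and payout rule are invented for
          | illustration, not taken from any real system):
          | 
          |     class FactPledge:
          |         # Toy escrow: a poster stakes an amount behind a claim,
          |         # raters vote, and the stake is returned or slashed
          |         # depending on the vote share.
          |         def __init__(self, poster, stake):
          |             self.poster = poster
          |             self.stake = stake
          |             self.votes = []  # True = "not fake", False = "fake"
          | 
          |         def vote(self, rater, not_fake):
          |             self.votes.append((rater, not_fake))
          | 
          |         def settle(self, threshold=0.66):
          |             if not self.votes:
          |                 return self.stake  # no verdict, stake returned
          |             support = sum(1 for _, ok in self.votes if ok)
          |             if support / len(self.votes) >= threshold:
          |                 return self.stake  # claim upheld
          |             return 0  # stake slashed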
        
           | jethro_tell wrote:
           | Lol, now you have two problems.
        
           | fwip wrote:
           | That just becomes "rich people decide."
        
             | qertoip wrote:
             | Not good indeed. But better than "poor people decide".
        
               | fwip wrote:
               | Why?
        
         | Pfhreak wrote:
         | This is the central question. If someone isn't asking, "What
         | would a malicious user do with this system?" and inserting
         | their favorite internet bogeymen as a thought experiment, they
         | haven't finished their design.
         | 
         | This is doubly true for safety systems. I don't see much of the
         | press release focused on how abuse of the safety system will be
         | avoided beyond, "We'll eventually let it be reviewed by outside
         | parties."
        
           | ourlordcaffeine wrote:
            | I'm pretty sure this project will just turn into opposing
            | groups marking anything they disagree with as disinformation,
            | with the bigger group winning control of the narrative.
            | 
            | What's to stop a flood of Russian accounts from gaining
            | control of what's marked as disinformation?
        
       | olah_1 wrote:
       | I just want to point out that Minds[1] has a pretty clever
       | content moderation policy that involves an appeal process with
       | randomly selected users of the platform making blind judgements.
       | 
       | I haven't been part of the process myself, nor have I used the
       | platform yet at all. But this feature sounds quite good in
       | theory.
       | 
       | [1]: https://www.minds.com/content-policy
        
       | herewegoagain2 wrote:
       | Did anybody even ask for fact checking notes by Twitter? I don't
       | think so.
       | 
        | Whatever the Twitter fact-checking note says, you should still
        | not take any information from the internet at face value.
        
       | markmiro wrote:
       | A lot of people are wondering how this will stop misinformation.
       | I agree that we can't crowdsource truth. But we can crowdsource
       | information that can help reduce misinformation. When you have
       | two sides disagreeing the first step is to build some common
       | ground.
       | 
       | Twitter is trying to solve a tough problem. On one hand you've
       | got people accusing Twitter of hosting and platforming hateful,
       | harmful content. On the other hand you have people claiming that
       | Twitter is calling the shots about what's true and suppressing
       | information it doesn't like.
       | 
       | Maybe this is the first step towards something like a digital
       | court. People on both sides present evidence, experts, witnesses.
       | The two sides get a hand in picking the jury.
       | 
       | Or maybe the solvable problem is that information gets
       | misconstrued and propagated. A video clip might get edited a
       | certain way, for example. Solving this problem may not help us
       | all agree on what happened in the video clip. However, we should
       | at least be able to agree on what the two interpretations are. To
        | make this happen, both sides would have to steel man the other
        | side. Otherwise, the opposing side would claim they're being
        | misrepresented. Having things that opposing sides agree upon
        | would greatly help reduce unnecessary conflict.
        
       | anewaccount2021 wrote:
       | This will be a kangaroo court used to blunt-force-trauma a select
       | subset of strawmen and deplorables off of the service. In the
       | end, does anyone really care if Twitter is "truthful"?
        
         | 5AMUR41 wrote:
         | > In the end, does anyone really care if Twitter is "truthful"?
         | 
         | As it's becoming an ever-increasing method of communication
         | between government officials and whatnot, I'd say some amount
         | of "truthfulness" and "validity" is warranted. The other day a
         | member of the US government (I think it was Ted Cruz? I can't
         | find it at the moment) wrote a tweet that implied he thought
         | the Paris Climate agreement only affected the citizens of
          | Paris. Something as factually "right/wrong" as that should be
          | able to be contested and held accountable as "wrong."
         | 
         | That being said, I agree with you that this will not be used as
         | intended and will undoubtedly lead to more confusion and
         | confirmation bias from all sides. Using crowd-sourced knowledge
         | in this manner leads to the assumption that "what the majority
         | thinks is right is factual," and I'm reminded it was once
         | "fact" that the sun rotated around the Earth
        
       | szhu wrote:
        | There's a lot of reason to doubt that this will work. But one
        | thing that makes me hopeful it might actually work is that
        | Wikipedia's "Talk" pages appear to serve a similar purpose, and
        | they serve that purpose adequately.
        
       | salmonellaeater wrote:
       | > As we develop algorithms that power Birdwatch -- such as
       | reputation and consensus systems
       | 
       | Consensus is the enemy of understanding. For topics where there
       | is conflicting or poor evidence or that bear on the culture war,
       | I do not want people voting on what is the consensus truth. I
       | want to see all the evidence. We have enough problems with
       | researchers not publishing uncomfortable data; I don't want the
       | little that exists flagged because it conflicts with the average
       | Twitter user's sensibilities.
        
         | x86_64Ubuntu wrote:
         | What kind of uncomfortable data are you talking about that
         | isn't being published?
        
         | Bedon292 wrote:
          | Maybe I am interpreting it wrong, but this seems like it
          | provides what you want. It lets people attach whatever evidence
          | is relevant in a response to the tweet, and those responses are
          | then rated on whether they are helpful or not. It's certainly
          | not perfect, and probably going to be subject to brigading if
          | people don't like the evidence provided, though.
        
         | ardy42 wrote:
         | > Consensus is the enemy of understanding. For topics where
         | there is conflicting or poor evidence or that bear on the
         | culture war, I do not want people voting on what is the
         | consensus truth. I want to see all the evidence.
         | 
         | That's a nice sentiment in a vacuum, but we're not in a vacuum.
         | It's sort of like focusing on getting every child a college
         | education when you can't even manage to teach them all literacy
         | yet.
         | 
          | At this point, I feel like the priority needs to be figuring
          | out a way to defeat disinformation and misinformation
         | to restore some kind of common ground understanding of the
         | facts. The marketplace of ideas can't function if major
         | factions reject truth-seeking in favor of blatant lies that are
         | appealing for various other reasons (e.g. emotional
         | satisfaction, ability to manipulate others, etc.).
        
           | offby37years wrote:
           | Groups search for consensus, individuals for truth. Without
           | consensus groups fracture. Without truth we're destined for
           | the next dark age.
        
           | vorpalhex wrote:
           | We can all agree that chocolate is the best form of ice
           | cream. Those who would disagree are obviously trolls
           | spreading disinformation. The marketplace of ice cream can't
           | function if we have other flavors like pistachio or
            | strawberry being labeled "best" instead of the only truly
           | best ice cream. Obviously these strawberry promoters are just
           | manipulating other gullible ice cream consumers for their own
           | ends.
        
             | ardy42 wrote:
             | > We can all agree that chocolate is the best form of ice
             | cream. Those who would disagree are obviously trolls
             | spreading disinformation. The marketplace of ice cream
             | can't function if we have other flavors like pistachio or
              | strawberry being labeled "best" instead of the only truly
             | best ice cream. Obviously these strawberry promoters are
             | just manipulating other gullible ice cream consumers for
             | their own ends.
             | 
             | You're confusing preferences and facts. I was talking about
             | facts [1], you're talking about preferences.
             | 
             | [1] like Donald Trump is not waging a secret war to defeat
             | a conspiracy of Satan-worshiping pedophiles who harvest a
             | (fictitious) drug from trafficked children and Trump lost
             | the 2020 election fairly.
        
           | bnralt wrote:
           | > The marketplace of ideas can't function if major factions
           | reject truth-seeking in favor of blatant lies that are
           | appealing for various other reasons (e.g. emotional
           | satisfaction, ability to manipulate others, etc.).
           | 
           | The problem always becomes that few think their faction has
           | this problem. Most people think that they and their side
           | believe in the truth, and that the other side is peddling
           | false information. In such an environment, "combating
           | misinformation" inevitably becomes "trying to get everyone to
           | agree with me."
           | 
           | In order to combat misinformation we'd probably have to start
           | by combating the emotional commitment we hold to a certain
           | narrative before we even examine all of the facts. That's not
           | easy to do, but I imagine a good first step would be to cut
           | out the constant 24/7 news cycle that a lot of people are
           | addicted to and that seems to feed into these emotions.
           | Twitter, of course, is a large source of this stuff.
           | 
           | Getting away from the news is an excellent way to start
           | thinking more clearly.
        
             | ardy42 wrote:
             | > The problem always becomes that few think their faction
             | has this problem. Most people think that they and their
             | side believe in the truth, and that the other side is
             | peddling false information. In such an environment,
             | "combating misinformation" inevitably becomes "trying to
             | get everyone to agree with me."
             | 
             | The problem with your line of thinking is that it assumes
             | good faith and a certain amount of competence, which are
             | assumptions that I don't think we can reasonably make
             | anymore in light of the insane success of things like QAnon
             | and "Stop the Steal."
        
           | nailer wrote:
           | No. Truth does not flow from consensus like a college degree
           | follows from literacy.
        
       | username3 wrote:
        | We need to see when someone on one side disagrees with their
        | own side. Echo chambers drown out the voices of dissenters.
       | 
       | We need to see when one side ignores the other side. We need a
       | list of unanswered questions to hold every side accountable.
        
         | freeone3000 wrote:
         | Every side? "The other side"?
         | 
         | Like you're going to hold the Leninists accountable for their
         | unanswered claims against the Breadtubers for today's Voash
         | drama?
         | 
         | Or intervene with a fact check when someone says the 8052 is UV
         | reprogrammable when only the modern clones are?
         | 
         | Or for when the guy who talks about Magic the Gathering and was
         | in Mulan makes an offhand comment about the new Harry Potter
         | series and literally everyone gets upset?
         | 
         | How are you even dividing sides?
        
       | colllectorof wrote:
       | Nearly all attempts to "fix misinformation" on social media I've
       | seen in the last several years ranged from hopelessly clueless to
       | downright sinister.
       | 
       |  _" Birdwatch allows people to identify information in Tweets
       | they believe is misleading and write notes that provide
       | informative context."_
       | 
       | You're not going to fix anything by attacking the symptoms, which
       | is exactly what this seems to propose. To fix the actual problem
       | we need to create systems that generate and propagate trustworthy
       | information, which people actually _want_ to consume rather than
        | attacking information _you_ don't want people to consume.
       | 
       |  _" we have conducted more than 100 qualitative interviews with
       | individuals across the political spectrum who use Twitter"_
       | 
       | There is already a selection bias in play then, because there are
       | large numbers of people who don't use Twitter for various
       | reasons.
        
         | throwaway894345 wrote:
         | I agree with your characterization about attempts to fix
         | misinformation. However, I don't think "what people actually
          | _want_ to consume" is a good signal for "trustworthy
          | content"--if the social media era has shown us anything, it's
         | that (barring a narrow band of critical, independent thinkers)
         | we have an insatiable appetite for any content that reaffirms
         | our tribalist priors, no matter how patently absurd the content
         | may be.
         | 
         | To the extent that Twitter's goal is to combat post-truth-ism
         | (as opposed to propagating it by way of its current business
         | model), I think it pretty much has to develop its own strict
         | code of ethics which prizes honesty, integrity, neutrality, and
         | objectivity like the academics and journalists of yore.
         | Specifically, we need a media landscape (including social
         | media) that once again rewards both sides for bringing their
         | best arguments, rather than dual partisan media outlets.
         | 
         | There's a very popular straw man counter-argument which is that
         | I'm assuming that abhorrent racist ideas should be given the
         | same attention as, I don't know, climate science, but that's
         | not the case at all. Rather, I'm arguing that a neutralist and
         | objective framework would discourage abhorrent racist ideas and
         | encourage more respectable conservative intellectualism instead
         | of the status quo which is to regard anything to the right of
         | the far-left as uniformly vile (I'm oversimplifying a bit for
         | sake of brevity). Similarly, such a platform wouldn't regard
         | climate science and blank slatism as uniformly virtuous. The
         | best left-wing thought would face-off against the best right-
         | wing thought, and as a moderate liberal, I think the best left-
         | wing thought will win.
         | 
         | Importantly, this pulls everyone toward more fact-based
         | positions, which tends to have a moderating effect.
         | Unfortunately, the corollary is that this presents a political
         | obstacle--those who are deeply committed to these kinds of
         | post-truth ideologies tend to vigorously oppose such reforms.
        
       | dang wrote:
       | There's another active thread here:
       | https://news.ycombinator.com/item?id=25908439. Not sure whether
       | to merge them.
        
       | peanut_worm wrote:
       | Totally thought this was related to the Backyard Bird Count this
       | year lol
        
       | hanniabu wrote:
       | Who fact-checks the fact-checkers?
        
         | md_ wrote:
          | The point here seems to be not to have independent professional
          | fact-checkers, but to crowd-source the job.
         | 
         | On the one hand, I certainly struggle to see how _more_ crowds
         | will improve upon what is fundamentally a problem with crowds
         | to begin with.
         | 
         | On the other, Wikipedia seems to work reasonably well--and
         | purely as a result of the community mores, not due to any
         | sophisticated moderation algorithm or ranking structure or
         | similar.
         | 
          | I'm not holding my breath, but it will be interesting to see
          | whether a community-driven effort imbued with a Wikipedian-like
          | spirit might succeed where more automated efforts have not.
        
           | ardy42 wrote:
           | > On the one hand, I certainly struggle to see how more
           | crowds will improve upon what is fundamentally a problem with
           | crowds to begin with.
           | 
           | When you only have a hammer, every problem looks like a nail.
           | 
           | I believe that the social networks have built technologies
           | that probably can't actually be made to work in a socially
           | beneficial way, since they're utterly dependent on
           | crowdsourcing and automation for economic reasons, but those
            | tools aren't actually up to the problems they face.
        
           | briantakita wrote:
              | The underlying question is a philosophical one: freedom vs
              | authoritarianism. The same tactics can be utilized by
              | different actors who have different worldviews. The Stasi
              | utilized crowdsourcing to enforce authoritarianism, for
              | example. One person's terrorist is another's freedom fighter.
              | One's misinformation is another's truth, particularly when it
              | comes to how information is interpreted & what is focused
              | upon.
        
             | fellowniusmonk wrote:
              | Once a proposition is falsified, a rational community would
              | reject it.
             | 
             | Once your community or protocol "philosophically" rejects
             | checksums it is only a matter of time before your system
             | breaks catastrophically.
             | 
             | This philosophical freedom vs authoritarianism thing you're
             | proposing is a view that is incomplete and small to the
             | point of being false.
             | 
              | The actual underlying question is: do you accept that
              | grounded truth exists, and are you willing to have a large
              | and nuanced enough view to include all non-false
              | propositions and reject all false ones?
        
           | seqizz wrote:
           | Wikipedia works because you can remove content (even remove
           | accounts afaik) once you're sure it's providing wrong
           | information. On Twitter, people will interpret this as
           | censoring.
        
         | bena wrote:
         | The problem with this statement is that it assumes that "fact-
         | checkers" are a single entity when that is not the case.
         | 
         | The answer to "who fact-checks the fact-checkers" is "the fact
         | checkers".
         | 
         | And we can use consensus, reputation, and checks against known
         | facts to validate the checks themselves. If group A, group B,
         | and group C all agree and group D disagrees, then we can say
         | with relative confidence that D is probably wrong in this case.
         | It is safe to go with the consensus.
         | 
         | However, if A, B, C are notoriously unreliable and have a
         | history of distorting claims, then it is beneficial to see why
         | D disagrees with all three. Or even not use A, B, and C as
         | sources.
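          | 
          | Not from the article, just a toy Python illustration of that
          | kind of reliability-weighted agreement (the checkers, weights,
          | and default weight here are made up):
          | 
          |     def verdict(checks, reliability):
          |         # checks: {checker: True/False verdict on a claim}
          |         # reliability: {checker: weight from past track record}
          |         total = sum(reliability.get(c, 0.5) for c in checks)
          |         agree = sum(reliability.get(c, 0.5)
          |                     for c, ok in checks.items() if ok)
          |         return agree / total if total else None
          | 
          |     # A, B, and C agree but have poor track records; D disagrees
          |     # but is reliable, so confidence in the claim stays low.
          |     verdict({"A": True, "B": True, "C": True, "D": False},
          |             {"A": 0.2, "B": 0.2, "C": 0.3, "D": 0.9})  # ~0.44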
         | 
          | We can also judge a fact-checker's veracity by how they
          | evaluate a claim. Do they provide sources for their claims? It's
         | one thing to say "This will murder 40 people a minute" and
         | quite another to say "This claim disagrees with a recent study
         | by the CDC [link to study] that says it will only murder 39
         | people a minute".
         | 
         | The only reason to cast doubt on the process or activity of
         | fact-checking is if you don't want people checking your facts.
        
         | [deleted]
        
       | call_me_dana wrote:
        | On the surface, it sounds really nice. But anyone who's been
        | paying attention in the fake news era has a general idea of what
        | this will be used for.
       | 
       | Any tweets questioning the official narrative will be roundly
       | criticized and ridiculed. Instead of having to delve into a long
       | reply thread to see debunkers make their case, there will be an
        | easy-to-digest notice within reach. Which is not necessarily a
        | bad thing if it is used fairly and responsibly. But judging by
       | Twitter's past performance, it likely won't be.
       | 
        | Dissent will be publicly humiliated while pure disinformation
        | from governments, think tanks, and corporations faces no such
        | treatment. Any doubts about the accepted story will be pointed at
        | one of the fact-checker sites and further inquiries censored.
       | 
       | In an honest world, this would be one giant step to finally
       | getting at the truth. But this isn't an honest world, is it?
        
         | mc32 wrote:
          | I also don't see how this avoids being corrupted to cover up
          | embarrassing or inconvenient things by governments who can
          | influence or infiltrate such a thing.
        
         | tomjen3 wrote:
          | I don't think it is possible with a format like Twitter - I
         | can imagine something like Wikipedia, but with sources limited
         | to scientific publications, where each person improves upon a
         | shared document.
         | 
          | Maybe Google Wave is a better fit, allowing rich non-text items
         | to enter it.
         | 
         | On Twitter you will have each person fight for their side, and
         | you will never see a convergent product.
         | 
         | Of course this answer (and your question) presupposes that we
         | are trying to achieve some form of shared consensus, whereas on
         | Twitter, it is really two or more sides of a culture war
          | fighting, and wars are zero- or negative-sum games. If we are
          | trying to find the truth, I benefit even if every theory I had
          | was wrong, because it still helped us get to the one that
          | works, but in a war, if I don't win I am sure to lose.
        
         | [deleted]
        
         | beerandt wrote:
         | Popular speech doesn't need protection.
        
           | vlovich123 wrote:
           | That's a pithy retort devoid of content. If 60% of the
           | population believes the truth (for the sake of argument) and
            | 40% believes otherwise, the 40% can still do quite a bit of
            | damage by intentionally spreading misinformation (causing
            | confusion and chaos that the 60% then have to deal with)
            | and/or violence (government overthrow doesn't have to
           | come from the majority).
           | 
           | Similarly, it's not hard to imagine that if the minority is
           | the one that's correct, the majority can cause some damage
           | enforcing the belief that the emperor is indeed wearing
           | clothes.
           | 
           | The truth is the thing that needs protection, not speech. The
           | challenge is that objective truth can be epistemologically
           | difficult to protect without protecting lies (or conversely
           | fighting falsehoods can inadvertently end up fighting some
           | truths).
        
             | AnimalMuppet wrote:
             | Worse, if the 40% are more enthusiastic about spreading
             | their viewpoint than the 60% are about spreading the truth,
             | then social media may have a majority of posts supporting
             | the falsehood.
        
             | beerandt wrote:
             | >The challenge is that objective truth can be
             | epistemologically difficult to protect without protecting
             | lies
             | 
             | Which is why even lies are, and should be, protected
             | speech, even if that's not the current trendy thought.
             | 
              | Ultimately what it comes down to is protecting unpopular
              | minority opinions. If they're not protected, how would it
              | even be possible to evaluate the "truth" of the content?
             | 
             | There is no universal truth. That's why everyone's speech
             | should be protected. Even lies.
             | 
             | Consider: Every civil rights achievement made in the last
             | 100 years. If the minority opinion was squelched without a
             | chance to be considered, what progress would have been
             | made?
             | 
             | Would a "Ministry of Truth" in whatever form it might take,
             | have even allowed statements questioning racial or gender
             | equality?
             | 
             | Any system that enforces such censorship becomes a
             | theocracy dictating dogma.
             | 
             | A statement as simple as "There is no god" would become
             | blasphemous again.
        
         | jerkstate wrote:
         | To be fair, this is the same way all broadcast media has been
         | since the Gutenberg Press was primarily used to print
          | indulgences for the church. We got a good 25 years from the
          | time the internet became relatively widespread before it was
          | completely subverted by the powers that be; that's pretty good.
         | Now onward to the next communication technology, because the
         | internet is now only useful for maintaining the status quo,
         | just like TV, radio, newspaper, and every broadcast medium
         | prior.
        
         | wtetzner wrote:
         | > In an honest world, this would be one giant step to finally
         | getting at the truth. But this isn't an honest world, is it?
         | 
          | Not to mention that in an honest world, why would you even need
         | something like this?
         | 
         | You don't need it in an honest world, and it will end up being
         | worse than nothing in a dishonest world. It doesn't seem like a
         | good idea in any scenario.
        
           | mistermann wrote:
           | > Not to mention that in an honest world, why would you even
           | need something like this?
           | 
           | The world is what we make it - we are the masters of our own
           | destiny. This is _literally true_ (subject to boundaries
           | imposed on us by nature: the laws of physics, the physical
           | resources available to us, the nature of the evolved human
           | mind and the societies we have built for ourselves to operate
           | in, and other things I may overlook).
           | 
           | If a variable in the world is not to our liking, and it is
           | fixable, we can choose to fix it (as a collective species),
           | or not. Mother Nature imposes some restrictions on this, but
           | not many.
           | 
           | From the article:
           | 
           | > To date, we have conducted more than 100 qualitative
           | interviews with individuals across the political spectrum who
           | use Twitter, and we received broad general support for
           | Birdwatch. In particular, people valued notes being in the
           | community's voice (rather than that of Twitter or a central
           | authority) and appreciated that notes provided useful context
           | to help them better understand and evaluate a Tweet (rather
           | than focusing on labeling content as "true" or "false"). Our
           | _goal(!)_ is to build Birdwatch in the open, and have it
           | _shaped(!)_ by the Twitter community.
           | 
           | > To that end, we're also taking significant steps to make
           | Birdwatch transparent:
           | 
           | - _All(!???)_ data contributed to Birdwatch will be publicly
           | available and downloadable in TSV files
           | 
           | - As we develop algorithms that power Birdwatch -- such as
            | reputation and consensus systems -- we _aim to_(!) publish
            | _that code_(!) publicly in the Birdwatch Guide. The initial
           | ranking system for Birdwatch is already available here.
           | 
           | > We hope this will enable experts, researchers, and the
           | public to analyze or audit Birdwatch, identifying
            | opportunities or flaws that can _help us_(!) more quickly
            | build an _effective_(!) _community-driven_(!) solution.
           | 
           | If what they are saying is 100% true (and not _at all_
           | misleading, and remains true going forward through time),
           | this would be an _extremely big deal_. Modifications to the
           | fundamental systems we use for collective communication and
           | sense making is the obvious place a benevolent dictator would
           | start to improve the current state of affairs.
           | 
           | However, history strongly suggests that this is not only not
           | true, but most likely _knowingly untrue_ (aka: a lie). I am
           | obviously speculating, but I think this is a reasonable
           | speculation.
           | 
           | Speculating sucks though. I think we are forced to do it far
           | more often than is necessary (under the limitations imposed
           | upon us by nature).
           | 
           | So how about this idea:
           | 
           | Let's say, in the spirit of uniting the country (USA), we
           | make a bi-partisan decision to create a new role for the
            | government: an as-honest-as-possible process of constant
           | audit of all "major" public communication platforms.
           | Carefully selected, _proven to be honest and trustworthy_ bi-
            | partisan technical people (from the "grassroots" community)
            | would be _forcibly_ embedded within all major corporations,
            | with 100% visibility into all source code, processes, and
            | meetings (where "necessary"). They would carefully monitor
           | _the nature of_ all of this software that is exerting such a
           | powerful force on our society, and that of the world. Where
           | possible (which should be most of the time), their findings
            | would be published for the public to see (and "sniff for
           | imperfections or corruption").
           | 
           | These would be positions of extreme power and insight, and
           | would offer genuine risks to intellectual property and
           | confidential strategy of the companies subject to this
           | treatment. As _a first pass_ at managing this, these people
            | could be paid _extremely well_, but they would also be
           | subject to extremely punitive measures if they were to ever
           | behave in a compromising manner.
           | 
           | Of course, the flaws in such a plan _are numerous_. Reality
           | is complex - we can face that head on and manage it, or bury
            | our heads in the sand with _speculative_ claims like "this
           | wouldn't work".
           | 
           | The general goal of this is to _force(!) truthfulness (as
           | opposed to honesty) into society_. The world is what we make
           | of it, and we can make this little corner of it _how we want
           | it to be_. And if we happen to disagree with the specifics of
           | "how we want it to be", then deal with it head on: _figure it
           | out, and ship to production_.
           | 
           | As I understand it, all politicians desire what is best for
           | the entire country (and the world - they only differ on how
           | to achieve this), as well as desire to govern based on The
           | Truth - so I wouldn't expect we would get any pushback from
           | them (and if we did, journalists would be _on it, publishing
            | salacious exposes on the obvious corruption_) - that leaves
           | the decision up to "we the people".
           | 
           | On a scale of 1 to 10, how good/bad is this general idea? Is
           | it a complete non-starter due to something I have overlooked
            | (that _cannot be changed_)? Are there even better approaches
           | than this? I have no idea, I am just trying to put some ideas
           | out there for consideration. At some point, I think we have
           | to do something to alter the trajectory we are on.
        
             | wtetzner wrote:
             | > As we develop algorithms that power Birdwatch -- such as
             | reputation and consensus systems -- we aim to(!) publish
             | that code(!) publicly in the Birdwatch Guide. The initial
             | ranking system for Birdwatch is already available here.
             | 
             | I think the idea of finding truth through consensus _is_
             | the flaw here. A system like this would have silenced the
             | great revolutionary thinkers of history.
        
       | wtetzner wrote:
       | > People come to Twitter to stay informed
       | 
       | This seems to be the heart of the problem.
        
       | bserge wrote:
       | Introducing Watchbird, a community-based approach to
       | misinformation (heh).
       | 
       | Posters: Get paid to post online, starting at $0.20/post! Our top
       | posters earn up to $50/hour! Join now!
       | 
       | Sponsors: We have over 50,000 active users ready to post whatever
       | you need online, no questions asked!
       | 
       |  _Similar services actually exist, but you know, one could always
       | create a new one aimed at "fact checking" the fact checkers._
       | 
       | I guess my point is, you can't solve this problem with even more
       | crowd bullshit. It needs to be done at a fundamental level,
        | preferably by governments, in schools.
       | 
       | Afaik, there's still zero official
       | classes/courses/lessons/whatever in most schools that would teach
       | you to not trust everything you read and triple check everything
       | yourself before believing anything.
       | 
       | Plus, this is pretty prone to abuse. Individuals are inherently
       | dangerous, and crowds even more so. Someone doesn't like person
       | X, so they "fact check" his tweets. Others see this coming from a
       | "reputable" poster and jump on the bandwagon.
       | 
       | Seen it so many times it got old. Experimented with it myself. A
       | post on Reddit (same content) that gets 8-12 fake upvotes in the
       | first 30 minutes after being posted is infinitely more likely to
       | start getting upvoted by hundreds of real users and get to the
       | subreddit's hot front page than a post that got 0-3 upvotes, for
       | example.
       | 
       | I was interested in Reddit's voting system and learned some
       | interesting stuff. They're really smart about it, you can't just
       | have multiple accounts and some proxies/VPNs and go at it like in
       | the good old days. Votes are going to be ignored unless you know
       | what you're doing. Probably not news for anyone in the industry,
       | but I found it interesting.
        
       | LargeWu wrote:
       | Would be very interested to see how this addresses one of the
       | primary underlying pathways of misinformation - confirmation
       | bias. Many people believe information because it confirms their
       | worldview; whether it's provably true or false is often
       | irrelevant. In fact, I suspect that having a belief proved wrong
        | can even reinforce that belief in some cases.
       | 
       | How much does truth matter in a post-truth society?
        
         | call_me_dana wrote:
          | Truth still exists even if it's completely hidden. Truth still
         | exists if bad actors have injected 10 other believable stories
         | alongside it. Truth still exists even if every fact checker
         | calls the claim false.
         | 
         | People have spent enormous sums of money and time to hide the
          | truth from the public. But the fact that every effort has been
          | made to hide the truth doesn't diminish the fact that the truth
          | matters. In my opinion, it's just the opposite. It matters more
         | than ever.
        
         | tomjen3 wrote:
          | Whether you believe in Covid or not, if you inhale the virus
          | you may develop the disease. If you don't believe in guns you may
         | find some nasty people in your house who do, and you will not
         | be able to defend yourself.
         | 
         | The truth still reigns supreme, it is just that people
         | sometimes suffer unnecessarily because they do not know what
         | the truth is.
        
           | throwaway894345 wrote:
           | It's much worse than that--those people who are, shall we
           | say, misaligned with the truth impose consequences _for
           | others_. Especially when those people are in prominent
           | positions in government and culture.
        
         | c22 wrote:
         | I feel like we moved from pre-truth society to post-truth
         | society alarmingly quickly. Perhaps we should dwell for longer
         | in the age of truth.
        
           | [deleted]
        
       | oh_sigh wrote:
        | I suspect this will be used a little for direct misinformation
        | control ("Donald Rumsfeld is not a lizard person"), but mostly
        | for narrative control ("While this fact is true by itself,
       | you need to look at the bigger context..."), and so will be
       | entirely ineffective. People readily take up new information, but
       | are very hesitant to change their internal narrative on a matter.
       | Especially when they're being told by others what their narrative
       | should be. Double especially if part of their narrative is that
       | big tech/liberals/academia/coastal elites/etc are trying to feed
       | you the Big Lie.
       | 
       | Conservative misinformation is a big talking point for liberals,
        | and maybe the big societal issue at the moment, but as a small-
        | scale test run, I'd love to see Birdwatch try to correct the
       | record for misinformation that is commonly believed in liberal
       | circles: Anti-GMO, anti-vaxx, toxic whiteness, the extents of
       | systemic racism in our society, some of the more dire
       | prognostications of nuclear war and global warming.
        
       ___________________________________________________________________
       (page generated 2021-01-25 23:03 UTC)