[HN Gopher] Decentralised content moderation
       ___________________________________________________________________
        
       Decentralised content moderation
        
       Author : AstroNoise58
       Score  : 104 points
       Date   : 2021-01-14 13:39 UTC (9 hours ago)
        
 (HTM) web link (martin.kleppmann.com)
 (TXT) w3m dump (martin.kleppmann.com)
        
       | colllectorof wrote:
       | It's pretty obvious that the tech crowd right now is so
       | intoxicated by its own groupthink that these attempts to come up
       | with "solutions" are going to have awful results. You don't even
       | know what the problem really is.
       | 
       |  _" I fear that many decentralised web projects are designed for
       | censorship resistance not so much because they deliberately want
       | to become hubs for neo-nazis, but rather out of a kind of naive
       | utopian belief that more speech is always better. But I think we
       | have learnt in the last decade that this is not the case."_
       | 
       | What you should have learned in the last decade is that social
       | networks designed around virality, engagement and "influencing"
        | are awful for society in the long run. But somehow now the
       | conversation has turned away from that and towards "better
       | moderation".
       | 
       | Engage your brain. Read Marshall McLuhan. The design of a medium
       | is far more important than how it is moderated.
        
         | whoisburbansky wrote:
         | Solidly agree on all points, just wanted to plug a podcast
         | favorite of mine, Philosophize This!, for a less daunting
         | introduction to McLuhan's ideas [1] than "go read a couple
         | hundred pages of fairly dense media theory books."*
         | 
         | 1. https://www.philosophizethis.org/podcast/episode-149-on-
         | medi...
         | 
         | * Of course, I enjoyed the podcast episode so much that I did
         | end up going on to read The Gutenberg Galaxy and The Medium Is
         | the Massage [sic], and wholeheartedly recommend both.
        
           | totemandtoken wrote:
            | Philosophize This is great. If you're a fan of McLuhan, you
            | should read L.M. Sacasas. His blog
           | (https://thefrailestthing.com/) was how I was introduced to
           | McLuhan, Postman, and other philosophers of media and
           | technology. He has unfortunately shuttered that blog, but he
           | has a substack newsletter thing that talks about similar
           | things: https://theconvivialsociety.substack.com/people/18104
           | 37-l-m-...
        
         | gfodor wrote:
         | It can be both. It seems obvious that a thing like Twitter
         | ought to exist. The social system details, I'd agree, are full
         | of unforced errors that lead to terrible outcomes. But the
         | centralization is a real problem too. We can try to fix both.
        
           | amscanne wrote:
           | It's not obvious to me. I have a fantasy that people will
           | return to thoughtful, long-form blogging and stop trying to
           | impale each other on witty 280 character tweets.
        
             | gfodor wrote:
             | The essential element of twitter is asymmetric following +
             | broadcast at global scale and penetration. Beyond that, I
             | think the social system details are wide open.
        
         | shockeychap wrote:
         | I would add that, broadly speaking, any online system that
         | tries to engage and involve _everybody_ will always become
          | toxic. Forums like those of HN work, in part, because they're
         | not trying to attract everybody, only those who will contribute
         | thoughtfully and meaningfully.
        
         | timdaub wrote:
         | > The design of a medium is far more important than how it is
         | moderated
         | 
          | IMO this is a great point. Social media as it exists today is
          | broken because it has been engineered on the assumption of
          | making money on ads. Making money on ads works by engineering
          | for virality, engagement and influencing.
         | 
          | Another thing that McLuhan teaches, though, is that the
          | (social) medium is the message. And ultimately this led to a
          | Viking dude standing in the US Capitol.
          | 
          | Now, that whole situation was awful. But it was also hilarious.
          | On social media, this was barely a meme that lived on for a few
          | hours. Whereas within the ancient system of democracy, an
          | intrusion into parliament breaks some sacred rules, and there
          | the incident will surely have long-lasting consequences.
         | 
          | To cut to the chase: social media outcomes have to be viewed
          | wearing a social media hat, and the same goes for real life. In
          | this case, gladly so. Another great case where this was true
          | was Kony 2012, where essentially all the slacktivism led to
          | nothing.
        
           | youdontlisten wrote:
            | > 'Social media as it exists today is broken because it has
            | been engineered on the assumption of making money on ads.'
           | 
           | No, actually, they are created by _intelligence agencies_ ,
           | for the sole purpose of gathering intelligence on everyone in
           | the world. Full stop.
           | 
           | Facebook was founded the very same day the DARPA LifeLog
           | project was 'abandoned.' Look it up.
           | 
           | The 'ad' stuff is just the nonsense they use to 'explain' how
           | these companies somehow sprang up out of nothing overnight
           | and continue to persist today, supposedly by selling billions
           | of dollars of advertising, while you or I couldn't figure out
           | how to even keep the lights turned on selling ads.
           | 
           | All of the 'brokenness' of 'social' media, and of society in
           | general, isn't by accident. It's all by design.
        
         | abathur wrote:
         | They are separate problems.
         | 
         | Engagement is still roughly "our" problem, because ad-driven
         | ~media are externalizing the costs of engagement on society.
         | This is where the Upton Sinclair quote fits.
         | 
         | Moderation is still roughly the platform's problem because it
         | comes with liabilities they can't readily externalize.
         | Engagement certainly overlaps with this, but most of these
         | liabilities exist regardless of engagement.
        
           | Steltek wrote:
           | Engagement _is_ moderation! When FB chooses what to show you,
           | it's already moderating things but simply using a different
              | value system. There's a pushback on "censorship" today but
              | censorship has been happening for years.
        
             | abathur wrote:
             | We may be playing semantics games, here?
             | 
             | Mechanisms that optimize for increased engagement via
             | dynamic suggestions for a user's feed or ~related content
             | are _not_ moderation (unless, perhaps, the algorithmic
             | petting zoo is the only way to use the service).
             | 
             | This is exactly why I'm drawing a distinction.
             | 
             | Many of a platform's legal and civil liabilities for user-
             | submitted content are poorly correlated with how many
             | people see it and whether it is promoted by The Algorithm
             | (though the chance it gets _noticed_ probably correlates).
             | This is ~compliance work.
             | 
             | Their reputational liabilities are a little more correlated
             | with whether or not anyone is actually encountering the
             | content (and more about how the content is affecting
             | people, than its legality). This is ~PR work.
        
         | wombatmobile wrote:
         | And Neil Postman's book, Technopoly
         | 
         | Book Review: Technopoly
         | https://scott.london/reviews/postman.html
         | 
         | Interview with Neil Postman - Technopoly
         | https://www.youtube.com/watch?v=KbAPtGYiRvg
        
         | easterncalculus wrote:
         | Exactly. Shifting the blame from trending pages to posters is
         | just a way to blame users for opinions instead of platforms
         | that spread them as fact. These websites want to be the problem
         | and the solution.
        
         | crmrc114 wrote:
         | "What you should have learned in the last decade is that social
         | networks designed around virality, engagement and "influencing"
          | are awful for society in the long run."
         | 
         | Yes, and don't forget the 24 hour news cycle with its focus on
         | getting outrage and attention through fear. I did not know who
         | Marshall McLuhan was until now- thanks for the tip!
        
           | ardy42 wrote:
           | > Yes, and don't forget the 24 hour news cycle with its focus
           | on getting outrage and attention through fear.
           | 
           | Yeah, social media is just one in a series of possibly
           | misguided techno-social "innovations," and it probably won't
           | be the last.
           | 
           | My understanding is that groups like the Amish don't reject
           | technology outright, but adopt it selectively based on its
           | effects on their society (and will even roll back things
           | they've adopted if they're not working out). Wider society
           | probably would benefit from a dose of that kind of wisdom
            | right now, after decades of "because we can"-driven
            | "innovation."
        
             | jl2718 wrote:
             | They also force all 17-year-olds to go live with and as
             | 'the English' for two years and then decide whether they
             | want to go back, and what things should be brought back
             | with them.
        
             | swirepe wrote:
             | > My understanding is that groups like the Amish don't
             | reject technology outright, but adopt it selectively based
             | on its effects on their society
             | 
             | I can see how this would work for new things invented
             | outside the Amish community. How does this work for new
            | Amish inventions? How do they judge the effect a thing will
            | have on society while they are still building it?
        
         | ssivark wrote:
         | Thanks for hitting the nail on the head. The problems we have
         | are manufactured by the structure of the medium (BigCorp social
         | media optimizing engagement at scale) that we cling to.
         | 
         | For those looking for a relatively accessible introduction to
         | McLuhan's ideas, check out his book "The medium is the
         | message/massage". It's fairly short, and with illustrations,
         | quite readable. I think it has more concrete examples than
         | "Understanding media" which is a more abstract & denser read.
        
       | JulianMorrison wrote:
       | It's worth remembering that content moderation is an activity
       | which causes people mental illness right now, because it's so
       | unrelentingly awful. Attempts to decentralize this are going to
        | be met with the problem that people _don't want_ to be exposed
       | to a stream of pedophilia, animal abuse, murderous racism and
       | terrorist content and asked to score it for awfulness.
        
         | totemandtoken wrote:
         | This point should be upvoted more. Decentralized moderation
         | means decentralized pain. You can run standard philosophic
         | thought experiments about "utility monsters" but at the end of
         | the day a lot of online content causes real harm, not just to
         | the direct victims but to the spectators. We'd need something
         | like decentralized therapy in tandem with this type of
         | moderation for it to even be considered remotely ethical, and
         | even then I'm very skeptical.
        
         | okokok___ wrote:
         | I agree, I think people underestimate the need to incentivize
         | moderators. This is why I think some kind of cryptocurrency
         | based solution to moderation is necessary.
        
           | JulianMorrison wrote:
           | Moderators don't just need incentives. They need therapy and
           | support and monitoring and breaks and ideally, pre-vetting
           | for strong mental health and no pre-existing bees in their
           | bonnet.
           | 
           | I worry that just making it a way to earn bitcoins risks it
           | becoming one more way for poor people to scrape together
           | pennies at the cost of giving themselves PTSD.
        
             | okokok___ wrote:
              | Good point. It's almost like Mechanical Turk, but for
              | moderation.
        
       | iamsb wrote:
        | My suggestion is to not moderate any content that is not
        | illegal. Only take down content when required by a court order.
        | Limit the use of automated content moderation to easy-to-solve
        | cases like child pornography.
       | 
       | Why?
       | 
        | It is fairly clear at this point that content moderation at
        | internet scale is not possible. Why?
        | 
        | A. Using other users to flag dangerous content is not working.
        | Which users do you trust to bestow this power on? How do you
        | remove this power from them? How do you keep it from becoming a
        | digital lynch mob? Can you have users balanced across political,
        | gender and other dimensions? These are all mostly unsolvable
        | problems.
       | 
        | B. Is it possible to use machine learning? To some extent. But
        | any machine learning algorithm will have inherent bias, because
        | its training data will also be produced by biased individuals.
        | People will also eventually figure out how to get around those
        | algorithms.
       | 
        | The causality between content published on the internet and
        | action in the real world is not immediate. It is not like
        | someone sitting in a crowded place and shouting fire, causing a
        | stampede. As there is a sufficient delay between speech and
        | action, we can say that the medium the speech is published in is
        | not the primary cause of the action, even if there is a link.
        | Cases of direct linkage are fairly rare, and the police/law
        | should be able to deal with those.
       | 
        | Content moderation, at least the way Twitter has been trying to
        | do it, has not been effective. It has created a lot of ways for
        | mobs to enforce censorship, and there is absolutely no real-
        | world positive impact from this censorship. To be honest, the
        | only use of this moderation and censorship has been for the
        | right to claim victimhood and gain more viewers/readers.
        
         | daveoc64 wrote:
         | So it would be OK for a kids TV show website to have Viagra ads
         | on it?
         | 
         | Edit: I mean spam.
        
           | protomyth wrote:
           | That would be a stupid waste of money for an advertiser.
           | Maybe ad networks are really the problem.
        
           | iamsb wrote:
            | I am only commenting on this in the context of community
            | standards and user-generated content; it should not be
            | extrapolated to all content in all other contexts.
        
             | daveoc64 wrote:
             | Sorry, I meant spam. If anyone can post a comment, and only
             | "illegal" comments can be removed, that would surely allow
             | a lot of email-style spam.
        
               | iamsb wrote:
                | That is a fair question/comment. In my original comment
                | I said "Limit the use of automated content moderation to
                | easy-to-solve cases like child pornography." It is
                | reasonable to extend that list beyond just child
                | pornography; I did not intend to give the impression
                | that this list is exhaustive.
                | 
                | In the case of spam: email services still show you spam
                | emails, they just hide them behind a spam folder. So
                | instead of outright removal, it is possible to use
                | similar techniques and let users decide whether they
                | ever want to see spam comments.
        
               | daveoc64 wrote:
                | > It is reasonable to extend that list beyond just child
                | pornography.
               | 
               | Seems like you're just back to square one there.
               | 
               | You're making a list of unacceptable content. Whatever
               | you put on there, someone's going to disagree.
               | 
               | Whether it's automated or not probably isn't the issue.
               | 
               | I see a lot of comment sections ruined by the typical bot
               | spam (e.g. "I earn $5000 a minute working from home for
               | Google").
        
         | Pfhreak wrote:
         | You realize that the approach you suggest pushes out a
         | different set of people, right?
         | 
         | For example, a soldier with PTSD may want an environment that
         | moderates content. Or a journalist with epilepsy may want a
         | platform where people don't spam her with gifs designed to
         | trigger epilepsy when she says something critical of a game
         | release.
        
           | iamsb wrote:
            | I understand. Most of those can be achieved using privacy
            | and sharing settings, though, and do not necessarily require
            | content moderation.
        
             | MereInterest wrote:
             | Doesn't that require the active cooperation of bad actors?
             | Sure, you can create a filter to hide all posts tagged with
             | "epilepsy-trigger", but that doesn't help if the poster
             | deliberately omits that tag. Allowing users to tag other
             | people's posts patches this issue, but opens up the system
             | for abuse by incorrect tagging. (E.g. Queer-friendly
             | posters being flagged and demonetized after being
             | maliciously tagged as "sexual content".)
             | 
             | At some point, there needs to be trusted moderation.
        
         | md2020 wrote:
         | I'd point out that child sexual abuse content is not an "easy
         | to solve case". The extent to which Facebook, Google, and more
         | recently, Zoom look the other way on this issue is horrifying
         | and it seems to be a very hard problem due to the laws
         | surrounding the handling of such material. I'm not faulting the
         | laws, I just think this is an inherently hard issue to crack
         | down on. Gabriel Dance and Michael Keller at the NYT did some
         | very high quality reporting on this whole issue in 2019
         | (https://www.nytimes.com/interactive/2019/09/28/us/child-
         | sex-...).
        
           | iamsb wrote:
            | That link is pay/auth-walled for me, but I do get your point.
            | Perhaps "easy to solve" was underestimating the technical
            | problem, as I was thinking more in terms of political
            | problems. From that perspective no one, other than pedophiles
            | themselves, disagrees with removing that kind of content. But
            | I completely agree on the tech side of it.
        
       | nathias wrote:
        | There is absolutely no difficulty: if you don't want censorship,
        | just have a button that hides content you personally don't want
        | to see, and leave that decision to individual users. What others
        | wish to see or not is not your decision to make, and if some of
        | it is illegal, that should be a job for the police, not some
        | ego-tripping janny.
        
         | tdjsnelling wrote:
         | Totally agree. Why should someone else I've never met decide
         | what I can and can't see? Leave that decision up to the
         | individual user, and they can tailor their own experience as
         | they desire. Allow them to filter out particular words,
         | phrases, other users and so on.
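          | 
          | Something like that is easy to sketch purely client-side (the
          | class and field names below are made up for illustration, not
          | any real platform's API):
          | 
          |     # purely client-side filtering: the server removes nothing,
          |     # each user keeps their own block/mute lists
          |     from dataclasses import dataclass, field
          | 
          |     @dataclass
          |     class UserFilter:
          |         blocked_users: set = field(default_factory=set)
          |         muted_phrases: set = field(default_factory=set)
          | 
          |         def allows(self, author: str, text: str) -> bool:
          |             if author in self.blocked_users:
          |                 return False
          |             low = text.lower()
          |             return not any(p in low for p in self.muted_phrases)
          | 
          |     mine = UserFilter(blocked_users={"spammer42"},
          |                       muted_phrases={"viagra"})
          |     feed = [("spammer42", "Buy now!"), ("alice", "Nice article")]
          |     visible = [(a, t) for a, t in feed if mine.allows(a, t)]
          |     # visible == [("alice", "Nice article")]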
        
       | jrexilius wrote:
       | Good piece. This line articulates the problem well: "without
       | objectivity and consistency, moderation can easily degenerate
       | into a situation where one group of people forces their opinions
       | on everyone else, like them or not." And it gets to the core of
       | the problem. Objectivity and consistency are extremely difficult
       | to scale and maintain over time. They require constant
       | reinforcement from environment, context, and culture.
        
         | zarkov99 wrote:
         | The central problem here is in what circumstances free people
         | should give up the right to decide what information they can
         | consume. This is not a question that can be answered easily but
         | without first accepting it as the central issue we are not
         | going to make meaningful progress.
        
           | nindalf wrote:
           | That's part of it. The other part is that free people say
           | they don't want to see illegal content (child exploitation
           | imagery, terrorism, scams, sale of opioids etc). The platform
           | needs to moderate to remove that. Then the same users also
           | say they don't want to see legal but distasteful (in their
           | opinion) content like pornography, spam and so on. The
           | platform then has to remove that as well.
           | 
              | For the most part, platforms take decisions that will suit
              | the majority of users.
        
             | zarkov99 wrote:
             | Well, that should be solvable by giving people much better
             | tools to manage their personal information intake. The crux
             | of the problem is figuring out when it is OK to decide for
             | them what they can or cannot see.
        
             | Viliam1234 wrote:
             | I don't want to see "legal but distasteful" content (and I
             | would also add: annoying, boring, stupid, etc.)... and what
             | I mean is that I don't want it to be shown to me... but I
             | am okay if other people show it to each other.
             | 
             | So instead of some global moderator (whether it be one
             | person, or some complicated "democratic" process) deciding
             | globally what is okay and what is not, I want many bubbles
             | that can enforce their own rules, and the option to choose
             | the bubbles I like. Then I will only be concerned about how
             | to make the user interface as convenient as possible, so
             | that the options are not only hypothetically there, but
             | actually easy to use also by non-tech users.
        
               | zarkov99 wrote:
               | That approach would not address the, in my opinion, valid
               | concerns about speech that is harmful to society overall.
               | There is a real problem here with disinformation and
                | incitement to violence. I do not know that letting the
                | self-virtuous tech sector decide what is or is not
                | allowed is the answer, but the problem is real.
        
       | jancsika wrote:
       | > Censorship resistance means that anybody can say anything,
       | without suffering consequences.
       | 
       | I can't even get to the heart of the poster's argument. That's
       | because the shitty state of all current social media software
       | defines "anybody" as:
       | 
       | * a single user making statements in earnest
       | 
       | * a contractor tacitly working on behalf of some company
       | 
       | * an employee or contractor working on behalf of a nation state
       | 
       | * a botnet controlled by a company or nation state
       | 
        | It's so bad that you can witness the failure in realtime on, say,
        | Reddit. I'm sure I'm not the only one who has skimmed comments
        | and thought, "Gee, that's a surprising reaction from lots of
        | respondents." Then you go back even 30 minutes later and the
        | overwhelming reaction is now the opposite, with many comments in
        | the interim about new or suspicious accounts and lots of
        | moderation of the initial astroturfing effort.
       | 
       | Those of us who have some idea of the scope of the problem
       | (hopefully) become skeptical enough to resist rabbit-holes. But
       | if you have no idea of the scope (or even the problem itself),
       | you can easily get caught in a vicious cycle of being fed a diet
       | of propaganda that is perhaps 80% outright fake news.
       | 
       | As long as the state of the art remains this shitty (and there
       | are _plenty_ of monetary incentives for it to remain this way),
        | what's the point of smearing that mendacity across a federated
       | system?
        
       | gfodor wrote:
        | You can cut off a large part of abuse by just creating a
        | financial incentive. Pay to gain access, and access can be
        | revoked, at which point you need to pay again (perhaps on a
        | progressive scale - i.e. the more you are banned, the harder it
        | is to get back in). Your identity confers a reputation level
        | that influences filters, so what you post is seen more often,
        | which means there is value in your account and you don't want
        | to lose it. The SA forums did this and it helped immensely with
        | keeping out spam (though it's not a silver bullet).
       | 
       | Any system where any rando can post any random thing with no
       | gates is going to be much more of a slog to moderate than one
       | where there are several gates that imply the person is acting in
       | good faith.
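        | 
        | As a rough sketch of the progressive re-entry idea (the fee
        | schedule and function names here are invented, not what SA
        | actually charged):
        | 
        |     BASE_FEE = 10.00  # hypothetical price of a fresh account
        | 
        |     def reentry_fee(times_banned: int) -> float:
        |         """Each prior ban doubles the cost of getting back in."""
        |         return BASE_FEE * (2 ** times_banned)
        | 
        |     def visibility_weight(reputation: float) -> float:
        |         """Clamp reputation to 0..1; higher means posts surface
        |         more often in other users' filtered feeds."""
        |         return max(0.0, min(1.0, reputation))
        | 
        |     for bans in range(4):
        |         print(bans, reentry_fee(bans))  # 10.0, 20.0, 40.0, 80.0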
        
       | halfmatthalfcat wrote:
       | I've been thinking hard about decentralized content moderation,
       | especially around chatrooms, for years. More specifically because
       | I'm building a large, chatroom-like service for TV.
       | 
       | I think it's evident from Facebook, Twitter, et al that human
       | moderation of very dynamic situations is incredibly hard, maybe
       | even impossible.
       | 
       | I've been brewing up strategies of letting the community itself
       | moderate because a machine really cannot "see" what content is
       | good or bad, re: context.
       | 
       | While I think that community moderation will inevitably lead to
       | bubbles, it's a better and more organic tradeoff than letting a
       | centralized service dictate what is and isn't "good".
        
         | Blikkentrekker wrote:
         | The problem is obviously that people very often do not want
         | what they say they want.
         | 
         | When a man says he supports freedom of speech, he isn't
         | thinking about the speech that he wishes to limit as he finds
         | it so abhorrent, and where that line lies differs from one man
         | to the other.
         | 
         | Such initiatives fail, as even when men come together and admit
         | they allow for the most abhorrent of opinions to be censored,
         | they seldom realize that each and every one of them has a very
         | different idea of what that is.
        
           | shuntress wrote:
           | Do women have this problem as well?
        
             | Blikkentrekker wrote:
             | I'm sure you understand both how subsets work and what I
             | intended to convey, no matter how much you dislike how the
             | English language descriptively works.
        
               | shuntress wrote:
               | Do you mean to convey that women are a subset of men and
                | therefore since your statement applies to all men that it
               | also applies to all women?
        
               | Blikkentrekker wrote:
               | That is the usage of the word "man", so noted in every
               | dictionary, and backed up by millennia of descriptive
               | usage.
               | 
               | I'm quite sure you know that too, to be honest. It is
               | something that every English speaker knows, but some act
               | as if they not, because they do not like the actual
               | descriptive, and historical usage of that word.
               | 
               | From _Merriam-Webster_ :                   (1): an
               | individual human           especially : an adult male
               | human
               | 
               | https://www.merriam-webster.com/dictionary/man
        
               | shuntress wrote:
               | I do not see it that way.
               | 
               | But, fair enough. If it has always been that way we
               | should keep it that way.
        
               | skinkestek wrote:
                | IIRC and AFAIK it comes from Old Norse, "madr", which
                | used to mean human.
                | 
                | They had separate words for female madr and male madr.
                | 
                | Lately it has been a popular conspiracy theory that this
                | is because of "patriarchy", but the real reason is
                | probably as mundane as simplification over centuries.
        
               | shuntress wrote:
                | Obviously the evolution of the English language is so
               | long-running and complex that no group could have
               | conspired to implement a deliberate change such as this
               | for whatever reason.
               | 
               | With that said, please consider this:
               | 
               | What type of societal structure would be likely to merge
               | the words for "human" and "male" while leaving the word
               | for "female" separate?
        
               | Blikkentrekker wrote:
               | > _What type of societal structure would be likely to
               | merge the words for "human" and "male" while leaving the
               | word for "female" separate?_
               | 
               | The origin of the secondarily developed semantic meaning
               | of "man" as a "male adult human" is simply because when a
               | man be anything other than an adult male, speakers are
               | statistically more likely to emphatically note it in some
               | way.
               | 
               | Exactly what you did by asking "what of the women?" when
               | it is quite clear from context to anyone what "man" means
               | in this context. "woman" is an overtly marked phrase
               | compared to "man" that arises from a need felt by some
               | speakers to explicitly mark it when a person be female,
               | but not when it be a male.
               | 
               | A similar situation is that white U.S.A. residents are
               | often called "Americans" but for other colors overtly
               | marked phrases such as "African-American" or "Asian-
               | American" are common in parlance, which, due to this
               | behavior, eventually gives further currency to the term
               | "American" as developing a secondary meaning of
               | "specifically white American".
        
               | skinkestek wrote:
               | Would you mind telling us if knowing all this is related
               | to your work or if you've cared to learn it just for fun
               | or something?
               | 
               | (Totally understand if you don't want to.)
        
               | Blikkentrekker wrote:
               | My relationship with historical linguistics, and
               | linguistics in general is that I consider it both
               | interesting and pseudoscientific -- I will also admit
               | that my dislike for the discipline might be colored by my
               | negative experiences in conversing with linguists on a
               | personal level. My interest is otherwise nonprofessional
               | though I minored in it in university.
               | 
               | Especially linguistic psychology is to be taken with a
               | grain of salt -- what I said about how unmarked forms of
               | words acquire secondary meanings is certainly the
               | consensus in that field, but it is not as if it were ever
               | established empirically, nor could it; it is simply an
               | idea that appeals to the intuition and there is nothing
               | obviously wrong with it.
        
               | [deleted]
        
               | Blikkentrekker wrote:
               | No, Old Norse "madr" and English "man" have a common
               | ancestor in the reconstructed common Germanic stem
               | *"mann-"; neither was loaned from the other.
               | 
               | https://en.wiktionary.org/wiki/Reconstruction:Proto-
               | Germanic...
               | 
               | Further analysis upward is more tentative:
               | hypothetically, the word could share a common ancestry
               | with the Latin stem "ment-" for "mind" which would make
               | the semantic origin of *"mann-" to be quite obvious. It
               | is likely related to the Sanskrit stem "manu-" which
                | indeed means "thinking" when used adjectivally, and
                | "human" when used substantively.
               | 
               | Old English certainly did not loan it from Old Norse and
               | the Old English form of the word "mann", reflects what
               | would be expected if it descended directly from common
               | Germanic.
               | 
               | The Old English words for "male" and "female" were "wer"
               | and "wif" respectively; Old English "wifmann", as in
               | "female human" is what gave rise to modern English
               | "woman", and "wer" almost completely died out and only
               | survives in "werewolf" and "world", the latter a
               | simplification of *"wer-old", as in "a male's lifetime",
               | undergoing significant semantic shift. "wif" is of course
               | also the origin of modern English "wife", but had no
               | implication of marriage in Old English.
        
               | skinkestek wrote:
               | Burns to be wrong but I learned something :-)
        
               | edoceo wrote:
               | "men" in the OG post above meant: all humans. As in "it
               | is the nature of man". As "mankind". Man/men can be used
               | as for two meanings. One is Males and the other is All
               | Humans
        
               | Blikkentrekker wrote:
               | It, as many words can, can be used for more than that.
               | From the top of my head alone:
               | 
               | - a member of the _Homo sapiens_ species.
               | 
               | - a member of the _Homo_ genus
               | 
               | - a male member of any of the two above
               | 
               | - a male adult member of any of the two above
               | 
               | - a brave member of any of the two above
               | 
               | - a soldier
               | 
               | - a commoner
               | 
               | - any vaguely humanoidly shaped being
               | 
               | The word is certainly not special in that regard; I could
               | give a similar list of "Russia" or "chess" for instance
               | -- but seemingly this word sometimes faces objection when
               | the meaning the objector desires not be the one used, and
               | I find that it is invariably a difference of gender lines
               | in meaning if that be the case, and that it is rooted in
               | gender politics.
               | 
                | I have never seen such wilful denial of descriptive
               | usage of the word "chess" and objections when it is used
               | to, for instance, mean the entire family of games that
               | descend from a common ancestor, or when it is used by
               | metaphor for any exercise in great tactical planning.
               | 
               | The original meaning of the word "man" was the first I
               | gave, all others arose by narrowing or widening it by
               | analogy, much as what happened with "chess".
        
             | edoceo wrote:
             | Yes
        
           | enumjorge wrote:
            | Case in point: alt-right communities that champion "free
            | speech", like Parler and certain subreddits, often ban
            | people who express opinions they disagree with.
        
             | Blikkentrekker wrote:
              | I was honestly thinking more about Holocaust denial, which
              | some consider too abhorrent but others consider within the
              | line of acceptability.
              | 
              | As in, the things that some wish were banned by law.
              | 
              | I don't think many of the _reddit_ moderators so notorious
              | for wanting an echo chamber would advocate it be criminally
              | illegal.
        
             | youdontlisten wrote:
             | Case in point, alt-left communities who champion "free
             | speech" like Hacker News often _shadowban_ people who
              | express opinions they disagree with, especially if it's
             | said in the wrong 'tone' of voice--direct aggression rather
             | than passive aggressiveness, which is the preferred
             | atmosphere here.
        
               | Blikkentrekker wrote:
               | I'm always suspicious when specifically "left" or "right"
               | or any other specific political colors are mentioned as
               | such with regards to censorship -- it reeks of not being
               | objective.
               | 
               | It is a problem of all mankind, not any particular
                | political color; the only color above it is of course
                | specifically that of the free speech advocates.
        
             | Mindwipe wrote:
             | Indeed, Parler banned lots of speech. Especially breasts.
        
         | DangerousPie wrote:
         | How do you deal with the problem that the content the current
         | community promotes isn't necessarily the content that is best
         | for your product in the long run (what if they are very
         | dismissive of new members, hampering growth?), or even legal?
        
           | halfmatthalfcat wrote:
            | Per TV show, it's not one big chat room; however, there is a
            | default "general" room that is used to that effect. It's very
            | much like Slack/Discord channels. Users are able to
            | essentially create topic rooms within a show.
            | 
            | Think of each TV show as its own Discord server, and within
            | the show are user-generated topic rooms.
            | 
            | My hope is that users basically self-silo into topic rooms
            | that interest them in regards to whatever show they're
            | watching.
            | 
            | For example: the Yankees are playing the Marlins. Users can
            | create a #yankees room, a #marlins room, a #umpire room, etc.
            | to create chat rooms around a given topic in regards to
            | whatever they're watching. In each room, a user has the
            | ability to block, filter words, etc...so they can tailor
            | their chat experience in whatever way they want while
            | watching any given show.
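            | 
            | In data-structure terms, a sketch of that layout could look
            | something like this (the class and room names are invented
            | for illustration, not the actual service's model):
            | 
            |     class Show:
            |         """A show owns a default "general" room plus any
            |         user-created topic rooms."""
            |         def __init__(self, title):
            |             self.title = title
            |             self.rooms = {"general": []}  # name -> messages
            | 
            |         def create_room(self, name):
            |             self.rooms.setdefault(name, [])
            | 
            |     game = Show("Yankees vs Marlins")
            |     game.create_room("yankees")
            |     game.create_room("umpire")
            |     game.rooms["yankees"].append(("fan1", "Let's go!"))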
        
         | yulaow wrote:
          | Honestly, after seeing the state of most subreddits (they
          | become echo chambers of whatever the main moderators find "of
          | value", and very toxic towards whatever goes against their
          | ideas), community self-moderation seems like a total failure.
        
           | Viliam1234 wrote:
           | Most people are stupid. Therefore I would expect most
           | communities to be moderated horribly. How could it be
           | otherwise if they are moderated by stupid people elected by
           | other stupid people? The good part is that people who are
           | better than average can create their own community which will
           | be better than average. How much better, that only depends on
           | those people.
           | 
           | The alternative is having some kind of elite moderators that
           | moderate all communities. It sets a lower bar on their
           | quality. Unfortunately, it also sets an upper bar on their
           | quality. Everything will be as good as the appointed elite
           | likes it, neither better nor worse.
           | 
            | From the perspective of what the average person sees, the
            | latter is probably better. From the perspective that I am an
            | individual who can choose a community or two to participate
            | in, and I don't care about the rest, the former is better.
        
           | halfmatthalfcat wrote:
           | Reddit still relies on human moderators, admins, etc. What
           | I'm talking about is a purely community driven moderation
           | scheme where, through various algorithms, the community
           | dictates what is and isn't acceptable.
        
             | mminer237 wrote:
             | Wouldn't that just be essentially upvoting and downvoting
             | and deleting downvoted posts?
             | 
             | That was originally the intent of reddit, that people would
             | downvote unconstructive comments, but that quickly turns
             | into the community "moderating" away anything they disagree
              | with, and enforcing an echo chamber.
             | 
             | I don't think communities can have enough objectivity to
             | effectively moderate themselves.
        
             | watwut wrote:
              | Parler moderated by showing a post to 5 random users and
              | taking their verdict.
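              | 
              | A toy sketch of that kind of jury mechanism (the panel size
              | comes from the description above; the majority threshold is
              | my assumption, I don't know Parler's actual rules):
              | 
              |     import random
              | 
              |     def jury_verdict(flags, users, jurors=5, threshold=3):
              |         """Show the post to a few random users and take a
              |         majority vote. flags(user) is True if that user
              |         would flag the post for removal."""
              |         panel = random.sample(users, jurors)
              |         removals = sum(1 for u in panel if flags(u))
              |         return "remove" if removals >= threshold else "keep"
              | 
              |     users = list(range(100))
              |     print(jury_verdict(lambda u: u % 2 == 0, users))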
        
         | Cthulhu_ wrote:
         | > I've been brewing up strategies of letting the community
         | itself moderate because a machine really cannot "see" what
         | content is good or bad, re: context.
         | 
         | There's a ton of material on that subject, thankfully; look at
         | news groups, HN itself (flagging), Stack Overflow, Joel
         | Spolsky's blogs on Discourse, etc etc etc. My girlfriend is
         | active on Twitter and frequently joins in mass reporting of
         | certain content, which is both a strong signal, and easily
         | influenced by mobs.
        
           | halfmatthalfcat wrote:
           | Do you have any references to those materials? Would love to
           | read up on some more academic takes.
        
         | wombatmobile wrote:
         | > I think it's evident from Facebook, Twitter, et al that human
         | moderation of very dynamic situations is incredibly hard, maybe
         | even impossible.
         | 
         | How would you know what the evidence tells us from those
         | platforms, when their criteria and resources for moderation are
         | proprietary and opaque?
         | 
         | FB is a profitable company.
         | 
         | Have you calculated how many moderators could be paid $20 per
         | hour out of $15.92 billion profit?
         | 
         | Approximately 400,000.
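          | 
          | (Back-of-the-envelope, assuming roughly 2,000 paid hours per
          | moderator per year: $15.92 billion / ($20/hr x 2,000 hr) is
          | about 398,000 moderators.)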
        
           | techbio wrote:
           | Sounds like a use case for some of those humanities majors.
        
           | mr-wendel wrote:
           | The problem with scaling upward is how incredibly soul-
           | rending the job is. At some level you're basically
           | guaranteeing that some number of people are going to be
           | traumatized.
           | 
           | Maybe at some point the better strategy is to limit public
           | exposure and favor segmenting some groups out into their own
           | space that requires extremely explicit opt-in measures? Hard
           | to say, and tucking it away into its own corner of the web
           | seems rife with its own problems.
           | 
           | As another commenter expressed on some other topic, this is a
           | long-running problem with many incarnations: Usenet, IRC,
           | BBSs, etc. It's become especially salient with the explosion
           | of social media platforms that include everyone from Grandma
           | to Grandson.
           | 
           | Bottom line... my heart goes out to moderators of these kind
           | of platforms.
        
         | repartix wrote:
         | I'd like to know what dang learned moderating HN.
        
         | mountainb wrote:
         | 'Bubbles' are a pejorative way of just saying 'local
         | communities.' Perhaps as a nation we do not benefit from having
         | an international and transparent global community that is
         | always on and always blaring propaganda and manipulation at
         | people.
        
           | davidivadavid wrote:
           | Yup, people have been brainwashed into thinking "bubbles" are
           | a problem when they've been the solution to that problem from
           | the start.
           | 
            | I've suggested that idea a million times; it's all yours for
            | the taking for those who want to implement it:
           | 
           | Build a social network where there is a per-user
           | karma/reputation graph, with a recursive mechanism to
           | propagate reputation (with decay): I like a post, that boosts
           | the signal/reputation from whoever posted, and from people
           | who liked it, and decreases signal from people who downvoted
           | it.
           | 
           | There can be arbitrarily more sophisticated propagation
           | algorithms to jumpstart new users by weighing their first few
           | votes more highly and "absorb" existing user reputation
           | graphs (some Bayesian updating of some kind).
           | 
           | Allow basic things like blocking/muting/etc with similar
           | effects.
           | 
           | This alone would help people curate their information way
           | more efficiently. There are people who post things I know for
           | a fact I never want to read again. That's fine, let me create
           | my own bubble.
           | 
            | The TrustNet/Freechains concepts seem adjacent, and it's the
            | first time I've come across them -- looks interesting.
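            | 
            | A bare-bones sketch of the propagation step (the decay
            | factor and data layout are invented for illustration):
            | 
            |     DECAY = 0.5  # propagation decay to earlier voters
            | 
            |     def apply_vote(rep, poster, prior_voters, weight=1.0):
            |         """weight > 0 for an upvote, < 0 for a downvote.
            |         prior_voters is a list of (user, vote_sign) pairs."""
            |         rep[poster] = rep.get(poster, 0.0) + weight
            |         for voter, sign in prior_voters:
            |             delta = weight * DECAY * (1.0 if sign > 0 else -1.0)
            |             rep[voter] = rep.get(voter, 0.0) + delta
            | 
            |     my_rep = {}
            |     apply_vote(my_rep, "author1", [("fan1", +1), ("critic1", -1)])
            |     # my_rep now boosts author1 and fan1, and penalizes critic1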
        
         | sandworm101 wrote:
         | >> a centralized service dictate what is and isn't "good".
         | 
         | This isn't movie reviews. Good and bad are not the standards.
         | The standard is whether or not something is illegal. When the
         | feds come knocking on your door because your servers are full
         | of highly illegal content, "we let them moderate themselves"
         | will be no defense.
        
           | Mindwipe wrote:
           | "Legal" is not a very good determination.
           | 
           | Discussion of homosexuality is "illegal" in many states. It
           | is a moral imperative for systems to break those laws.
        
           | pjc50 wrote:
           | "Legal" is both local and can change over time. America has
           | been in an unusual situation because it allows far more
           | speech, especially speech right up to the boundary of
           | "incitement to violence".
           | 
           | However, the police have far too much to do, so in practice
            | millions of blatantly illegal death threats get sent every
           | day and do not receive any police response. Hence the demand
           | for a non-police response that can far more cheaply remove
           | the death threats or threateners.
        
         | eternalban wrote:
         | Make a cogent and non-authoritarian case for even having a
         | "decentralized content moderation" which doesn't pass the "is
         | this an oxymoron?" smell test.
         | 
         | > I'm building a large, chatroom-like service for TV.
         | 
         | So the profit motive is likely the motivation for applying a
         | "central" doctrine of acceptable discourse using a
         | decentralized mechanism.
         | 
         | > While I think that community moderation will inevitably lead
         | to bubbles
         | 
          | Which allows for e.g. an atheist community to have content that
          | rips religion x's scripture to shreds (and why not?) on the
          | same planet that also has a religion-x community with content
          | that takes a bat to over-reaching rationalism. Oh the horror!
          | Diversity of thought. "We simply can not permit this."
        
         | EGreg wrote:
         | A news and celebrity pundit industry that operates in a
         | capitalist fashion has companies that employ people. They have
         | a profit motive and face market competition. This shapes their
         | behavior. News organizations that tell both sides of a story do
         | not provoke the sensationalism and outrage that news
         | organizations which tell only one side. So they don't get
         | shared as much.
         | 
         | The market literally selects for more one sided clickbaity
         | outrage articles.
         | 
         | Meanwhile social networks compete for your attention and
         | "engagement" for clicks on ads so their algorithms will show
         | you the stories that are the most outrageous and put you in an
         | echo chamber.
         | 
         | It's not some accident. It's by design.
         | 
         | If we were ok with slowing down the news and running it like
         | Wikipedia with a talk page, peer review, byzantine consensus,
         | whatever you want to call it -- concentric circles where people
         | digest what happens and the public gets a balanced view that is
         | based on collaboration rather than competition with a profit
         | motive - our society would be less divided and more informed.
         | 
         | Also, Apple and Google should start charging for notifications,
         | with an exception for real-time calls and self-selected
         | priority channels/contacts signing the notification payload.
          | Practically free notifications create a tragedy of the commons
          | and ruin our dinners!
        
       | eternalban wrote:
       | _I fear that many decentralised web projects are designed for
       | censorship resistance not so much because they deliberately want
       | to become hubs for neo-nazis, but rather out of a kind of naive
       | utopian belief that more speech is always better. But I think we
       | have learnt in the last decade that this is not the case. If we
       | want technologies to help build the type of society that we want
       | to live in, then certain abusive types of behaviour must be
       | restricted. Thus, content moderation is needed._
       | 
        | Let's unpack this:
        | 
        |     Axiom: a kind of naive utopian belief [exists that asserts]
        |     that more speech is always better. But I think we have
        |     learnt in the last decade that this is not the case.
       | 
       | False premise. The "naive belief", based on the _empirical_
        | evidence of history, is that prioritizing the suppression of
       | speech to address social issues is the hallmark of authoritarian
       | systems.
       | 
        | Martin also claims "we have learned" something that he is simply
        | asserting as fact. My lesson from the last 3 decades has been
        | that it was a huge mistake to let media ownership be concentrated
        | in the hands of a few. We used to have laws against this in the
        | 90s.
        | 
        |     Axiom: By "we" as in "we want", Martin means the community
        |     of likeminded people, aka the dreaded "filter bubble" or
        |     "community value system".
       | 
       | Who is this "we", Martin?                  Theorem: If we want
       | technologies to help build the type of society that we want to
       | live in, then certain abusive types of behaviour must be
       | restricted.
       | 
        | We already see that the "we" of Martin is a restricted subset of
        | "we the Humanity". There are "we" communities that disagree with
        | Martin's on issues ranging from: the fundamental necessity for
        | freedom of thought and conscience; the positive value of
        | diversity of thoughts; the positive value of unorthodox
        | ("radical") thought; the fundamental identity of the concept of
        | "community" with "shared values"; etc.
        | 
        |     Q.E.D.: Thus, content moderation is needed.
       | 
       | Give the man a PhD.
       | 
       | --
       | 
       | So here is a _parable_ of a man named Donald Knuth. This Donald,
       | while a highly respected and productive contributing member of
       | the  'Community of Computer Scientists of America' [ACM, etc.],
       | also sadly entertains irrational beliefs that "we" "know" to be
       | superstitious non-sense.
       | 
        | The reason that this otherwise sane man entertains these
        | nonsensical thoughts is the "filter bubble" of the community he
        | was raised in.
       | 
       | Of course, to this day, Donald Knuth has never tried to force his
       | views in the ACM on other ACM members, many of whom are devout
        | atheists. And should Donald Knuth ever try to preach his religion
       | in ACM, we would expect respectful but firm "community filter
       | bubble" action of ACM telling Mr. Knuth to keep his religious
       | views for his religious community.
       | 
       | But, "[i]f we want technologies to help build the type of society
       | that we want to live in" -- and my fellow "we", do "we" not agree
       | that there is no room for Donald Knuth's religious nonsense in
       | "our type of society"? -- would it not be wise to ensure that the
       | tragedy that befell the otherwise thoughtful and rational Donald
        | Knuth could not happen to other poor unsuspecting people who happen
       | to be born and raised in some "fringe" community?
       | 
       | "Thus, content moderation is needed."
        
       | EGreg wrote:
       | Let's take one step back. Just like in the Title I vs Title II
       | debate, let's go one step earlier. WHY do we have these issues in
       | the first place?
       | 
       | It's because our entire society is permeated with ideas about
       | capitalism and competition being the best way to organize
       | something, almost part of the moral fabric of the country.
       | Someone "built it", now they ought to "own" the platform. Then
       | they get all this responsibility to moderate, not moderate, or
       | whatever.
       | 
       | Compare with science, wikipedia, open source projects, etc. where
       | things are peer reviewed before the wider public sees them, and
       | there is collaboration instead of competition. People contribute
       | to a growing snowball. There is no profit motive or market
       | competition. There is no private ownership of ideas. There are no
       | celebrities, no heroes. No one can tweet to 5 million people at 3
       | am.
       | 
       | Somehow, this has mistakenly become a "freedom of speech" issue
       | instead of an issue of capitalism and private ownership of the
       | means of distribution. In this perverse sense, "freedom of
       | speech" even means corporations should have a right to buy local
       | news stations and tell news anchors the exact talking points to
       | say, word for word, or replace the human mouthpieces if they
       | don't...
       | 
       | Really this is just capitalism, where capital consists of
       | audience/followers instead of money/dollars. Top-down control by
       | a corporation is normal in capitalism. You just see a landlord
       | (Parler) crying about a higher landlord ... ironically crying to
       | the even higher landlord, the US government - to use force and
       | "punish" Facebook.
       | 
       | Going further, it means corporations (considered by some to have
       | the same rights as people) using their infrastructure and
       | distribution agreements to push messages and agendas crafted by a
       | small group of people to millions. Celebrity culture is the
       | result. Ashton Kutcher was the first to reach 1 million Twitter
       | followers because kingmakers in the movie industry chose him
       | earlier on to star in movies, and so on down the line.
       | 
       | Many companies themselves employ social media managers to
       | regularly moderate their own Facebook Pages and comments,
       | deleting even off-topic comments. Why should they have an
       | inalienable right to be on a platform? So these private companies
       | can moderate inside their own websites and pages, and choose not
       | to partner with someone, but the private companies Facebook and
       | Twitter should be prevented from making decisions about content
       | on THEIR own platforms? You want a platform that can't kick you
       | off? It's called open source software, and decentralized
       | networks. You know what they don't have?
       | 
       | Private ownership of the whole network. "But I built it so I get
       | to own it" is the capitalist attitude that leads to exactly this
       | situation. The only way we will get there is if people build it
       | and then DON'T own the whole platform. Think about it!
        
       | lawrencevillain wrote:
       | Obviously upvoting and downvoting is not enough for adequate
       | moderation. There's still the aspect of people trolling and
       | generally posting horrible things online. There's a reason
       | Facebook had to pay $52 million to content moderators for the
       | trauma/PTSD they suffered.
        
       | neiman wrote:
       | We (at Almonit) work on a self-governing publication system which
       | would bring democratic control to content moderation.
       | 
       | We just wrote about its philosophy earlier this week.
       | 
       | https://almonit.com/blog/2021-01-08/self-governing_internet_...
        
         | theylon wrote:
         | This is an interesting concept. Are there any implementations
         | of this?
        
       | kstrauser wrote:
       | I'm active with Mastodon and absolutely love its moderation
       | model. In a nutshell:
       | 
       | - It's made up of a bunch of independent servers, or "instances".
       | The common analogy here is to email systems.
       | 
       | - If you want to join the federation, stand up an instance and
       | start using it. Voila! Now you're part of it.
       | 
       | - My instance has a lot of users, and I don't want to run them
       | off, so it's in my own interest to moderate my own instance in a
       | way that my community likes. Allow too much in without doing
       | anything? They leave. Tighten it so that it starts losing its
       | value? They leave. There's a feedback mechanism that guides me to
       | the middle road.
       | 
       | - But my users _can_ leave for greener pastures if they think
       | I'm doing a bad job and think another instance is better. They're
       | not stuck with me.
       | 
       | The end result is that there are thousands of instances with
       | widely varied moderation policies. There are some "safe spaces"
       | where people who've been sexually assaulted hang out and that
       | have zero tolerance for harassment or trolling. There are others
       | that are very laissez faire. There's a marketplace of styles to
       | choose from, and no one server has to try to be a perfect fit for
       | everyone.
       | 
       | I realize that this is not helpful information for someone who
       | wants to run a single large service. I bring it up just to point
       | out that there's more than one way to skin that cat.
       | 
       | (That final idiom would probably get me banned on some servers.
       | And that's great! More power to that community for being willing
       | and able to set policies, even if I wouldn't agree with them.)
        
         | makeworld wrote:
         | Couldn't have said it better myself. Mastodon appears to solve
         | all the moderation problems I've seen raised about social
         | media.
        
       | [deleted]
        
       | jonathanstrange wrote:
       | Since I'm using libp2p in Go for a side project, may I take this
       | opportunity to ask how this could work in principle for a
       | decentralized network? The way I see it, this seems to be
       | impossible but maybe I'm missing something.
       | 
       | For example, in my network, anyone can start a node and the user
       | has full control over it. So how would you censor this node? The
       | following ideas don't seem to work:
       | 
       | 1. Voting or another social choice consensus mechanism. Problems:
       | 
       | - Allows a colluding majority to mount DoS attacks against
       | anyone.
       | 
       | - Can easily be circumvented by changing host keys / creating a
       | new identity.
       | 
       | 2. The equivalent of a killfile: Users decide to blacklist a
       | node, dropping all connections to it. Problems:
       | 
       | - Easy to circumvent by creating new host keys / creating a new
       | identity.
       | 
       | 3. Karma system: This is just the same as voting / social choice
       | aggregation and has the same problems.
       | 
       | 4. IP banning by distributing the blocked IPs with the binaries
       | in frequent updates. Problem:
       | 
       | - Does not work well with dynamic IPs and VPNs.
       | 
       | Basically, I can't see a way to prevent users from creating new
       | identities / key pairs for themselves whenever the old one has
       | been banned. Other than security by obscurity nonsense ("rootkit"
       | on the user's machine, hidden keys embedded in binaries, etc.) or
       | a centralized server as a gateway, how would you solve that
       | problem?
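       | 
       | To make option 2 concrete, here is roughly what I mean: a
       | minimal, self-contained Go sketch (no real libp2p APIs; node
       | identities are just hypothetical public-key fingerprints), which
       | also shows exactly how a fresh keypair defeats it:
       | 
       |     package main
       | 
       |     import "fmt"
       | 
       |     // Hypothetical sketch: a node identity is the fingerprint of
       |     // its public key, modelled here as a plain string.
       |     type NodeID string
       | 
       |     // Killfile is a purely local blocklist of node identities.
       |     type Killfile struct{ blocked map[NodeID]bool }
       | 
       |     func NewKillfile() *Killfile {
       |         return &Killfile{blocked: map[NodeID]bool{}}
       |     }
       | 
       |     // Block records that we drop all connections from id.
       |     func (k *Killfile) Block(id NodeID) { k.blocked[id] = true }
       | 
       |     // Allow is consulted before accepting a connection from id.
       |     func (k *Killfile) Allow(id NodeID) bool { return !k.blocked[id] }
       | 
       |     func main() {
       |         kf := NewKillfile()
       |         kf.Block("abusive-node-key-fingerprint")
       |         fmt.Println(kf.Allow("abusive-node-key-fingerprint")) // false
       |         // The circumvention problem in one line: the banned
       |         // operator generates a fresh keypair and is allowed again.
       |         fmt.Println(kf.Allow("freshly-generated-fingerprint")) // true
       |     }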
        
         | JulianMorrison wrote:
         | The way it works in Mastodon is that (1) not everyone runs a
         | node, but there are many nodes, and they each have their own
         | policies and can kick users off, and (2) nodes can blacklist
         | other nodes they federate content from.
         | 
         | This two level split allows node operators to think of most
         | other users at the node level, which means dealing with far
         | fewer entities. It provides users with a choice of hosts, but
         | means that their choice has consequences.
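         | 
         | A minimal Go sketch of that two-level split (the names and the
         | "user@server" format are made up for illustration, not
         | Mastodon's actual data model):
         | 
         |     package main
         | 
         |     import "fmt"
         | 
         |     // An instance moderates its own view of the network at two
         |     // levels: individual accounts and entire remote servers.
         |     type Instance struct {
         |         blockedAccounts map[string]bool // "user@server"
         |         blockedServers  map[string]bool // whole instances
         |     }
         | 
         |     // Accept decides whether to show a post from user@server.
         |     func (i *Instance) Accept(server, user string) bool {
         |         if i.blockedServers[server] {
         |             return false // one decision covers every account there
         |         }
         |         return !i.blockedAccounts[user+"@"+server]
         |     }
         | 
         |     func main() {
         |         inst := &Instance{
         |             blockedAccounts: map[string]bool{"troll@social.example": true},
         |             blockedServers:  map[string]bool{"spam.example": true},
         |         }
         |         fmt.Println(inst.Accept("spam.example", "anyone"))       // false
         |         fmt.Println(inst.Accept("social.example", "troll"))      // false
         |         fmt.Println(inst.Accept("social.example", "goodposter")) // true
         |     }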
        
         | yorwba wrote:
         | > I can't see a way to prevent users from creating new
         | identities / key pairs for themselves whenever the old one has
         | been banned.
         | 
         | You could prevent banned users from returning with a new
         | identity by disallowing the creation of new identities. E.g.
         | many Mastodon instances disable their signup pages and new
         | users can only be added by the admins.
         | 
         | If you don't want to put restrictions on new identities, you
         | could still treat them as suspect by default. E.g. apply a kind
         | of rate limiting where content created by new users is shown at
         | most once per day and the limit rises slowly as the user's
         | content is viewed more and more without requiring moderation.
         | (This is a half-baked idea I had just now, so I'm sure there
         | are many drawbacks. But it might be worth a shot.)
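         | 
         | In Go, the half-baked version might look something like this
         | (the thresholds and field names are arbitrary placeholders):
         | 
         |     package main
         | 
         |     import "fmt"
         | 
         |     // An identity starts out "suspect by default"; its daily
         |     // exposure quota grows as its content accumulates views
         |     // that never triggered a moderation flag.
         |     type Identity struct {
         |         CleanViews int // unflagged views of this identity's content
         |         ShownToday int // posts of theirs already surfaced today
         |     }
         | 
         |     // dailyQuota starts at one post per day and rises slowly.
         |     func dailyQuota(id Identity) int {
         |         return 1 + id.CleanViews/1000
         |     }
         | 
         |     // shouldShow decides whether to surface one more post today.
         |     func shouldShow(id Identity) bool {
         |         return id.ShownToday < dailyQuota(id)
         |     }
         | 
         |     func main() {
         |         newcomer := Identity{CleanViews: 0, ShownToday: 1}
         |         regular := Identity{CleanViews: 5000, ShownToday: 3}
         |         fmt.Println(shouldShow(newcomer)) // false: quota of 1 used up
         |         fmt.Println(shouldShow(regular))  // true: quota of 6, 3 shown
         |     }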
        
         | Ajedi32 wrote:
         | I've thought about this a lot. Currently, my preferred solution
         | to the problem of Sybil attacks in decentralized social
         | networks is a reputation system based on a meritocratic web of
         | trust.
         | 
         | Basically it would work something like this: By default,
         | clients hide content (comments, submissions, votes, etc)
         | created by new identities, treating it as untrusted (possible
         | spam/abusive/malicious content) unless another identity with a
         | good reputation vouches for it. (Either by vouching for the
         | content directly, or vouching for the identity that submitted
         | it.) Upvoting a piece of content vouches for it, and increases
         | your identity's trust in the content's submitter. Flagging a
         | piece of content distrusts it and decreases your identity's
         | trust in the content's submitter (possibly by a large amount
         | depending on the flag type), and in other identities that
         | vouched for that content. Previously unseen identities are
         | assigned a reputation based on how much other identities you
         | trust (and the identities _they_ trust, etc.) trust or
         | distrust that unseen identity.
         | 
         | The advantage of this system is that it not only prevents
         | Sybil attacks, but also doubles as a form of fully decentralized
         | community-driven moderation.
         | 
         | That's the general idea anyway. The exact details of how a
         | system like that would work probably need a lot of fleshing out
         | and real-world testing in order to make them work effectively.
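         | 
         | As a rough Go sketch of the score-propagation part (one hop
         | only, with made-up names and weights; a real system would
         | recurse with damping and handle far more cases):
         | 
         |     package main
         | 
         |     import "fmt"
         | 
         |     type ID string
         | 
         |     // TrustGraph holds each identity's local trust scores, in
         |     // [-1, 1], for identities it has upvoted or flagged.
         |     type TrustGraph map[ID]map[ID]float64
         | 
         |     // scoreFor estimates how much viewer should trust an unseen
         |     // identity by averaging the opinions of identities the
         |     // viewer already trusts, weighted by that trust.
         |     func scoreFor(g TrustGraph, viewer, unseen ID) float64 {
         |         var sum, weight float64
         |         for friend, t := range g[viewer] {
         |             if t <= 0 {
         |                 continue // ignore identities the viewer distrusts
         |             }
         |             if opinion, ok := g[friend][unseen]; ok {
         |                 sum += t * opinion
         |                 weight += t
         |             }
         |         }
         |         if weight == 0 {
         |             return 0 // no information: untrusted by default
         |         }
         |         return sum / weight
         |     }
         | 
         |     func main() {
         |         g := TrustGraph{
         |             "alice": {"bob": 0.9, "carol": 0.4},
         |             "bob":   {"dave": 0.8},  // bob vouches for dave
         |             "carol": {"dave": -0.5}, // carol flagged dave's posts
         |         }
         |         fmt.Printf("%.2f\n", scoreFor(g, "alice", "dave"))
         |     }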
        
       | emaro wrote:
       | Matrix published an interesting concept for decentralised content
       | moderation [0]. I think this is the way to go.
       | 
       | Edit: Discussed here [1] and here [2].
       | 
       | [0]: https://matrix.org/blog/2020/10/19/combating-abuse-in-
       | matrix...
       | 
       | [1]: https://news.ycombinator.com/item?id=24826951
       | 
       | [2]: https://news.ycombinator.com/item?id=24836987
        
       | AstroNoise58 wrote:
       | I find it pretty interesting that Martin does not mention the
       | kind of community member-driven up/downvote mechanism found on
       | this site (and elsewhere) as an example of decentralised content
       | moderation.
       | 
       | Edit: now I see Slashdot and Reddit mentioned at the end in the
       | updates section (I don't remember seeing them on my first read,
       | but that might just be me).
        
         | maurys wrote:
         | He mentions Reddit at the end of the article, which is close
         | enough in mechanism to Hacker News.
        
         | totemandtoken wrote:
         | "We vote on values, we bet on beliefs" - Robin Hanson.
         | 
         | Voting tells us what we value, but that isn't the same as what
         | is good for us. It also treats all content as somewhat
         | equivalent, which isn't true. A call to (maybe violent) action
         | isn't the same thing as sharing a cute cat video.
        
         | mytailorisrich wrote:
         | Up/downvote mechanisms always end up as agree/disagree votes.
         | 
         | Moderation is not the same. It is not about agreeing but about
         | filtering out content that is not acceptable (off-topic,
         | illegal, insulting).
         | 
         | Article quote: " _In decentralised social media, I believe that
         | ultimately it should be the users themselves who decide what is
         | acceptable or not_ "
         | 
         | In my view that is only workable if it means users define the
         | rules because, as said above, I think 'voting' on individual
         | pieces of content always leads to echo chambers and to
         | censoring dissenting views.
         | 
         | Of course this may be fine within an online community focused
         | on one topic or interest, but probably not if you want to
         | foster open discussions and a plurality of views and opinions.
         | 
         | We can observe this right here on HN. On submissions that are
         | prone to trigger strong opinions, downvotes and flagging
         | explode.
        
         | Steltek wrote:
         | How would up/down votes work on a decentralized platform?
         | Wouldn't it be easy to game by standing up your own server and
         | wishing up a legion of sockpuppets?
         | 
         | There's a whole moonshot's worth of spam-resistance work that's
         | going to need to happen in Mastodon/Matrix/Whatever.
        
           | nine_k wrote:
           | Decentralized networks need trust, and trust is not a Boolean
           | value.
           | 
           | With a centralized service, trust is simple: how much you
           | trust the single entity that represents the service.
           | 
           | In a distributed network, nodes need to build trust in each
           | other. In the best-known federated network, email, domain
           | reputation is a thing. Various blacklists and graylists pass
           | around trust values in bulk.
           | 
           | So a node with a ton of sock puppets trying to spam votes (or
           | content) is going to lose the trust of its peers fast, and the
           | spam from it will end up marked as such. A well-run node will
           | gain considerable trust with time.
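           | 
           | A toy Go sketch of what "trust is not a Boolean" could look
           | like per node (the numbers are arbitrary and the names
           | hypothetical):
           | 
           |     package main
           | 
           |     import "fmt"
           | 
           |     // NodeRep is a peer's running reputation score for one
           |     // remote node, in the spirit of email domain reputation.
           |     type NodeRep struct{ score float64 } // starts neutral at 0
           | 
           |     // Report nudges the score: trust is earned slowly by
           |     // accepted content and lost quickly on spam reports.
           |     func (r *NodeRep) Report(spam bool) {
           |         if spam {
           |             r.score -= 1.0
           |         } else {
           |             r.score += 0.1
           |         }
           |     }
           | 
           |     // Suspicious means votes/content from this node get
           |     // discounted or marked as likely spam.
           |     func (r *NodeRep) Suspicious() bool { return r.score < -5 }
           | 
           |     func main() {
           |         var puppetFarm NodeRep
           |         for i := 0; i < 10; i++ {
           |             puppetFarm.Report(true) // ten spam reports
           |         }
           |         fmt.Println(puppetFarm.Suspicious()) // true
           |     }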
           | 
           | This, of course, while helpful, does not _guarantee_
           | "fairness" of any kind. If technology and people's values
           | clash, the values prevail. You cannot alter values with
           | technology alone (even weapon technology).
        
           | dboreham wrote:
           | The problem you describe is called Sybil resistance and is
           | known to be hard, but there are some example working systems
           | such as Bitcoin.
        
         | commandlinefan wrote:
         | FTA: "even though such filtering saves you from having to see
         | things you don't like, it doesn't stop the objectionable
         | content from existing". He doesn't want upvoting/downvoting, he
         | wants complete eradication (of whatever the majority happens to
         | object to right now).
        
           | zozbot234 wrote:
           | On the other hand, decentralised filtering out of
           | objectionable content might go hand-in-hand with replicating
           | and thus preserving the most valuable content. Empirically,
           | 90% of the content in most decentralized systems (e-mail, the
           | Web etc.) is worthless spam that 99.99999% of users or more
           | (a rather extreme majority if there ever was one) will never
           | care about and could be eradicated with no issues whatsoever.
        
           | throwaway2245 wrote:
           | > of whatever the majority happens to object to right now
           | 
           | I don't see that Martin Kleppmann is using 'democracy' to
           | mean 'majoritarianism' here. He makes considered points about
           | how to form and implement policies against harmful content,
           | and appears to talk about agreement by consensus.
           | 
           | Democracy and majoritarianism are (in general) quite
           | different things. This might be more apparent in European
           | democracies.
        
             | Viliam1234 wrote:
             | He plays a little trick by saying "ultimately it should be
             | the users themselves who decide what is acceptable or not".
             | This has two meanings, somewhat contradictory.
             | 
             | The straightforward meaning is that ultimately I decide
             | what is acceptable or not for me, and you decide what is
             | acceptable or not for you. We can, and likely will, have a
             | different opinion on different things.
             | 
             | But the following talk of "governance" and "democratic
             | control" suggests that the ones who ultimately decide are
             | not users as individuals, but rather some kind of process
             | that would be called democratic in some sense. Ultimately,
             | someone else will make the decision for you... but you can
             | participate in the process, if you wish... but if your
             | opinions are too unusual, you will probably lose anyway...
             | and then the rest of us will smugly congratulate ourselves
             | for giving you the chance.
             | 
             | > Democracy and majoritarianism are (in general) quite
             | different things.
             | 
             | Sure, a minority can have rights as long as it is popular,
             | rich, well organized, able to make coalitions with other
             | minorities, or too unimportant to attract anyone's
             | attention. But that still means living under the potential
             | threat. I don't see a reason why online communities would
             | have to be built like this, if instead you could create a
             | separate little virtual universe for everyone who wished to
             | be left alone... and then invent good tools for navigating
             | these universes, to make it convenient, from user
             | perspective, to create their places, to invite and be
             | invited, and to exclude those who don't follow the local
             | rules (who in turn can create their own places and compete
             | for popularity).
        
               | throwaway2245 wrote:
               | > The straightforward meaning is that ultimately I decide
               | what is acceptable or not for me, and you decide what is
               | acceptable or not for you.
               | 
               | I disagree that this is straightforward in meaning. Even
               | if I do have a good idea of what is unacceptable to me, I
               | need someone external to screen for that. If the point is
               | to avoid personally facing the content that I find
               | unacceptable, it's impossible for me to adequately
               | perform this screening on my own behalf.
               | 
               | I can instruct or employ someone (or something) to do
               | this, but then ultimately they will make the decision for
               | me. It's only plausible to do this at scale, unless I'm
               | wealthy enough to employ my own personal cup-bearer who
               | accepts the harm. So, it makes sense to band together
               | with other users with similar requirements.
               | 
               | Your claim seems to be that delegating these decisions is
               | a bad thing that should be avoided, but it is an
               | essential and inevitable part of this service - I _have
               | to_ delegate that decision to someone else, or I won't
               | get that service.
               | 
               | This is not to mention legal restrictions on content in
               | different jurisdictions, which define a minimum standard
               | of moderation and responsibility, that may include
               | additional risk wherever they are not fully defined.
        
               | webmaven wrote:
               | _> I can instruct or employ someone (or something) to do
               | this, but then ultimately they will make the decision for
               | me. It's only plausible to do this at scale, unless I'm
               | wealthy enough to employ my own personal food-taster, so
               | it makes sense to band together with other users with
               | similar requirements._
               | 
               | And here we run into the issue that economists and
               | political scientists call "the Principal-Agent
               | problem"[0].
               | 
               | Whether we're talking about the management of a firm
               | acting in the interests of owners, elected officials
               | acting in the interests of voters, or moderators of
               | communication platforms acting in the interest of users,
               | this isn't a solved problem.
               | 
               | And in fact, that last has extra wrinkles since there is
               | not agreement on just whose interests the moderator is
               | supposed to prioritize (there can be similar disagreement
               | regarding company management, but at least the
               | disagreement itself is far better defined).
               | 
               | This is deeply messy, and as hard as it is now, it is
               | only going to get worse with every additional human that
               | is able to access and participate in these systems.
               | 
               | [0] https://en.m.wikipedia.org/wiki/Principal%E2%80%93age
               | nt_prob...
        
           | nine_k wrote:
           | This is the crux of censorship. If anything, it hinges on
         | hubris: the censor presumes to know which content deserves to
           | exist at all.
           | 
           | The need for censoring content still exists because certain
           | kinds of content are deemed illegal, and failure to remove
         | it may result in jail time.
           | 
           | On the other hand, _moderation_ is named very aptly.
           | 
           | That said, I fully support the right of private companies to
           | censor content on their premises as they see fit. If they do
           | a poor job, I can just avoid using their services.
        
             | InsomniacL wrote:
             | - Devil's advocate
             | 
             | > I fully support the right of private companies to censor
             | content on their premises as they see fit.
             | 
             | Those private companies don't have the right to censor
             | content on their premises 'as they see fit' without giving
             | up protections afforded to them in law as 'platforms'. The
             | question is at what level of moderation and/or bias do they
             | become a 'publisher', not a 'platform'.
             | 
             | > If they do a poor job, I can just avoid using their
             | services.
             | 
             | Issues arise when the poor job spills over outside their
             | service. As an example, the people who live around the US
             | Capitol were endangered by pipe bombs in part because of
             | incitement organised on Twitter.
        
               | nine_k wrote:
               | They don't have the common carrier protections. That is,
               | phone companies are not required to censor hate speech,
               | and ISPs are not required to censor unlawful content that
               | passes through their pipes, because they are just, well,
               | pipes, oblivious of the bytes they pass.
               | 
               | Platforms are in the business of making content
               | _available_, so they are forced to control the
               | availability, and censor unlawful content. They choose to
               | censor any "objectionable content" along the way, without
               | waiting for PR attacks or lawsuits. I can understand
               | that.
               | 
               | (What is harder for me to understand is when these same
               | platforms extoll the freedom of expression. I'd like them
               | to be more honest.)
        
               | chrisoverzero wrote:
               | > Those private companies don't have the right to censor
               | content on their premises 'as they see fit' without
               | giving up protections afforded to them in law as
               | 'platforms'.
               | 
               | Not only do they have that right, but there's no such
               | thing as "protections afforded to them in law as
               | 'platforms'": "No
               | provider or user of an interactive computer service shall
               | be treated as the publisher or speaker of any information
               | provided by another information content provider."
               | 
               | > The question is at what level of moderation and/or bias
               | do they become a 'publisher', not a 'platform'.
               | 
               | This idea of "publisher vs. platform" has been entirely
               | made up by people with no understanding of the state of
               | the law. [1] "Bias" doesn't play into it - they can do
               | what they want, in good faith, on their website. Hacker
               | News (via its moderators) has a bias against low-effort
               | "shitposting" and posts which fan racial flames. It's so
               | frequent and well-known that it could become a tagline,
               | "Hacker News: Please Don't Do This Here". At what level
               | of curation of non-flamey posts does it become a
               | publisher due to this bias?
               | 
               | [1]: https://www.eff.org/deeplinks/2020/12/publisher-or-
               | platform-...
        
           | mytailorisrich wrote:
           | Moderation is not upvoting/downvoting.
           | 
           | For example, when you moderate a debate you do not silence
           | opinions you disagree with, you simply ensure that people
           | express themselves within 'acceptable' boundaries, which
           | usually means civility.
           | 
           | To me this means that 'decentralised content moderation' is
         | largely a utopia: whilst the rules may be defined by the
           | community, letting everyone moderate will, in my view, always
         | end up being similar to upvoting/downvoting, which is a vote
           | of agreement/disagreement.
        
         | meheleventyone wrote:
         | Isn't it just an example of democratic content moderation? We
         | upvote, downvote, and flag content. We don't get the ability
         | to do so unless we are a community member of some tenure. It's
         | augmented by centralized moderation by a handful of moderators.
         | 
         | How well it works is always a topic here.
        
           | randompwd wrote:
           | > Isn't it just an example of democratic content moderation
           | 
           | A democracy makes great efforts to ensure 1 person = 1 vote.
           | Online platforms do not.
        
             | [deleted]
        
           | freeqaz wrote:
           | We don't see all of the countless hours spent by mods like
           | dang to keep the quality high. It's a thankless job most of
           | the time!
        
             | meheleventyone wrote:
             | Having moderated some large forums in the past I know!
        
       ___________________________________________________________________
       (page generated 2021-01-14 23:02 UTC)