[HN Gopher] Is content moderation a dead end?
       ___________________________________________________________________
        
       Is content moderation a dead end?
        
       Author : ksec
       Score  : 98 points
       Date   : 2021-04-13 18:10 UTC (4 hours ago)
        
 (HTM) web link (www.ben-evans.com)
 (TXT) w3m dump (www.ben-evans.com)
        
       | nxpnsv wrote:
        | I'd love to see an anonymous peer-review approach: to post
        | anything, you need to review X other posts, and until a post
        | has N accepts it stays invisible. I think it could work, but I
        | am sure HN can tell me how I am wrong :)
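        | 
        | A minimal sketch of that gating logic, assuming a simple
        | in-memory model (the names and thresholds are illustrative):
        | 
        |     REVIEWS_REQUIRED = 3  # X: reviews owed before you may post
        |     ACCEPTS_REQUIRED = 2  # N: accepts before a post is shown
        | 
        |     class Post:
        |         def __init__(self, author):
        |             self.author = author
        |             self.accepts = 0
        |             self.visible = False
        | 
        |     class User:
        |         def __init__(self):
        |             self.reviews_done = 0
        | 
        |     def review(reviewer, post, accept):
        |         # Every review counts toward the reviewer's quota.
        |         reviewer.reviews_done += 1
        |         if accept:
        |             post.accepts += 1
        |             post.visible = post.accepts >= ACCEPTS_REQUIRED
        | 
        |     def try_post(user, posts):
        |         if user.reviews_done < REVIEWS_REQUIRED:
        |             return None  # must review others' posts first
        |         user.reviews_done -= REVIEWS_REQUIRED
        |         post = Post(user)
        |         posts.append(post)
        |         return post
        | 
        | Note that nothing here checks that the reviewers are distinct
        | people, which is exactly the hole the reply below drives a
        | script through.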
        
         | Viliam1234 wrote:
          | In other words, before posting you need to randomly click on X
          | other posts, and to make your posts visible, you need to have N
          | accounts. I believe I could write such a script in an
          | afternoon.
        
       | jaredwiener wrote:
       | On a little bit of a corollary --
       | 
       | "Who gets to decide what's true?" is the wrong question. We
       | should be asking "how do we determine what's true?"
       | https://blog.nillium.com/fighting-misinformation-online/
        
         | numpad0 wrote:
          | Dreadful as it sounds, maybe the truth truly doesn't matter.
          | You don't have to live fully anchored down in the baseline
          | reality; your worldview just has to be able to do more than
          | sustain you.
          | 
          | Maybe it's okay to be waist-deep in QAnon schizophrenia so
          | long as the rest of your life is also okay, and vice versa.
          | Though those imaginings aren't my kind of powdered grape
          | juice.
        
         | ozymandium wrote:
         | Who is this "we"?
        
       | jonnycomputer wrote:
        | A system that limits the number of posts individuals can make
        | on a per-article or per-diem basis would go a long way toward
        | silencing overly strident voices which, out of sheer verbosity
        | and pertinacity, make it appear that their (often) extreme
        | views are more prevalent and widely accepted than they in fact
        | are.
        | 
        | An alternative is to generate an in-forum currency that can be
        | spent on comments, either on a per-post or per-word basis. This
        | currency could be earned based on reputation--but as we see
        | here on HN, and many other places, upvotes do not always go to
        | the most thoughtful and engaging comments--or some other metric
        | (statistical distinguishability of posts from the corpus of
        | posts? not sure).
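        | 
        | As a sketch, the per-diem cap and the currency idea can share
        | one mechanism: each post spends from a balance that refills
        | slowly over time, i.e. a token bucket keyed per account (all
        | numbers here are illustrative):
        | 
        |     import time
        | 
        |     class PostingBudget:
        |         def __init__(self, daily_allowance=5.0):
        |             self.daily_allowance = daily_allowance
        |             self.balance = daily_allowance
        |             self.last_refill = time.time()
        | 
        |         def _refill(self):
        |             now = time.time()
        |             days = (now - self.last_refill) / 86400.0
        |             self.balance = min(self.daily_allowance,
        |                                self.balance
        |                                + days * self.daily_allowance)
        |             self.last_refill = now
        | 
        |         def try_spend(self, word_count):
        |             self._refill()
        |             cost = 1.0 + word_count / 100.0  # per-word pricing
        |             if self.balance < cost:
        |                 return False  # over the cap; post rejected
        |             self.balance -= cost
        |             return True
        | 
        | Reputation would then just be a multiplier on daily_allowance,
        | however that ends up being measured.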
        
       | gumby wrote:
       | > Anyone can find their tribe, ...but the internet is also a way
       | for Nazis or jihadis to find each other.
       | 
        | I'm delighted that the net is a way for nazis and jihadis to
        | find each other. It's a global honeypot. Driving them
        | underground doesn't make them go away, just makes them harder
        | to find. We saw this with the January 6th crowd.
        | 
        | We also saw this with the publicity-seeking attorneys general
        | who got rid of craigslist hookups and Backpage: sex trafficking
        | continues but is harder to find and prosecute.
        
         | watwut wrote:
         | Driving them underground makes them weaker and less
         | influential. I don't need them to be known. I want them weak.
        
       | justbored123 wrote:
        | This is extremely short-sighted. The complete opposite is more
        | likely to be true. Privacy is almost dead, and soon it will be
        | almost impossible to hide your real identity on the internet
        | and thus avoid consequences for your actions. That will allow
        | companies to blacklist you across the internet. If you are an
        | a-hole on, let's say, Facebook - harassing people, doxing them,
        | grooming children, running scams or fake advertisements, etc. -
        | and Facebook bans you because you are bad for advertising, they
        | will be able to put you on a blacklist and ban you from all
        | other sites even if you use a different IP, browser, account,
        | etc. There are endless ways to tell you are the same person,
        | and it's getting worse: for starters, your phone, SSO,
        | browser/extension fingerprinting, etc.
        | 
        | It's going to be a lot like your credit score, your criminal
        | record, etc.
        | 
        | At the end of the day companies want advertising money, and if
        | you scare the ads away, the same networks that control those
        | ads are going to end up keeping track of you to keep you away.
        | 
        | Once that anonymity is completely gone, the internet will be
        | just like real life. If you are a problematic assh*le, you'll
        | get a record saying just that, which employers, landlords,
        | schools, etc. are going to check, just like they do now with
        | credit scores, criminal records and school records. And if you
        | don't behave, you are going to be marginalized and banned from
        | polite society, like what happens in real life outside the
        | internet.
        
         | proc0 wrote:
         | > That will allow companies to black list you across the
         | internet
         | 
          | Thus giving full control over people to private entities that
          | are increasingly not held accountable by anything or anyone.
         | 
          | Why would we want a private entity, not elected by the people,
          | to decide our morality? Enforcing morality is dangerous and
          | arguably immoral, since it uses force to align people's
          | thinking, which treats everyone like children who are learning
          | instead of adults with free will.
        
           | gumby wrote:
           | I assumed that the GP poster wasn't endorsing this...but
           | perhaps they were?
        
             | proc0 wrote:
             | I meant it as an observation, I'm not sure if OP was
             | condoning it or not. Either way, that would be the reality
             | of it, with such entities wielding so much control.
        
           | sigstoat wrote:
           | > Why would we want a private entity, not elected by the
           | people, to decide our morality?
           | 
           | i got the impression from the separation of church and state
           | that we don't generally want elected officials deciding what
           | is or is not moral.
           | 
           | merely what is legal.
        
           | asciident wrote:
           | Like the credit bureaus? Seems like they've been doing this
           | for decades already.
        
             | stevesimmons wrote:
              | Credit bureaus are also highly regulated.
        
             | proc0 wrote:
              | Yeah, pretty much, I think. We're definitely there as far
              | as financial institutions go; I just hope we're not on our
              | way there with the communication and social institutions
              | (increasingly dominated by the Internet).
        
             | jasonfarnon wrote:
             | How do credit bureaus "enforce morality"? I think the point
             | is that the effect of a bad credit rating is much more
             | limited than a universal blackout on the internet. In fact
             | strictly more limited, to the extent that many creditors
             | rely on social media in making their decisions.
        
               | asciident wrote:
               | I strongly disagree. Bad credit (sometimes even wrongly
               | attributed) can block you from jobs, mobile plans, bank
               | accounts, credit/debit cards, renting, etc. I'd rather be
               | blocked from Facebook than be told I can't rent an
               | apartment or be disqualified from a job.
        
             | marcusverus wrote:
             | This is very different. Credit bureaus are amoral. They're
             | just gathering data and doing math.
             | 
             | GP is talking about a world where you can't shop on Amazon
             | because you committed wrongthink on Twitter.
        
               | wombatpm wrote:
               | Oh Amazon will take your money, you just won't be able to
               | write product reviews
        
               | asciident wrote:
               | That's a naive view of credit bureaus. There are value
               | judgments in there throughout the stack. You can't get a
               | job because you didn't pay off a medical debt. You can't
               | get a mortgage because you don't have the right history
               | of past debt (for example, if you're too young).
        
         | Viliam1234 wrote:
          | Today an algorithm can mistakenly throw my legitimate e-mails
          | into the Spam folder. Tomorrow, it will be able to throw _me_
          | into the Spam folder.
        
         | ALittleLight wrote:
         | I wonder if, as a result, we will have as many thoughtful and
         | interesting conversations with strangers on the internet as we
         | do in real life.
        
         | teddyh wrote:
         | That is a nightmarish vision for a _lot_ of non-"assh*le"
         | people:
         | 
         | https://geekfeminism.wikia.org/wiki/Who_is_harmed_by_a_%22Re...
        
         | [deleted]
        
       | paxys wrote:
       | Agree with the author that moderating every interaction on a
       | social network is a fool's errand. I'd go a step further and say
       | that the future isn't simply restricting some features like links
       | and replies, but rather more closed networks where entry is
       | guarded (think PC software downloads -> app stores) and only a
       | very limited set of specialized actions and interactions is
       | allowed (think app sandboxing).
        
       | minikites wrote:
       | I think content moderation can be effective in smaller
       | communities where social norms can be formed and effectively
       | enforced. Perhaps the problem is that Facebook and Twitter are
       | too large to be allowed to exist?
        
       | miki123211 wrote:
       | I think we're looking at the issue in a completely wrong way.
       | 
        | There's no objective definition of right or wrong in content
        | moderation. Right and wrong are subjective, especially across
        | cultures, and moderation should be subjective too.
       | 
       | I believe end users should have the choice to adopt blocklists,
       | Adblock style. Those lists could contain single posts, accounts,
       | or even specific words. A lot of content (like flashing images or
       | spoilers) does not merit deleting, but there are users with good
       | reasons not to see it. They should be given such an option.
       | 
        | There should be a few universal, built-in blocklists for
        | obvious spam, phishing, child porn, etc., but all the rest
        | should be moderated subjectively.
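        | 
        | A minimal sketch of the client side, assuming blocklists are
        | just published sets of post ids, account names, and words that
        | a user opts into (all names here are illustrative):
        | 
        |     class Blocklist:
        |         def __init__(self, post_ids=(), accounts=(), words=()):
        |             self.post_ids = set(post_ids)
        |             self.accounts = set(accounts)
        |             self.words = {w.lower() for w in words}
        | 
        |         def blocks(self, post):
        |             if post["id"] in self.post_ids:
        |                 return True
        |             if post["author"] in self.accounts:
        |                 return True
        |             text = post["text"].lower()
        |             return any(w in text for w in self.words)
        | 
        |     def filter_feed(posts, subscribed_lists):
        |         # Hide anything matched by any list the user opted into.
        |         return [p for p in posts
        |                 if not any(bl.blocks(p) for bl in subscribed_lists)]
        | 
        | The universal lists above would then just be Blocklist instances
        | that every client ships with and cannot unsubscribe from.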
       | 
        | A Clubhouse-style invite system (with unlimited invites) would
        | also be a good idea. It would make it much harder for spammers,
        | scammers and troll farms to make new social media accounts.
        
         | avs733 wrote:
         | [obvious disclaimer of I am NOT advocating for child porn]
         | 
         | Why would spam, phishing, child porn be the 'universal' ones?
         | 
         | If you are making an argument that it should all be opt
         | in...then it should all be opt in. Otherwise, this is the same
         | drawing of a moral line that we all tend to do where we call
         | ours obvious and others subjective. Maybe some people want the
         | spam? Shouldn't the spammers have the ability to share it in
         | case people want it?
         | 
          | My point isn't to argue _for_ those things...it's to say that
          | if we just accept that content moderation is subjective, we
          | can't then label some things as subjective and some things as
          | not. The framework should just be the laws of the
          | state/country/equivalent structure. Those provide mechanisms
          | (theoretically, but more soundly) for feedback on the
          | boundaries of acceptable and unacceptable content that
          | corporations do not and frankly should not have.
         | 
         | C.f., Nazi imagery in most of Europe.
        
         | croutonwagon wrote:
          | I think there's a fine line between these and curated lists
          | (which most would use) that create echo chambers and
          | confirmation bias (which many platforms have).
          | 
          | Quite frankly, I like the methods this site uses over others,
          | and it seems to take quite a bit of human moderation to get
          | there. But I think scale also has something to do with it.
          | Reddit was much like this site in its earlier days.
        
         | rocqua wrote:
         | We used to have editors at newspapers who did this. They had an
         | opinion, tried to be objective, but called out excesses.
         | 
          | Sometimes this went wrong; see yellow journalism. One thing
          | that is different now is that you no longer have a say in who
          | edits your newsfeed. You can't easily switch newsfeeds.
         | 
         | I feel like the variation in opinions in modern-day editors is
         | much smaller than the variation in opinions in society. Or
         | maybe it isn't but the editing, being done more implicitly, is
         | not convincing the wider populace that this is the way. That
         | is, content moderators have much less authority (in the sense
         | of respect) than old newspaper editors.
        
           | [deleted]
        
         | alex_g wrote:
         | Clubhouse is not a great example of a platform that handles
         | abuse properly.
         | 
         | Putting the moderation burden on people is also not a solution,
         | it's duct tape.
        
           | seriousquestion wrote:
            | The rooms aren't created by Clubhouse, so it makes sense for
            | the creator of the room to moderate it according to the
            | goals of their particular room. It's not a burden because
            | it's not an open forum like HN or Reddit, where anyone can
            | talk. The moderators have to specifically choose who gets to
            | speak and can simply drop them back to the audience if there
            | is a problem.
        
             | alex_g wrote:
             | Yeah but if the goal of the room is to foment hatred and
             | target harassment at an individual who isn't in the room,
             | that's still abuse.
        
               | andrew_v4 wrote:
               | Does this apply to a phone call? To a zoom call with five
               | people in it? Should trying to stop someone from doing
               | something, even if abhorrent, in private, really be a
               | priority? I can only see this ending badly. If there is a
               | link to real world crimes, sure, intercepting a
               | discussion is one method available to detect / deter /
               | deny / disrupt, just like a wiretap. But (and maybe I
               | misunderstand your comment) beginning with the idea that
               | we should try and prevent conversations we find abhorrent
               | from happening is, well, abhorrent.
        
               | alex_g wrote:
               | No, and that's not what I said.
               | 
               | Clubhouse is not just private phone calls. It's a social
               | network.
        
               | [deleted]
        
           | simmanian wrote:
           | >Putting the moderation burden on people
           | 
           | I don't think GP is saying we should put the moderation
           | burden on people. When you accept that there is no objective
           | definition of right or wrong, you begin to see that perhaps
           | there are ways for people to self-organize on the internet
           | according to their values rather than being shoved into the
           | same box like we often do today. Many people are looking for
           | ways to efficiently and effectively organize on the internet
           | in a more sustainable manner.
        
             | groby_b wrote:
             | And yet, HN operates on a (not "the") definition of right
             | and wrong. Stepping outside the boundaries gets you a visit
             | from the moderation fairy, and might end with you being
             | ejected.
             | 
             | That means the burden is not on "people" in the sense of
             | individuals. You can expect a certain content and tone
             | coming to HN because the moderators ensure that. Yes,
             | they're people too, but not in the sense PP and GP used it.
             | 
             | That was clearly an "each individual user should..."
             | statement - and that's likely unsustainable for large user
             | groups.
        
               | andrew_v4 wrote:
               | Your HN example makes me think: if I'm talking to my
               | spouse or close friends, we obviously don't have a
               | moderation policy, we know each other well and share
               | values well enough that any debates we may have are
               | (almost always) focused on substance and not on conduct.
               | 
               | In political discourse, and in debates on big platforms
               | like Twitter, it's the opposite- most of the discussion
               | is about people's or groups' conduct and substance takes
               | a back seat. Because a heterogeneous group with different
               | values is involved.
               | 
                | So for social media and online forums, the question is:
                | how big and diverse can the audience get while still
                | supporting civil, substance-focused discussion? HN does
                | a pretty good job, and also has some obvious biases,
                | sacred cows, and weak spots. Online newspaper article
                | comments probably have one of the lowest qualities of
                | discourse for a given participant size. What forum is
                | best, I'm not sure, but it's instructive to look at it
                | this way, because it reflects politics generally, if we
                | want to address real issues while maximizing
                | participation.
        
             | alex_g wrote:
             | "Right or wrong" maybe not, but for well managed
             | communities on the internet, there are objective
             | definitions for appropriate and inappropriate, based on
             | shared values and context.
             | 
             | If you leave it up to each individual to decide what is
             | appropriate or inappropriate, and provide them with the
             | tools to block content they consider inappropriate, that's
             | a burden on them, because you're not taking care of it at
             | the community level.
             | 
             | And if the community's strength comes from shared values,
             | and you leave that up to each individual to decide, what's
             | shared, and what sort of "community" is actually offered?
        
               | Zak wrote:
               | You and the toplevel commenter may be talking about two
               | different kinds of systems.
               | 
               | You are describing "well managed communities". HN is
               | arguably one of those. Many topic-specific forums, IRC
               | channels, mailing lists, and communities on platformy
               | things that seek to reinvent those are as well. They tend
               | to be centered around a topic or purpose and have rules,
               | guidelines, and social norms that facilitate that
               | purpose.
               | 
               | I think the toplevel comment is talking about global
               | many-to-many networks where people connect based on both
               | pre-existing social relationships and shared interests
               | (often with strangers). Those require a different model,
               | and centralized moderation based on a single set of rules
               | is probably not the best one.
        
               | rocqua wrote:
               | > based on shared values and context.
               | 
                | That's exactly the point GP was trying to make: that
                | people should be able to organize in groups of shared
                | values and context, rather than there being one rough
                | monoculture of moderation policies.
        
             | watwut wrote:
              | One big part of the issue is people deliberately going out
              | of their way to harass those they want to drive out. And
              | it is not a new tactic; it has been going on for years.
        
       | anyfoo wrote:
        | Obviously not, since we're discussing it right now on a site
        | with content moderation that, in my opinion, works much better
        | than sites without it.
        
         | proc0 wrote:
         | It's only under control, I think, because it's very specific in
         | terms of content. The weakness many social networks have is
         | that it's basically an open platform for any discussion, and
         | that makes it harder to put boundaries around. It makes sense
         | to heavily restrict subject matters on sites with specialized
         | content but general social networks are still facing the issues
         | mentioned in the article, IMHO.
        
           | dang wrote:
           | I agree that being specific about content makes things
           | easier, but HN is not so specific. Anything intellectually
           | interesting is on topic
           | (https://news.ycombinator.com/newsguidelines.html), and that
           | makes for a lot of long, difficult explanations - e.g.:
           | 
            | https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
        
             | intended wrote:
              | Greater institutions have fallen; that's where we got the
              | term Eternal September. It just takes the right influx of
              | users to overwhelm the human moderators.
        
             | proc0 wrote:
              | Sure, maybe it's not that constrained, but consider
              | something like memes, which are expected on many sites but
              | definitely not here.
        
           | Guest19023892 wrote:
           | I think when the content becomes too broad then tribalism
           | becomes more apparent as people start to form separate groups
           | within the community. This creates a lot of drama as the
           | tribes are forced to be under one roof.
           | 
           | When the content is more specific, like PC master race, or
           | people that drive VW bugs, then the community identifies
           | itself as a single tribe, and they tend to treat each other
           | well.
        
         | yesOkButt wrote:
          | HN is hardly as diverse a site to moderate as YouTube or
          | Reddit.
          | 
          | Should all forums' rules conform to HN's?
        
           | dang wrote:
           | Could you please stop creating accounts for every few
           | comments you post? We ban accounts that do that. This is in
           | the site guidelines:
           | https://news.ycombinator.com/newsguidelines.html.
           | 
           | You needn't use your real name, of course, but for HN to be a
           | community, users need some identity for other users to relate
           | to. Otherwise we may as well have no usernames and no
            | community, and that would be a different kind of forum.
            | https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...
        
           | Etheryte wrote:
           | Au contraire, I'd argue HN is a very diverse site but it's
           | the moderation by both users and moderators that makes it
           | seem orderly and uniform. There are many stories that get
           | flagged a-la blogspam, low quality, clickbait etc, and same
           | with comments. Those that don't have merit by the community
           | don't reach the wider audience. I think the conception that
           | HN is uniform is a somewhat common misunderstanding, as
           | evidenced by discussions on supposedly-trolls and the like:
           | it isn't usually the case that someone is shilling or
           | trolling, more often than not it's just someone with a
           | different world view.
        
             | watwut wrote:
              | There are very few meanings of the word diverse that HN
              | conforms to. Education, profession, gender, hobbies,
              | location: all tend to come from one cluster.
        
             | tqi wrote:
              | I think HN moderation works because it is relatively
              | obscure and several orders of magnitude smaller than
              | FB/Twitter/etc. I wonder how long it would take for HN to
              | be completely overwhelmed if a large subreddit decided to
              | come in and start trolling/spamming.
        
             | nullserver wrote:
              | HN is very heavy on academics / developers / engineers.
              | 
              | Very different demographics than the world at large.
        
           | tomcam wrote:
           | Downvoted for sophistry and complete lack of addition to the
           | discourse. Or do you happen to do a better job moderating a
           | site elsewhere?
        
           | anyfoo wrote:
            | I personally don't think your comment puts the effectiveness
            | of moderation in question, just the efficiency. So yes, I
            | think places like YouTube and Reddit would greatly benefit
            | from similar moderation (the rules don't have to be exactly
            | the same), but the difference in scale and, as you note, the
            | variability in the rules for different parts of those sites
            | make it so much harder to apply.
        
             | yowlingcat wrote:
             | It's worth noting that IMO, the scale is not just volume
             | (depth) but also cultural heterogeneity (breadth). HN is
             | for all intents and purposes a single community with a
             | stated set of values. It's not a constellation of
             | communities some of which are polar opposites of one
             | another. The question of moderating Reddit always boils
             | down to /which subreddit/ -- I don't even know where to
             | start with YouTube.
        
               | [deleted]
        
               | kodah wrote:
               | > HN is for all intents and purposes a single community
               | with a stated set of values
               | 
               | In the sense that we all value curiosity, yes.
               | 
               | We are all also very different. When political posts go
               | up you see it; value, perspective, economic, and
               | educational differences are all highlighted
               | simultaneously.
               | 
                | I think what keeps us from destroying each other is that
                | the ties that bind us are curiosity, and those bonds are
                | strong enough for now. dang also sacrifices his sanity
                | going around and nudging people back in the right
                | direction.
                | 
                | Those bonds are not to be understated, though. I've
                | tolerated some edgy opinions on this website, and
                | probably given some too, but I also come here to learn
                | about different perspectives and genuinely do enjoy them
                | even if some offend or hurt me. Other people see those
                | differences and talk about making lists.
        
         | endisneigh wrote:
         | As far as forums go HN is very low volume.
        
         | yakubin wrote:
         | Another site with moderated comments is Ars Technica. It works
         | out great for them.
         | 
         | On the other side there is unmoderated Phoronix, which has the
         | worst comment section that I've ever seen.
        
           | tinus_hn wrote:
           | Is it worse than YouTube comments? I have never looked at it
           | but that has to be something..
        
             | seriousquestion wrote:
             | YouTube comments are actually pretty good these days, I
             | find.
        
       | wpietri wrote:
       | Sigh. A "certain level of bad behaviour on the internet and on
       | social might just be inevitable" has been an excuse since the
       | beginning. I worked on the first web-based chat, bianca.com, and
       | I heard it back then. More recently, I worked on anti-abuse at
       | Twitter a few years back and hear the same talking point. Now I
       | work on the problem at a not-for-profit, and it's still a talking
       | point. Ignoring that the social media landscape has shifted
       | dramatically over the decades, as have the technologies and our
       | understanding of the problem.
       | 
       | It was always a terrible point, but it's especially ridiculous to
       | see techno-utopians turn techno-fatalists in an eyeblink. The
       | same people will go right from "innovation will save the world"
       | to "I guess progress has now stopped utterly". And what they
       | never grapple with is _who_ is bearing the brunt of them giving
       | up. I promise you it 's not venture capitalist and rich guy Ben
       | Evans who will be experiencing the bulk of bad behavior. It's
       | easy enough for him to sacrifice the safety of others, I suppose,
       | but to me it seems sad and hollow.
        
         | lifeisstillgood wrote:
         | Ok, so you have a lot of experience of this subject - would you
         | mind suggesting your preferred approach(es) to moderation? What
         | can work?
        
         | alexvoda wrote:
          | It is not that surprising, having gone through this ebb and
          | flow myself.
          | 
          | All utopias and utopian dreams rely too much on human nature
          | being entirely good.
          | 
          | The communist utopia relied on the goodness of the people in
          | government. The capitalist utopia relied on the goodness of
          | the entrepreneurs. Online communities, and communities in
          | general, rely on the goodness of the members.
          | 
          | The reality is that human nature contains both good and bad.
          | And as a utopian, being faced with pure destructiveness, like
          | you are in content moderation, is demoralizing.
        
         | benedictevans wrote:
         | I did think I'd made it extremely explicit that I don't think
         | any of that at all, but perhaps not (although the way you throw
         | in a rather childish ad hominem sneer suggests you're not
         | thinking very clearly). What I actually wrote is that though
         | (of course!) there will always be some bad behaviour, we want
         | to minimise it (I compare it to malware, for heaven's sake),
         | but moderation might not be the best way to minimise it, and we
         | might need different models.
         | 
          | As it happens, I would suggest that the idea that somehow we
          | CAN just stop all bad human behaviour online would be the most
          | extreme techno-utopianism possible.
        
           | wpietri wrote:
           | > ad hominem [...] you're not thinking very clearly
           | 
           | Huh. Not totally sure you understand what "ad hominem" means.
           | 
           | But moving on from that, I'm not objecting to the notion that
           | we might need different models. The way we do anything today
           | is unlikely to be the best way for all of time. Given that
           | I've spent years trying to improve things, perhaps you can
           | take it as read that I think we can improve things.
           | 
           | What I'm objecting to is your fatalism that bad shit is
           | probably going to happen to somebody (a note you include in
           | your closing paragraph) combined with your failure to examine
           | exactly who's going to bear the brunt of it. Something you
           | conspicuously didn't do in your reply here, instead
           | suggesting it's some sort of shocking rudeness to point out
           | that as a rich person, it's unlikely to be you.
        
       | tunesmith wrote:
       | I see it as kind of a funnel. First you decide how much
       | participation you want to allow in the first place, and that's a
       | big lever. Smaller niche communities are easier to moderate
       | because a lot of policies are customs, meaning you don't need to
       | make them explicit.
       | 
       | Another lever is related - decide how much you want to limit the
       | _kind_ of content. A like button is easier to moderate than
        | upvote/downvote, which is easier than a poll response, which is
       | easier than restricted markup, which is easier than allowing
       | unsanitized html/js/sql. (I think there's a lot of unexplored
       | territory between "poll response" and "restricted markup", in
       | terms of allowing people to participate with the generation of
       | content.)
       | 
       | Then there is distributing the moderation abilities themselves.
       | Can users become moderators or only admins? Is there a reputation
       | system? I miss the kuro5hin reputation system and would like to
       | see more experiments along those lines.
       | 
       | And then finally you get to the hard stuff, the arguments about
       | post-modernism and what truth is, creating codes of conduct,
       | dealing with spirit-vs-letter and bad faith arguments, etc.
       | Basically the "smart jerk" problem. I hate that stuff. I want to
       | believe something simple like, as soon as you have a smart jerk
       | causing problems, it means you've given them too much opportunity
       | and should scale back, but I think it's not that simple.
        
       | motohagiography wrote:
       | It's hard to separate content moderation from the problem of
        | Evil. Low-entropy evil is easy to automate out; high-entropy,
        | sophisticated evil can convince you it doesn't exist.
       | 
        | This is also the basic problem of growing communities, where
        | you want to attract new people while still providing value to
        | your core group, all while managing both attrition and
        | predators.
       | What content moderation problems have proven is that even with
       | absolute omniscient control of an electronic platform, this is
        | still Hard. It also yields some information about what Evil is,
       | which is that it seems to emerge as a consequence of incentives
       | more than anything else.
       | 
       | In the hundreds of forums I've used over decades, the best ones
       | were moderated by starting with a high'ish bar to entry. You have
       | to be able to signal at least this level of "goodness," and it's
       | on you to meet it, not the moderators to explain themselves.
       | There is a "be excellent to each other" rule which gives very
       | reasonable blanket principle powers to moderators, and it's
       | pretty easy to check. It also helped to take a broken windows
       | approach to penalizing laziness and other stupidity so that
       | everyone sees examples of the rules.
       | 
       | Platform moderation is only hard relative to a standard of purity
       | as well, and the value of the community is based not on its
       | alignment, but on its mix. If you are trying to solve the
       | optimization problem of "No Evil," you aren't indexed on the
       | growth problem of "More Enjoyable." However, I don't worry too
       | much about it because the communities in the former category
       | won't grow and survive long enough to register.
        
         | pdonis wrote:
          | _> In the hundreds of forums I've used over decades, the best
          | ones were moderated by starting with a high'ish bar to entry._
         | 
         | I've had the same experience. And at the other end of the
         | spectrum, the reason Facebook, Twitter, etc. have such problems
         | with moderation is that there is _no_ bar to entry--anyone can
         | sign up and post. With what results, we see.
        
       | wolverine876 wrote:
       | People with moderating experience:
       | 
       | Why not just delete [most] offending comments, immediately, no
       | questions asked (and ban repeat offenders)? For maybe 95% (as a
       | wild guess), there's no question - it's clear that the comment is
       | inflammatory or disinformation or whatever. It surprises me that
       | I see so many of them permitted in so many forums, even here on
       | HN. Why tolerate them? One click and move on.
       | 
        | Tell people about the policy, of course, and if the comment is
        | partly offending and partly constructive, delete it. They can
        | repost the constructive part. It's not hard to behave - we all
        | do it in social situations. If you want your comment to be
        | retained, don't do stupid stuff.
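        | 
        | A toy sketch of that policy, assuming a simple strike counter
        | and threshold (both illustrative):
        | 
        |     from dataclasses import dataclass
        | 
        |     STRIKES_TO_BAN = 3
        | 
        |     @dataclass
        |     class User:
        |         strikes: int = 0
        |         banned: bool = False
        | 
        |     def moderate(user, comment_is_offending):
        |         if not comment_is_offending:
        |             return "keep"
        |         # One click and move on: delete, no questions asked.
        |         user.strikes += 1
        |         if user.strikes >= STRIKES_TO_BAN:
        |             user.banned = True  # repeat offender
        |         return "delete"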
       | 
       | ------------
       | 
       | Also, it's telling IMHO that in this conversation among people
       | relatively sophisticated in this issue, organized disinformation
       | is barely discussed. It's well-known, well-documented, and
       | common, yet we seem to close our eyes to it. It's a different
       | kind of moderation challenge.
        
         | kartoshechka wrote:
         | Then people will get offended for getting censored out?
        
           | wolverine876 wrote:
           | Some will, but is that a loss? If you aim at accommodating
           | destructive behavior, you'll have it and attract more of it.
           | If you aim at accommodating constructive behavior, you'll
           | have it and attract more of it. I'd happily let some other
           | sites have the entire market of destructive behavior.
           | 
           | But we are just speculating; can someone with actual
           | experience and expertise say how it would work?
        
         | marshmallow_12 wrote:
          | I think a clever mod will realize that they are not the Supreme
          | Court. They are certainly not the best judge out there. Just
          | clicking accounts and comments out of existence won't solve
          | misinformation - it will just make the mod a tyrant. I know
          | for a fact that if I were moderating, there would be few safe
          | users.
        
           | wolverine876 wrote:
            | > I think a clever mod will realize that they are not the
            | Supreme Court. They are certainly not the best judge out
            | there. Just clicking accounts and comments out of existence
            | won't solve misinformation - it will just make the mod a
            | tyrant.
           | 
           | We're not talking about prison and the law of the land; we're
           | making decisions about the disposition of some comments on an
           | Internet forum. Far more consequential decisions are made
           | without any due process - for example, managers decide on
           | whether people will keep their jobs; they are 'tyrants'.
        
         | RobertRoberts wrote:
         | > "Why not just delete offending comments..."
         | 
          | How do you define offending? Some people are offended by a
          | great many things. China is offended if you point out they
          | have human rights abuses. The US is offended if you point out
          | its interference in other countries' affairs. Thailand
          | literally makes it illegal (jail time) to offend some people
          | in ways that in the US are not only legal, but encouraged by
          | our culture!
         | 
         | It's not so simple or easy.
        
           | jonnycomputer wrote:
            | No, it is easy. You define offending as whatever the
            | moderator finds offensive. Like a strike in baseball.
            | 
            | But you can always moderate the moderators.
        
           | wolverine876 wrote:
           | IMHO this argument is potentially interesting
           | philosophically, if someone has something new to say. It's an
           | appeal to the logical extreme of post-modern relativism (and
           | when we see logical extremes, I believe it's a good question
           | to ask - is this a real problem or just philosophical). It
           | also is misleading, IMHO, because it conflates morality,
           | offensiveness, and power. Regardless, these kinds of
            | philosophical arguments can be continued indefinitely - but
            | so can the one that says the Internet is a figment of my
            | imagination. I'm talking about reality.
           | 
           | In reality, the human mind doesn't need and very rarely uses
           | the extreme of hard and fast algorithms; we are not
           | computers. I can judge good from bad, constructive from
           | destructive, etc. and it is easy to identify most of the
           | problematic comments. When it's a forum with rules, it's easy
           | to identify (again, a wild guess) 95% of them.
        
       | foxhop wrote:
       | I disagree with the sentiment and conclusions drawn in this post.
       | Moderation is not dead, here is my public response:
       | 
       | https://www.remarkbox.com/remarkbox-is-now-pay-what-you-can....
        
       | tomcam wrote:
        | We are lying to ourselves, and we are doing it through
        | colonialism writ large, once again. People who use Twitter and
        | Facebook seem to be completely unaware that thousands of
        | content moderators in the Philippines are being subjected to
        | images of utter depredation and cruelty by the minute because
        | we refuse to take responsibility ourselves. History will not
        | look well on what we did to these heroic underpaid people. I do
        | not blame Mark Zuckerberg, whom I despise. He is doing this
        | with our full consent.
        | 
        | In my view the only proper way to handle content moderation is
        | that every user of these "free" social media platforms over the
        | age of 18 should be required to moderate some proportion of it
        | every month to understand what's actually going on.
        
         | bjt2n3904 wrote:
         | > "we refuse to take responsibility ourselves"
         | 
         | I'm to be responsible for what someone else's views are?
         | 
         | Nonsense. This is the cry of the censorship apologist. This
         | moderation draft you speak of... what guidelines will the
         | draftees follow? Will they moderate out of the goodness of
          | their hearts? Or will they follow some standard? (I'm sure
          | many people would be extremely eager to author one! You mean
          | I get to decide what is permissible to discuss online? What a
          | wonderful avenue to advance my political causes by force!)
         | 
         | The author is right. The solution is not to double down on
         | moderation.
        
         | cryptoz wrote:
          | > In my view the only proper way to handle content moderation
          | is that every user of these "free" social media platforms
          | over the age of 18 should be required to moderate some
          | proportion of it every month to understand what's actually
          | going on.
         | 
         | Does that mean no vetting at all of the moderators? Anybody can
         | become a moderator? But then you have QAnon in large numbers
         | moderating content on like the CNN Facebook page or something?
         | I really, really, really don't think that is a "proper" or even
         | tenable moderation solution.
         | 
          | There are too many people who would abuse the moderation
          | power. Moderation should at least be a paid position, paid
          | well in fact, and moderators should be vetted before being
          | allowed to moderate. Otherwise it will be worse than before.
        
           | tomcam wrote:
           | Those are great points. I believe that there should be
           | essentially no censorship at all, subject to First Amendment
           | restrictions. Posts could be hidden to people based on age,
           | political, or other preferences, but would always be
           | accessible to adult users willing to sign a waiver.
           | 
           | I believe very much that bad material no matter how
           | disgusting is best handled through public exposure, not
           | censorship.
        
             | mdoms wrote:
             | How do you deal with misinformation? Anti vax
             | misinformation could literally devastate an entire society
             | if given enough oxygen. Are your free speech ideals more
             | important than the health of an entire economy and hundreds
             | of thousands of lives at risk?
        
               | jwlake wrote:
               | This reminds me of a line from an NPR story. It said that
               | "false" information spread twice as fast and twice as far
               | as "true" information on twitter.
               | 
                | Why it spread was never interrogated. It immediately
                | made me wonder: maybe people just thought "false"
                | information was a whole lot funnier, and hence more
                | sharable.
               | 
                | The problem with misinformation is that it vaguely
                | doesn't exist. There's parody, non-orthodoxy, true
                | things people disagree with for political reasons,
                | unproven things, urban myths, rumors, etc. These are all
                | classes of information people blame all the ills of
                | society on.
        
               | mcphage wrote:
                | > The problem with misinformation is that it vaguely
                | doesn't exist.
               | 
               | You don't think that people lie on the internet?
        
             | cryptoz wrote:
              | The First Amendment protects you from being forced to host
              | content you don't want to host. Hosting providers must be
              | allowed to remove content they don't want there, or their
              | rights are directly violated.
             | 
             | Do you think that a site should be forced to host other
             | people's vile content?
        
         | engineeringwoke wrote:
         | > thousands of content moderators in the Philippines
         | 
         | And those are good jobs for people. This is honestly a
         | ridiculous argument. It's simply that technologists don't want
         | to pay for content moderation so they are arguing that it isn't
         | necessary because it is "Sisyphean", which curiously enough
         | raises their margins. It couldn't be more cynical.
        
           | pizza wrote:
           | I tried googling whether these are good jobs and it seems
            | mixed. Even with good pay, it's gotta have some impact on
            | your psyche to spend the day flagging, among more mundane
            | content, the occasional dick pic and beheading?
        
           | benedictevans wrote:
            | I argued that content moderation probably isn't the answer
            | and we need to try something else. I really don't know how
            | anyone could possibly read what I wrote and believe I was
            | saying we shouldn't do anything. Frankly, I struggle to see
            | that as anything other than deliberate bad faith.
        
       | 6510 wrote:
       | > Microsoft made it much harder to do bad stuff, and wrote
       | software to look for bad stuff.
       | 
        | Before MS there was no bad stuff on the Commodore 64. It just
        | didn't exist. Loading things from tapes, disks or the internet
        | doesn't matter: you switch it off and on again, then load the
        | next thing. I see no reason why this can't scale. You would
        | have problems if you allowed unchecked IO and remote code
        | execution, and you would have to deal with that, but even then
        | a simple reset would clean it up. There is no need to give
        | strangers the keys to your home and offer them a place to hide
        | where you can't find them.
       | 
       | > Virus scanners and content moderation are essentially the same
       | thing - they look for people abusing the system
       | 
        | The problem is that it is not your page. This forces you to
        | live up to someone else's standards (if not foreign laws). It
        | is like the PC architecture, where the computer is not yours.
       | 
       | Facebook is really what web standards should have offered. I
       | would probably have been against it myself but in hindsight it is
       | what people really wanted.
       | 
       | >...content moderation is a Sisyphean task, where we can
       | certainly reduce the problem, but almost by definition cannot
       | solve it.
       | 
       | I don't know, perhaps we can. Should we want to?
       | 
       | > I wonder how differently newsfeeds and sharing will work in 5
       | years
       | 
        | I'll still be using RSS and Atom.
        
       | alex_g wrote:
       | This is a weird take.
       | 
       | You get lots of messages from Nigerian scammers, but the solution
       | was not to prevent people from writing freeform emails. The
       | solution was to build powerful spam detection algorithms, make it
       | easy for people to classify emails to help strengthen the
       | training set, and the problem is basically solved.
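        | 
        | As a sketch of that feedback loop, here is a toy Naive
        | Bayes-style filter where every "mark as spam" click strengthens
        | the training set (an illustration, not any real provider's
        | system):
        | 
        |     from collections import Counter
        | 
        |     class SpamFilter:
        |         def __init__(self):
        |             self.spam_words = Counter()
        |             self.ham_words = Counter()
        |             self.spam_count = 0
        |             self.ham_count = 0
        | 
        |         def train(self, text, is_spam):
        |             # Called whenever a user classifies a message.
        |             bucket = self.spam_words if is_spam else self.ham_words
        |             bucket.update(text.lower().split())
        |             if is_spam:
        |                 self.spam_count += 1
        |             else:
        |                 self.ham_count += 1
        | 
        |         def spam_score(self, text):
        |             # Crude per-word likelihood ratio with add-one
        |             # smoothing; > 1 means more spam-like than ham-like.
        |             score = 1.0
        |             for w in text.lower().split():
        |                 p_spam = (self.spam_words[w] + 1) / (self.spam_count + 2)
        |                 p_ham = (self.ham_words[w] + 1) / (self.ham_count + 2)
        |                 score *= p_spam / p_ham
        |             return score
        | 
        | The point is the loop: each classification makes the next
        | message cheaper to filter, which is why the freeform-email
        | problem became tractable without restricting the medium.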
       | 
        | There's no easy answer to content moderation. There's no
        | one-size-fits-all solution, nor is there some weird hack that's
        | going to fix it. It's a part of your product. If you treat it
        | as such, you're better off.
       | 
       | If you treat it as a separate problem that just needs money
       | thrown at it or duct tape wrapped around it, you're never going
       | to stop throwing money and tape at it.
       | 
       | Everyone wants an easy way out. You need _everyone_ on your team
       | in the room brainstorming solutions.
        
         | kiba wrote:
          | The problem isn't solved, just suppressed, given that Nigerian
          | spammers continue to make money.
        
           | alex_g wrote:
           | Fair, though I'm curious if the people falling for them are
           | using email providers with quality spam filters. I'd guess
           | it's a much older crowd that's more likely using an archaic
           | email provider with no incentive to improve spam filters.
        
             | pdonis wrote:
              | _> I'm curious if the people falling for them are using
              | email providers with quality spam filters._
             | 
             | I'm curious why the people falling for them need a spam
             | filter to recognize them as scams. I still see an
             | occasional one slip through my email provider's spam
             | filter, and I've never had any problem figuring out that
             | they were scams.
        
               | alex_g wrote:
               | Because not everyone has competent internet skills.
        
       | cblconfederate wrote:
        | Not just content moderation but also app moderation. And
        | moderation has gone hand in hand with vertical integration,
        | which is bad for innovation. Soon Facebook will be writing
        | people's posts for them (because you can't trust people with
        | keyboards) and Apple will be delivering a computer with the
        | software soldered into a SoC. Both solutions will be bad for
        | innovation: they'll be making a very fast horse, but both will
        | miss the next big thing.
        
       | Animats wrote:
       | Open systems ungood. Duty of thinkpol to enforce goodspeech.
       | Prevent crimethink. Users read only prolefeed.[1]
       | 
       | [1] https://genius.com/George-orwell-nineteen-eighty-four-
       | append...
        
       | ddingus wrote:
       | There are at least a few kinds of bad:
       | 
       | Spam, google bombing, and related activities. These are noise,
       | generally.
       | 
        | Misinformation is slippery. Often it gets conflated with
        | differences of opinion. That is happening a lot right now as
        | moderation is politicized and weaponized. More than we think is
        | debatable, and should be debated rather than legislated or
        | canonized into an orthodoxy flirting with fascism.
       | 
        | Clearly criminal speech: kiddie pr0n, inciting violence, etc.
        | These are not noise and can be linked either to real harm in
        | the very production of the speech (kiddie pr0n), or to the very
        | likely prospect of harm. Material harm is an important
        | distinction, which segues to:
       | 
       | Offensive material.
       | 
        | Being offended is as harmful as we all think it is. Hear me
        | out, please:
       | 
       | To a person of deep religious conviction, some speech can offend
       | them just as deeply. They may struggle to differentiate it from
       | criminal speech, and in some parts of the world this is resolved
       | by making the speech criminal anyway. Blasphemy.
       | 
       | That same speech might be laughable to some who are not
       | religious, or who perhaps hold faith of a different order, sect.
       | 
       | Notably, we have yet to get around to the intent of the speaker.
       | 
       | Say the intent was nefarious! That intent would hit the mark
       | sometimes, and other times it would not.
       | 
       | Say the intent was benign. Same outcome!
       | 
       | With me so far?
       | 
       | Before I continue, perhaps it makes sense to match tools up with
       | speech.
       | 
        | For the noise, rule-based, AI-type systems can help. People can
        | appeal, and the burden here is modest. It could be well
        | distributed, with reasonable outcomes more often than not.
        | Potentially a lot more.
        | 
        | Misinformation is a very hard problem, and one we need to work
        | more on. People are required. AI and rule-based schemes are
        | blunt instruments at best. It is a total mess right now.
       | 
       | For the criminal speech, people are needed, and the law is
       | invoked, or should be. The burden here is high, and may not be so
       | well distributed, despite the cost paid by those people involved.
       | 
        | Offensive material overlaps with misinformation, in that
        | rule-based and AI systems are only marginally effective, and
        | people are required.
       | 
       | Now, back to why I wrote this:
       | 
       | Barring criminal speech, how the recipient responds is just as
       | important as the moderation system is!
       | 
       | I said we are as offended as we think we are above, and here is
       | what I mean by that:
       | 
       | Say a clown calls you an ass, or says your god is a false god, or
       | the like. Could be pretty offensive stuff, right?
       | 
        | But when we assign a weight to the words, just how much weight
        | do the words of a clown carry? Not much!
       | 
       | And people have options. One response to the above may be to
       | laugh as what is arguably laughable.
       | 
       | Another may be to ask questions to clarify intent.
       | 
       | Yet another option is to express righteous indignation.
       | 
       | Trolling and misinformation share something in common: they tend
       | to work best when many people respond with either righteous
       | indignation (trolling) or passionate affirmation and concern
       | (misinformation).
       | 
       | Notably, how people respond has a major-league impact on both
       | the potency and the effectiveness of the speech, and a similar
       | impact on how much of a problem the speech can become.
       | 
       | There are feedback loops here that can amplify speech better
       | left without resonance.
       | 
       | A quick look at trolling can yield insight too:
       | 
       | The cost of trolling is low and the rewards can be super high! A
       | good troll can cast an entire community into grave angst and do
       | so for almost nothing, for example.
       | 
       | However, that same troll may come to regret they ever even
       | thought of trying it in a different community, say one where most
       | of its members are inoculated against trolling. How?
       | 
       | They understand their options. Righteous indignation is the least
       | desirable response because it is easily amplified and is a very
       | high reward for the troll.
       | 
       | Laughing them off the stage can work well.
       | 
       | But there is more!
       | 
       | I did this with a community, and it was very effective:
       | 
       | Assign a cost to speakers whose contributions cost more than the
       | value they deliver! Also, do not silence them. Daylight on the
       | whole process can be enlightening for all involved, as well as
       | opening the door for all possible options to happen.
       | 
       | People showed up to troll and stayed for the high-value
       | conversation and the friends they ended up with.
       | 
       | Others left and were reluctant to try again.
       | 
       | The basic mechanism was to require that posts conform to one or
       | more rules to be visible. That's it. (A sketch of the idea
       | follows the examples below.)
       | 
       | Example costs:
       | 
       | No 4 letter words allowed.
       | 
       | Contribution must contain, "I like [something harmless]"
       | 
       | Contribution may not contain the letter "e".
       | 
       | And they have to get it right the first time; edits are
       | re-evaluated on every edit. Any failure renders the contribution
       | hidden.
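       | 
       | To make that concrete, here is a minimal Python sketch of the
       | mechanism (the rule set and shapes here are hypothetical, not
       | the actual code I ran):
       | 
       |   import re
       | 
       |   # Each rule returns True when the contribution passes it.
       |   RULES = {
       |       "no_four_letter_words":
       |           lambda t: not re.search(r"\b[A-Za-z]{4}\b", t),
       |       "must_say_i_like":
       |           lambda t: "i like" in t.lower(),
       |       "no_letter_e":
       |           lambda t: "e" not in t.lower(),
       |   }
       | 
       |   def is_visible(text, active):
       |       # Re-checked on every edit; a single failing rule
       |       # hides the contribution.
       |       return all(RULES[name](text) for name in active)
       | 
       |   print(is_visible("I like turtls", ["must_say_i_like"]))  # True
       |   print(is_visible("I like turtles", ["no_letter_e"]))     # False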
       | 
       | These rules did not limit expression. They did impose a cost,
       | sometimes high (no letter "e"), sometimes subtle (no four letter
       | words)...
       | 
       | But what they did do was start a conversation about cost, intent,
       | and
        
       | proc0 wrote:
       | I agree with the article, but it's a bit shallow, or maybe too
       | short. It's not factoring in identity, which is a huge factor
       | when it comes to moderation. Most accounts are basically people
       | being anonymous with respect to their real identity. Then there
       | are bots and AI, and the problem of detecting who is legit and
       | who is a bad actor.
       | 
       | Therefore, having a relatively minuscule number of people be the
       | judge and expecting them not to abuse that power, or thinking
       | some clever algorithm won't be exploited, is short-sighted and
       | maybe technically naive. I don't know what the solution is, but
       | it might be having the Internet become independent from all
       | nations and be its own nation with laws, etc. Not sure, but it
       | does seem like an analogous problem to physical humans living
       | together in a civilization; it's just still in the making, it
       | seems.
        
       | root_axis wrote:
       | The system works as it is. Each website does its best to moderate
       | content without harming the business. From a business
       | perspective, it'd be ideal to never moderate content, because
       | more viral content means more money, but advertisers,
       | governments, and users have a problem with some content, so the
       | websites have to take a more nuanced approach to balancing the
       | desires of these groups. At the end of the day, there is no
       | perfect solution, but that's OK: the web is a federated network
       | of websites, and each node can set its own priorities with
       | respect to the interests it determines make the most sense for
       | it, leaving users and advertisers the freedom to use as few or
       | as many websites as suit their own prerogatives.
        
       | endisneigh wrote:
       | My response to the title is: "No, but it requires more resources
       | than most are willing to admit or give."
       | 
       | The crux of the issue is that there's no "cost" to being bad. If
       | there were a "cost", then bad actors would go away quickly. Any
       | "cost" you impose will be diametrically opposed to popularity -
       | but a low-volume, unpopular site is unlikely to be abused to
       | begin with.
        
         | mikepurvis wrote:
         | Even with infinite resources available for manual human
         | moderation, you eventually hit a wall where different sub-
         | communities will simply have different standards for what is
         | acceptable to them-- what might be reasonable debate in one
         | circle is gaslighting and triggering to people elsewhere. It's
         | not really up to the platform to impose a global code of
         | conduct, and attempting to do so (outside of banning the most
         | obvious of bad behaviours or things that are actually illegal)
         | never seems to go well for platforms.
         | 
         | So yeah, I agree with TFA in the sense that these are problems
         | to be solved largely at the system level. For example, compared
         | to Twitter (where anyone can reply-to, quote, RT, and @user
         | anyone), Twitch and Tiktok seem to do well at permitting
         | individual creators to have their own space with their own
         | exclusive authority over what is and isn't okay in the space.
         | And they have (or at least allow to exist) lots of tools for
         | exerting that authority-- witness things like "bye trolls"
         | scripts on Twitch that do have to be set up in advance, but
         | then can be used at the drop of a hat in response to brigading
         | to immediately close the stream to new followers, and disallow
         | posts from non-followers, plus delete chat posts from anyone
         | who joined the stream in the last X minutes.
        
           | [deleted]
        
         | throwaway3699 wrote:
         | The problem is the costs of content moderation are not linear.
         | You are not dealing with a few thousand trolls. You're dealing
         | with bot farms impersonating possibly over a million accounts.
         | Huge groups of networks operated by just a few dozen people.
         | 
         | Automating that away is the only path to being on equal
         | footing. If you introduce any human element, not only will it
         | be a bottleneck, but the cost could be large enough to bankrupt
         | even the largest companies.
        
           | kingsuper20 wrote:
           | "You are not dealing with a few thousand trolls."
           | 
           | In my own experience, it's the trolls that are rather
           | confounding.
           | 
           | Go to any twitter poster with even a slightly political bent.
           | Look for the first shitty person. Look at their posting
           | history. It'll nearly always be 100 posts/day of shittiness
           | telling you just what you need to hear. Unless the evil
           | Russians are extremely clever, it all appears to be
           | grassroots poor behavior.
           | 
           | I guess you can view social media as a giant laboratory
           | showing the behavior of people when they are not nose-to-nose
           | with you in a bar. It's all super disappointing.
           | 
           | Maybe there's a place for highly curated social media.
        
           | michaelmrose wrote:
           | How about making people put down a bond of even a small
           | amount say $10 and something tied to your actual ID. The
           | registrar knows who you are but sites only know you are a
           | verified person but not who you are.
           | 
           | If you are found to be a fake person you forfeit the bond.
           | Now it costs $100,0000 to create 10k fake people and you can
           | lose it all tomorrow.
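           | 
           | A rough sketch of the registrar side (all names here are
           | hypothetical; the HMAC just gives each site a stable,
           | site-scoped pseudonym without revealing the identity):
           | 
           |   import hmac, hashlib
           | 
           |   REGISTRAR_SECRET = b"known-only-to-registrar"
           |   BOND = 10   # dollars at risk per verified identity
           |   bonds = {}  # real identity -> bond on deposit
           | 
           |   def register(identity, site):
           |       bonds[identity] = BOND
           |       # The site can verify "bonded person" and link their
           |       # posts, but cannot recover the real identity.
           |       return hmac.new(REGISTRAR_SECRET,
           |                       f"{identity}:{site}".encode(),
           |                       hashlib.sha256).hexdigest()
           | 
           |   def forfeit(identity):
           |       # Called when an account is shown to be fake.
           |       return bonds.pop(identity, 0)
           | 
           | Creating 10k fake people then means acquiring 10k distinct
           | verified identities, with $100,000 at risk.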
        
           | cryptoz wrote:
           | > but the cost could be large enough to bankrupt even the
           | largest companies.
           | 
           | In my opinion, if you can't hire enough people to moderate
           | without going bankrupt, then bankrupt you go! Would this mean
           | we can't have social media? Maybe. Probably for the best.
           | 
           | But if you can't moderate, then you go out of business.
           | That's the way it should be. Probably then people would find
           | a way to moderate and stay afloat.
        
             | throwaway3699 wrote:
             | That would apply to every website with a comments section,
             | sadly.
             | 
             | I think the solution to this is smaller, more isolated
             | groups -- with limited edges between. Back to email threads
             | and Mumble servers, imo. The downside is we'll all be
             | living in filter bubbles, but I think any shared community
             | with a common value (like a video game) is better than some
             | ginormous platform like Facebook.
        
               | denimnerd42 wrote:
               | I am really disappointed that all the php bulletin board
               | forums I used to visit as a teen and young adult have all
               | died off or been sold to an advertising conglomerate and
               | most of the users have fled to Facebook groups. Facebook
               | is just not the same as a forum. The quality of posts is
               | lower, the reposting is much higher, and the sense of
               | community is actually lost.
               | 
               | Facebook has even killed craigslist. Or at least greatly
               | reduced the usefulness.
        
       | Viliam1234 wrote:
       | Moderating a forum where anyone can post is playing
       | whack-a-mole, especially if registering a new account is simple.
       | 
       | One possible approach is something like Stack Exchange does: new
       | users acquire their rights gradually. New accounts can only do
       | little damage (post an answer that appears at the bottom of the
       | list, and is made even less visible when someone downvotes it),
       | and if they produce bad content, they will never acquire more
       | rights.
       | 
       | Another possible approach would be some vouching system:
       | moderator invites their friends, the friends can invite their
       | friends, everyone needs to have a sponsor. (You can retract your
       | invitation of someone, and unless someone else becomes their
       | sponsor, they lose access. You can proactively become a co-
       | sponsor of existing users. Users inactive for one year
       | automatically retract all their invitations.) When a user is
       | banned, their sponsor also suffers some penalty, such as losing
       | the right to invite people for one year.
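       | 
       | A sketch of the bookkeeping this needs (hypothetical, just to
       | show how little state is involved):
       | 
       |   from datetime import timedelta
       | 
       |   sponsors = {}   # user -> set of current sponsors
       |   suspended = {}  # sponsor -> date invite rights return
       | 
       |   def has_access(user):
       |       return bool(sponsors.get(user))
       | 
       |   def retract(sponsor, user):
       |       # Without a remaining sponsor, access is lost.
       |       sponsors.get(user, set()).discard(sponsor)
       | 
       |   def ban(user, now):
       |       # The banned user's sponsors lose the right to
       |       # invite people for a year.
       |       for s in sponsors.pop(user, set()):
       |           suspended[s] = now + timedelta(days=365)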
       | 
       | There are probably other solutions. The idea is that accounts
       | that were easy to create should be even easier to remove.
        
         | rocqua wrote:
         | I don't think the issue is that it is too easy to change
         | identities. Vouching or slow starts both lead to much more
         | closed systems. You could say that is a solution for the posed
         | problem. "A more closed system".
         | 
         | But to me, a more closed system is less valuable. Certainly it
         | lacks the network effects that seem to be needed these days to
         | make it financially.
        
         | thangalin wrote:
         | > There are probably other solutions.
         | 
         | For moderated deliberation to achieve consensus in decision
         | making, here's a write-up for a system that combines ideas from
         | StackOverflow, Reddit, and Wikipedia:
         | 
         | https://bitbucket.org/djarvis/world-politics/raw/master/docs...
        
         | dmos62 wrote:
         | > vouching system
         | 
         | https://en.wikipedia.org/wiki/Web_of_trust
         | 
         | I've not examined it closely, but Web of Trust follows that
         | train of thought at least to an extent.
        
           | rocqua wrote:
           | "Trust" in this context means "I believe this key indeed
           | matches this identity". Nothing more is meant by that.
        
       | PicassoCTs wrote:
       | Years ago, i suggested a algorithmic approach to moderation to a
       | Open Source project i contribute too. They ultimately went
       | another way (classic moderation), but the idea was pretty neat.
       | 
       | You basically create a feudal vouching system, where highly
       | engaged community members vouch for others, who again vouch for
       | others. If people in this "Dharma-tree" accumulate problematic-
       | behaviour points, the structure at first bubbles better-behaved
       | members to the top. If the bad behaviour continues, by single
       | members or sub-groups, the higher echelons of members will cut
       | that branch loose, or lose their own social standing.
       | 
       | Reapplying members would have to start again at the lowest ranks
       | (distributing the work), and those who vouched for somebody
       | unworthy in the trial phase would risk dharma loss.
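       | 
       | From memory, the core of the proposal looked something like this
       | sketch (names invented here):
       | 
       |   class Member:
       |       def __init__(self, voucher=None):
       |           self.voucher = voucher  # parent in the Dharma-tree
       |           self.children = []
       |           self.dharma = 0         # negative = bad behaviour
       |           if voucher:
       |               voucher.children.append(self)
       | 
       |   def subtree_dharma(m):
       |       # A voucher answers for everyone beneath them.
       |       return m.dharma + sum(subtree_dharma(c)
       |                             for c in m.children)
       | 
       |   def prune(m, threshold=-10):
       |       # Cut loose any branch that behaves too badly; the
       |       # voucher pays a social-standing cost for the bad vouch.
       |       for child in list(m.children):
       |           if subtree_dharma(child) < threshold:
       |               m.children.remove(child)
       |               m.dharma -= 1
       |           else:
       |               prune(child, threshold)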
       | 
       | We never solved the bad-tree problem, though. What if there
       | exists a whole tree of bad apples vouching for one another? You
       | cannot rely on the other feudal "houses" to report this
       | correctly, due to in-game rivalry.
        
         | PragmaticPulp wrote:
         | I was part of solving a content moderation problem for a tech
         | company forum once.
         | 
         | The most troublesome users were often the most prolific
         | posters. The people who had a lot of free time on their hands
         | to post every single day were often the most disgruntled,
         | seizing every issue as a chance to stir up more controversy.
         | 
         | It was tough enough to reel in difficult users when they had no
         | power. Giving extra power to people who posted the most would
         | have only made the problem worse, not better.
         | 
         | The most valuable content came from users who didn't post that
         | often, but posted valuable and well-received content
         | occasionally. I'm not sure they would have much interest in
         | moderating content, though, because they would rather produce
         | content than deal with troublesome content from other people.
         | 
         | Content moderation is a pain. The trolls always have infinitely
         | more free time than you do.
        
           | ericbarrett wrote:
           | I used to moderate a few message boards, and I fully agree.
           | 
           | I think empowering the "power users" like this inevitably
           | leads to Stack Overflow-style communities, where arrogant
           | responses are the norm, the taste of a few regulates the
           | many, and the culture of the community ossifies because new
           | contributors are not welcomed.
        
           | ycombinete wrote:
           | How did you go about solving the problem in the end?
        
         | remram wrote:
         | This is easy to solve if you _don't_ have a "public timeline",
         | e.g. if you only see posts that have been vouched by people you
         | follow. Like using Twitter but without topics, hashtags, and
         | search: the only content you see has been either authored by
         | someone you directly follow, or retweeted by someone you
         | directly follow.
         | 
         | If you keep seeing content you like (through retweets), you can
         | follow that person directly to get more. If you see content you
         | dislike, you can unfollow the person who brought it into your
         | timeline (by retweeting it).
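         | 
         | Concretely, the visibility rule is tiny (a toy sketch, nothing
         | Twitter-specific):
         | 
         |   def timeline(viewer, follows, posts, retweets):
         |       # posts: (author, post_id); retweets: (user, post_id)
         |       visible = set()
         |       for author, post_id in posts:
         |           if author in follows[viewer]:
         |               visible.add(post_id)
         |       for retweeter, post_id in retweets:
         |           if retweeter in follows[viewer]:
         |               visible.add(post_id)  # vouched into view
         |       return visible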
         | 
         | Of course this would work a bit better if there was a way for
         | accounts to categorize posts they author or retweet. You might
         | follow me for tech-related content but not care much about my
         | French politics content, which I would be happy to categorize
         | as I post/retweet but have no way to do on current Twitter.
        
         | natrius wrote:
         | The lazy solution to rivalry getting out of control is
         | bicameralism. Make tree-based governance where most of the
         | action is, but design another chamber that can veto it without
         | the same rivalries involved.
        
         | intended wrote:
         | Just as there is no stopping "crime", there is no stopping bad
         | content.
         | 
         | Besides - these are evolutionary games being played between
         | grazers (content consumers) and predators (alarmingly large
         | group).
         | 
         | As long as there is signal in a forum, there will come to be a
         | method to subvert it.
         | 
         | Honestly the question I would ask people is how do you measure
         | bad behavior on a forum.
         | 
         | Any technical idea, such as your tree, is doomed to eventual
         | obsolescence. The question is how long it would take, and how
         | effective it would be, and how you would measure it.
        
         | [deleted]
        
         | marcosdumay wrote:
         | > What if there exists a whole tree of bad apples vouching for
         | another?
         | 
         | That's when you add top moderation, so the algorithm becomes a
         | way to scale the moderators, not a full moderation solution.
         | 
         | You can't create an algorithm that solves moderation, unless
         | you create a fully featured AI with a value system.
        
           | clairity wrote:
           | yes, let computers do the repeatable work and humans do the
           | original thinking.
           | 
           | i still haven't seen a moderation system better than
           | slashdot, which community-sourced its moderation/meta-
           | moderation semi-randomly. though it still had issues with
           | gaming and spam, it seems like a good base to build from. and
           | yet we ended up with twitter, facebook, reddit, yelp, etc.
           | that optimize for (ad) views, not quality.
        
           | remram wrote:
           | You can also test this, similarly to how Stackoverflow does
           | it: send people a post that you know is bad (or good) and
           | check that they flag it. If they don't, let them know that
           | they are doing it wrong, lock them out of moderation, or
           | silently ignore their voting and use it as a signal of voting
           | rings.
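           | 
           | A sketch of such an audit (hypothetical; the real Stack
           | Overflow audit system is more involved):
           | 
           |   import random
           | 
           |   def audit(moderator, flag_fn, known_bad, known_good):
           |       # Slip a post with a known verdict into the queue
           |       # and compare the flag against the ground truth.
           |       pool = ([(p, True) for p in known_bad] +
           |               [(p, False) for p in known_good])
           |       post, should_flag = random.choice(pool)
           |       return flag_fn(moderator, post) == should_flag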
        
         | cortesoft wrote:
         | Also, if there is a severe penalty for vouching for bad people,
         | but not much gain for vouching for someone, will you end up
         | with no one wanting to vouch for anyone else?
        
           | karpierz wrote:
           | Generally the benefit of vouching for someone is that they
           | join the community, and you personally want them to join the
           | community.
        
             | dane-pgp wrote:
             | Another reason to vouch for someone is that you trust their
             | judgement and want them to have more power in the system to
             | hide content that you personally don't want to see.
             | 
             | It's true that this will lead to echo chambers, but by
             | looking at vouching relationships rather than the contents
             | of posts, it should be easier to detect the echo chambers
             | and give people the opportunity to expand their horizons.
        
               | michaelmrose wrote:
               | In a system that tends to reward closing your horizons
               | with a sense of safety and belonging the trend wont be
               | towards expanding horizons. Don't build systems that
               | don't work like you want them to in practice because in
               | theory people could use them better.
        
         | TrainedMonkey wrote:
         | That sounds like a pretty fantastic way to build an echo
         | chamber.
        
           | marshmallow_12 wrote:
           | it would become an echo chamber.
        
           | simmanian wrote:
           | I think we need to critically evaluate what we call echo
           | chambers. The continent, country, state, city, street you
           | live in all exhibit patterns of echo chambers. In a sense,
           | our planet itself is an echo chamber. Every human network is
           | an echo chamber that boosts signals to varying degrees. A lot
           | of times, this is a good thing! Like when people come
           | together to help each other. The real problem is when the
           | network itself is designed to boost certain signals (e.g.
           | outrage, controversy) over others to a point where our
           | society breaks down. Many of today's centralized networks
           | profit greatly from misinformation, anger, and other negative
           | signals. IMO that is the problem we need to tackle.
        
           | weird-eye-issue wrote:
           | It's funny that you comment that on HN
        
             | michaelmrose wrote:
             | Which has a single front page that shows the same
             | headlines to everyone, where people who disagree can all
             | see each other's posts, and where we can disagree with
             | each other so long as we avoid being jerks to one another.
             | 
             | At worst you lose imaginary internet points if you say
             | something that the group doesn't agree with.
        
               | weird-eye-issue wrote:
               | Okay
        
         | morelisp wrote:
         | Everything old is new again.
         | 
         | https://www.levien.com/free/tmetric-HOWTO.html
         | 
         | https://en.wikipedia.org/wiki/Advogato
        
         | renewiltord wrote:
         | Lobste.rs uses a similar tree model. It is invitation only and
         | if you invite bad people repeatedly, you will get smooshed.
        
         | jonnycomputer wrote:
         | I like this, a lot. Well, I don't like that it sounds like it
         | will prevent outsiders from participating - those who have no
         | one to vouch for them - and it does sound like it would
         | encourage a monoculture of thought. But I like the idea of
         | socializing the costs of bad behavior. Indeed, those socialized
         | costs would extend to the real world. I'm intrigued and
         | perturbed at same time.
        
         | crazygringo wrote:
         | Yes, I'm convinced at some point we're going to figure out an
         | algorithm to solve content moderation with _some_ version of
         | crowdsourcing like this based on reputation, though I'd prefer
         | a system based on building up trustworthiness through one's
         | actions (consistently flagging similarly to already-trustworthy
         | people).
         | 
         | But the challenge is still the same one you describe -- what do
         | you do with competing "groups" or subcommunities that flag
         | radically different things? What do you do when the supporters
         | of each side of a civil war in a country consider the other
         | side's social media posts to be misinformation and flag them?
         | Or even just in a polarized political climate?
         | 
         | I still think (hope) there would have to be _some_ kind of
         | behavioral signal that could be used to handle this -- such as
         | identifying users who are "broadly" trustworthy across a range
         | of topics/contexts and relying primarily on their judgments,
         | while
         | identifying "rings" or communities that are internally
         | consistent but not broadly representative, and so discount that
         | "false trustworthiness".
         | 
         | But that means a quite sophisticated algorithm able to identify
         | these rings/clusters and the probability that a given piece of
         | content belongs to one, and I'm not aware of any algorithm
         | anyone's come up with for that yet. (There are sites like HN
         | which successfully detect small _voting_ rings, but that's a
         | far simpler task.)
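         | 
         | One naive starting point (a sketch, not a proven algorithm):
         | score each user's flagging agreement against everyone else,
         | and treat "agrees intensely with one small cluster, and with
         | no one else" as ring-like:
         | 
         |   def agreement(a, b):
         |       # a, b: sets of item ids each user flagged.
         |       union = a | b
         |       return len(a & b) / len(union) if union else 0.0
         | 
         |   def broadness(user, flags, users):
         |       scores = sorted(agreement(flags[user], flags[u])
         |                       for u in users if u != user)
         |       median, top = scores[len(scores) // 2], scores[-1]
         |       # Near 0: clique-like agreement; near 1: broad.
         |       return median / top if top else 0.0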
        
           | foerbert wrote:
           | I wonder if you could try to address this by limiting who can
           | flag a given post.
           | 
           | Even just doing it very naively and choosing, say, a fifth of
           | your users for each post and only giving them the option to
           | flag it might help significantly. It would probably make it
           | more difficult to motivate the members of these problematic
           | groups to actually coordinate if the average expected result
           | was the inability to participate.
           | 
           | And you could do it in more sophisticated ways too, and form
           | flag-capable subsets of users for each post based on
           | estimates about their similarity, as well as any other
           | metrics you come up with - such as selecting more
           | "trustworthy" users more often. This would help gather a
           | range of dissimilar opinions. If lots of dissimilar users are
           | flagging some content, that seems like it should be a strong
           | signal.
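           | 
           | Even the naive version is only a few lines (a sketch;
           | hashing just makes the "fifth of users" deterministic per
           | post):
           | 
           |   import hashlib
           | 
           |   def may_flag(user_id, post_id, fraction=0.2):
           |       key = f"{user_id}:{post_id}".encode()
           |       digest = hashlib.sha256(key).digest()
           |       return digest[0] / 256 < fraction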
        
           | jonnycomputer wrote:
           | Sounds like a network analysis problem to tackle.
        
           | michaelmrose wrote:
           | Presumably instead of global moderation you could have
           | pluggable meta moderation where you pick the moderators so
           | you can have fun stuff like Moderator A whom you follow
           | banned Bob therefore you can't see his posts or his comments
           | on your posts but I don't follow A I follow B who believes
           | Bob is a paragon of righteousness and so I see Bobs words
           | everywhere and we all in effect have an even more fragmented
           | view of the world than we have today with conversations that
           | are widely divergent even within the same social media group
           | or thread.
        
         | michaelmrose wrote:
         | I think lousy, annoying, manual moderation in smaller
         | communities is hard to beat. Human beings have flaws, but we
         | have hundreds of thousands of years of adaptation to making
         | small social circles work. Those adaptations might not work AS
         | well in groups of hundreds or low thousands, but they can be
         | made to work acceptably.
         | 
         | When you say highly engaged community members, I hear people
         | who have no life, who derive self-importance via imaginary
         | internet points, and whose social position comes not from
         | doing things but from running their mouths. While the system
         | claims to encourage community, it discourages it by
         | potentially punishing association. It would make people afraid
         | of being associated with ideas others consider bad, which
         | sounds great if you imagine communities run by people who are
         | largely good and intelligent, when in fact people are largely
         | bad, selfish, and stupid.
         | 
         | It would be ruthlessly gamed by individuals whose status would
         | be based on their efforts to stir up drama. That sounds
         | fantastic when it's directed at people like Epstein or Harvey
         | Weinstein, less so when you realize it would be effective at
         | silencing people regardless of guilt, because people would
         | need to, as you say, cut the branch loose.
         | 
         | I have literally never heard a worse system of meta moderation
         | proposed.
        
         | mdoms wrote:
         | Sounds like a way to create highly entrenched filter bubbles.
        
         | da_big_ghey wrote:
         | This is much like what the private torrent trackers do, though
         | without the points system. So maybe there is some existing
         | precedent for a system like this.
        
         | inetknght wrote:
         | Guilty by association it is, then. And there is no way to undo
         | or pay off a negative score. This is a terrible solution.
        
         | paxys wrote:
         | Every system that relies on crowdsourcing and/or reputation to
         | solve such problems is doomed to fail. Remember when manual
         | curation and recommendation of products/places/content was
         | supposed to be fully replaced by online ratings & reviews?
        
           | crazygringo wrote:
           | > _was supposed to be fully replaced by online ratings &
           | reviews?_
           | 
           | I mean, it _has_ though.
           | 
           | When I want to buy something new, I find Amazon reviews to be
           | far more helpful than anything else that has ever existed.
           | Obviously you can't _only_ look at ratings or _only_ read the
           | first review, but it's pretty easy to find the signal amid
           | the noise.
           | 
           | Similarly, TripAdvisor has given me _far_ better
           | recommendations of sights to see while traveling when
           | compared to Lonely Planet. Yelp is eons better for finding
           | great restaurants than Zagat ever was. And so on.
           | 
           | I don't understand how you think these systems are "doomed to
           | fail" when they already exist, are used by hundreds of
           | millions of people, and are better than what they replaced?
        
         | im3w1l wrote:
         | This is a web of trust, except that you have a designated root.
        
       | lifeisstillgood wrote:
       | >>> One could also think of big European cities before modern
       | policing - 18th century London or Paris were awash with sin and
       | prone to mob violence, because they were cities.
       | 
       | Is the solution in the article? Do we simply need to recognise
       | that, as all of society is now online, we need online police?
       | Online community support officers (UK police adjuncts; think
       | teaching assistants)?
       | 
       | I suspect there is an overlap with the "defund the police"
       | movement and the notion that we need to take away a lot of
       | policing functions that are not actually violence / crime related
       | - eg mental health. Social work is .. a lot of work.
       | 
       | Edit: It's worth noting that there are ~24 million or more
       | police officers whose job description simply did not exist
       | before Sir Robert Peel. That's a bigger number than I imagined!
       | 
       | wow: https://www.worldatlas.com/articles/list-of-countries-by-
       | num...
        
         | intended wrote:
         | Yes.
         | 
         | And that brings up the issue of how, exactly, we are supposed
         | to allow a literal "thought police".
        
           | lifeisstillgood wrote:
           | Firstly, it's not a thought police. It's a published-
           | statement police (pretty sure there are lots of laws like
           | "disturbing the peace").
           | 
           | This will play out over time - call it 30 years - as we try
           | to find out how to do lots of new things
           | 
           | - Handle mental health better. Partly we need medical
           | breakthroughs, but social acceptance will be a huge
           | improvement, as will standardised approaches, early-years
           | interventions, and detection. It will take huge amounts of
           | parent training - and the recognition that this has genuine
           | costs (how many start-ups did not start because the parent
           | decided to put their energy into supporting a child? And woe
           | betide anyone suggesting that is just another hurdle to be
           | overcome with a go-getting attitude).
           | 
           | So we need huge investment in dealing with chronic mental
           | health, not just medical, but social support, education etc.
           | 
           | - Then sorting out the acute cases. Look at any UK prison.
           | It's basically young men with some mix of drug / mental
           | health / abuse issues. Want to reduce the prison population?
           | Start 20 years ago. Don't defund the police - simply
           | introduce interventions so that in 20 years they won't need
           | to do the social-work job they do now.
           | 
           | There are probably a dozen brilliant papers on this, clearly
           | showing what we should do, already modelled in a few
           | enlightened communities globally. But it's going to take a
           | decade of mistakes before those percolate up.
           | 
           | Let me know if you spot them early.
           | 
           | After that, it's social norms. We are trying to find a set
           | of behaviours that are acceptable in the new online spaces.
           | Public urination is frowned upon IRL - trolling is the
           | online equivalent, I guess. One can imagine things like loss
           | of anonymity being the first part. Then slowly people
           | develop tools that use humans' inbuilt social mechanisms -
           | so, for example, some asshat is intolerable, and a record of
           | their conversation is sent to the mother-in-law, 4
           | grandparents, and all their wife's bridesmaids (this may not
           | work, but you get the idea). We know this sort of thing
           | works because every so often we all find that great viral
           | thread where someone gets a comeuppance.
           | 
           | All of this does demand content moderation, as Ben says -
           | and yes, I do think all of this is too much. My take is that
           | social media will die back to manageable levels. We are
           | looking at a mass-crowd situation, a swirling football
           | crowd, and asking: how can this crowd ever become a
           | manageable city? Well, crowds don't become cities - they
           | disperse.
           | 
           | - There are too many forms of social media. Each of us has
           | one main form and keeps up with the rest - just as we would
           | have one friend in a crowd but interact with others. This
           | will just die down. I mean, how much value do we get from
           | social media versus the constant worry in the back of our
           | minds that makes us check all the time? As humans develop
           | social-media inoculations, this will die back.
           | 
           | Secondly, ad dollars will help social media die back -
           | really, it's amazing no one has noticed it's a scam and a
           | waste of money.
           | 
           | At some point, regulation and advertising will drive things
           | to a point where it is simply easier to dump the algorithm,
           | stop driving for engagement, and just supply the limited
           | feed of friends' posts, interspersed with billboard ads.
           | Influencers will still influence, ads will still pop up, but
           | as we are choosing whom to follow, we won't care. Just
           | follow fewer people.
           | 
           | Agents - the other big silent one. I can run a filter over
           | my emails, but the Facebook client won't let me. This is the
           | other big change likely to occur - software agents acting in
           | my best interest, filtering shit for me.
           | 
           | Anyway bed time
        
       | bjt2n3904 wrote:
       | This article is absolutely fantastic. Excellently written.
       | 
       | I've always maintained Facebook made a mistake when they took
       | responsibility for misinformation posted on their platform. Now,
       | four years later, they're continuing to double down on this
       | stance, and forming "ministries of truth".
       | 
       | Freedom of speech is a powerful concept, and it does not like to
       | be stifled by those who argue it should only apply to the
       | government. When you fight against that principle, you win in
       | the short term and lose in the long term. We are now starting to
       | see the ugly realities of those long-term losses, four years
       | later.
       | 
       | The author is touching on something prescient here, but I
       | disagree with some of his observations - for example, that the
       | solution to "virus scanners" playing whack-a-mole was to move to
       | cloud computing. (The solution was clearly to improve software,
       | with things like memory-safe programming languages. Moving to
       | cloud computing reduces freedom rather than enhancing it, and
       | centralizes all the valuables in a single location, a la the
       | Tower of Babel.)
       | 
       | If you remove the "algorithmic feed" mechanic, much of the abuse
       | vanishes instantly. I am shown what came latest. Not this weird
       | algorithmic mash of content that has been gamified for my
       | attention. RSS is the way to go.
        
       | kingsuper20 wrote:
       | Any ex-Usenet moderators out there?
       | 
       | I don't remember this being such a huge deal, and there was
       | always the alt groups.
        
         | dsr_ wrote:
         | Current Usenet moderator here, of an exceedingly low traffic
         | group.
         | 
         | In the last year I've only had to kill two posts, both from the
         | same troll.
         | 
         | On the other hand, I use my personal killfile quite liberally.
        
         | Mediterraneo10 wrote:
         | Things have changed since the 1990s. Some mild homosexual slurs
         | might have made their way through Usenet moderation, and trans
         | issues were not even on the radar. Today, those favouring
         | content moderation expect what they see as anti-LGBT attitudes
         | to be filtered out.
         | 
         | Also, Usenet was so niche that state actors weren't running
         | troll armies for propaganda purposes, but this is something any
         | modern social network has to deal with.
        
         | jfengel wrote:
         | A lot of the language we use for dealing with moderation was
         | developed for Usenet and related systems: trolling, spam,
         | flames, etc. The problem was bad enough that we developed names
         | for it.
         | 
         | There was certainly less of it; all of Usenet fit in a box of
         | magtapes. But it could still have some pretty big tempests in
         | that teapot.
         | 
         | It never even got close to solving spam, which exploded after
         | Eternal September. It took AI-esque systems (and enormous heaps
         | of data to feed them) to reduce spam to a manageable level.
         | Trolling is a harder problem than spam.
        
         | bombcar wrote:
         | Usenet self-selected for (relatively) wealthy (usually)
         | Americans who had the intelligence and know-how to get online
         | at a time when it was costly and difficult.
         | 
         | And it fell to Eternal September.
         | 
         | Really, the only way to moderate a group is to keep the group
         | small.
        
       | intended wrote:
       | No?
       | 
       | I'll argue that the main product of reddit is not the community,
       | it's the content moderation. Maybe eventually the content
       | moderation toolkit.
       | 
       | At this point, reddit mods and users must have collectively
       | built the largest collection of regexes to identify hateful or
       | harmful speech for a huge number of subcultures.
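       | 
       | In practice that toolkit boils down to rule sets like this
       | sketch (placeholder patterns standing in for years of
       | accumulated, subculture-specific regexes):
       | 
       |   import re
       | 
       |   PATTERNS = [
       |       re.compile(r"\bbuy (followers|upvotes)\b", re.I),
       |       re.compile(r"\b(placeholder_slur)\b", re.I),
       |   ]
       | 
       |   def automod(comment):
       |       hit = any(p.search(comment) for p in PATTERNS)
       |       return "remove" if hit else "approve"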
        
       ___________________________________________________________________
       (page generated 2021-04-13 23:01 UTC)