(DIR) Post #B0FID1pdTmCKACGLJI by timbray@cosocial.ca
2025-11-14T17:40:41Z
0 likes, 2 repeats
Over on #Bluesky there was a bit of a controversy over the suspension of Sarah Kendzior. I thought it was an interestingly nontrivial moderation problem, so I wrote up a little case study on how this would have been handled on Mastodon: https://www.tbray.org/ongoing/When/202x/2025/11/13/Kendzior-Case-Study#moderation
(DIR) Post #B0FLWtNGvvefaT9vBg by tomjennings@tldr.nettime.org
2025-11-14T19:54:56Z
0 likes, 0 repeats
@timbray Nicely done.
(DIR) Post #B0FZuioNFwT18S5oWW by ricci@discuss.systems
2025-11-14T22:36:04Z
0 likes, 0 repeats
@timbray Something that is somewhat implicit in the final section, but that I want to expand on: on the Fediverse, if you are on a small to medium sized server, there is a good chance that the moderator and reported user know each other to some extent, and are much more likely to have established some level of trust. This is why the moderator is more likely to understand the user's intention, and why the user is likely to take a warning seriously. It is also why this actually scales *better* than moderation on big instances or centralized platforms.
(DIR) Post #B0GdeVkqcxyCyuvewS by ricci@discuss.systems
2025-11-15T10:52:41Z
0 likes, 0 repeats
@laurenshof @timbray Let's be clear, though: literal death threats are allowed on Bluesky. You are even allowed to post videos of yourself killing people along with the death threat, to prove that you are both serious about carrying out the death threat and capable of doing so. https://bsky.app/profile/deptofwar.bsky.social/post/3m4z6dwopcc2d
(DIR) Post #B0GdiHBqih0sy8M0sS by ricci@discuss.systems
2025-11-15T10:53:22Z
0 likes, 0 repeats
@laurenshof @timbray To me the question is how far the notion of "community standards" scales. Does it scale to groups of hundreds? thousands? millions? billions? Both Bluesky/Atmosphere and the fediverse recognize that there is *some* limit - it's built into the structure of the fediverse, and Bluesky built 'stacked moderation' via the labeling system for this purpose.
(DIR) Post #B0HbcT5fYYPrOCWg7c by ricci@discuss.systems
2025-11-15T22:04:34Z
0 likes, 0 repeats
@laurenshof @timbray > and yes, that is indeed an unfair application of the rules, but authoritarian regimes arent fair, thats the world we live in

It's also a good illustration of why placing so much moderation power in one set of hands is - well, not out of the norm for modern social networks - but very dangerous in today's world.

I've been working on a metric to quantify the power of moderators (https://codeberg.org/ricci/are-we-decentralized-yet/src/branch/main/BIndex.md), and it takes 13,516 fediverse moderator teams to equal the blocking power of the one moderator team at Bluesky PBC. (That is, one moderation decision at Bluesky can cut someone off from about 99.5% of the network; to cut someone off from that fraction of the Fediverse, they would have to be blocked by over 13k instances.)

And I'm very aware of the fragmented nature of the fediverse, the ways in which mods are inconsistent and/or unfair, the ways in which fights between admins fracture parts of the network, etc. So I'm not trying to sugarcoat one network here. But this is a *lot* of power. It's no wonder that's the network the authoritarians would go after (in addition to its greater size and generally higher level of publicity).
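To make the arithmetic behind that 13,516 figure concrete, here is a minimal sketch of how a blocking-power count like this could be computed. It assumes the metric is simply "the smallest number of per-instance moderation teams whose combined user bases cover a given fraction of the network"; the function name and the toy instance sizes below are hypothetical, and the authoritative definition is in the linked BIndex.md.

    # Minimal sketch under assumed semantics (not the actual BIndex code):
    # smallest number of per-instance moderation teams whose combined
    # user bases reach `fraction` of the whole network.
    def teams_to_block_fraction(instance_user_counts, fraction=0.995):
        total = sum(instance_user_counts)
        covered = 0
        # Count the largest instances first, so the result is the
        # *minimum* number of independent blocking decisions needed.
        for teams, users in enumerate(sorted(instance_user_counts, reverse=True), 1):
            covered += users
            if covered >= fraction * total:
                return teams
        return len(instance_user_counts)

    # Bluesky today: effectively one instance holding nearly all users.
    print(teams_to_block_fraction([30_000_000]))   # -> 1

    # A toy fediverse: a couple of large servers plus many small ones.
    fedi = [120_000, 60_000] + [5_000] * 200 + [200] * 20_000
    print(teams_to_block_fraction(fedi))           # -> 20073

The point the numbers make: on a single-operator network one decision covers everything, while on a long-tailed network of independent servers the same reach requires thousands of separate decisions.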
(DIR) Post #B0Hd1U5t3vClhB7WyW by ricci@discuss.systems
2025-11-15T22:20:19Z
0 likes, 0 repeats
@laurenshof @timbray I see your point here, but I also see it as not a matter of "where are death threats allowed" but "where are these statements considered actual condoning of violence and where are they not" - there are cultural contexts (and I don't just mean in an international sense, but in a "community" sense as well) in which certain statements are seen as actual intent to do harm, or to encourage others to do so, and others in which they are not.

A platform like today's Bluesky (I know, they aspire to, and are, moving in a more decentralized direction), where one moderation team handles 99.5% of the network's users, has to pick one standard and apply it everywhere (well, with exceptions for the authoritarian regimes, as we discussed elsewhere in this thread). And of course this standard is not at *all* neutral. It is a white American standard. That's fine, if that's the userbase they want.

But this is what I mean when I say that it's a good thing for moderators and reported users to have an established relationship and established trust. It's not just because "hey, my buddy is not going to kick me off"; it's because, in an online community that is closer in size to a natural human community, the social tools for establishing expected behavior are - certainly imperfect - but much more human. A distant, faceless moderator team is a necessity at very large scale, but it is also fundamentally more authoritarian.
(DIR) Post #B0HdO2IIaTng8VgBPc by ricci@discuss.systems
2025-11-15T22:24:24Z
0 likes, 0 repeats
@laurenshof @timbray By the way, I've built a couple of tools for putting numbers behind these things. If you haven't seen them, they are https://arewedecentralizedyet.online/ and https://moderation-explorer.online/
(DIR) Post #B0HelwMMP3fIdQXdB2 by poetaster@mastodon.gamedev.place
2025-11-15T22:39:54Z
0 likes, 0 repeats
@ricci thank you for your measured additions to @timbray's measured analyses. I tend toward the very liberal end of permissiveness, but I also do counseling on occasion. Hence, I tend to self-censor heavily, simply because I'd prefer to draw ire for a good reason. But I have never felt it's imposed on me here on Mastodon. Civility seems more common here :-)