[HN Gopher] Facebook bans Holocaust film for violating race policy
___________________________________________________________________
Facebook bans Holocaust film for violating race policy
Author : pr0zac
Score : 306 points
Date : 2022-09-16 05:57 UTC (17 hours ago)
(HTM) web link (www.rollingstone.com)
(TXT) w3m dump (www.rollingstone.com)
| [deleted]
| hrbf wrote:
| > Mark Zuckerberg has created a monster that has no oversight.
|
| You don't say? I'm shocked, I tell you.
|
| Hopefully this will at least generate more buzz for the movie
| than Facebook could have.
| pGuitar wrote:
| They tell you that there's no oversight so that they aren't
| held responsible.
| croes wrote:
| So what skin color can a person with blue eyes have?
|
| Every possible one.
| rthomas6 wrote:
| This is like the modern day version of not being able to search
| for Moby Dick on my high school's library computer.
| bslorence wrote:
| Except you probably didn't get permanently banned from using
| the library when you tried to search for that.
| yieldcrv wrote:
| Reminds me of how ignorant people used to be about computers
| 25 years ago. That could totally still happen in some areas.
| jquery wrote:
| I got permabanned from Twitter last month for sharing an
| image of a smiling anime trans girl saying "STFU Terf". The rule
| violation was "glorifying violence". This permaban was upheld on
| appeal, although curiously the ban reason was changed to
| "abuse/harassment". No other account warnings. My LGBT-pride
| account was oh-so-coincidentally banned shortly after getting the
| attention of some fairly large anti-trans Twitter influencers.
|
| A month later and we have children's hospitals under bomb threats
| because of these same Twitter influencers, yet accounts from Matt
| Walsh and Libs of Tik Tok are still running strong. Twitter has a
| liberal bias? Give me a break.
| kbelder wrote:
| What's "Terf"? A name?
| rat87 wrote:
| https://en.wikipedia.org/wiki/TERF
|
| Trans exclusionary radical feminist
|
| Although it's sometimes directed at people who are just
| transphobes, it's aimed at certain feminists who don't consider
| trans women women or trans men men and who hold very negative
| anti-trans views. See the JK Rowling controversy.
| Overtonwindow wrote:
| Does anyone know if these decisions are made by U.S.-based
| checkers, or is it outsourced? When FB does something as asinine
| as this, I have to assume it's a horrible cubicle farm in
| S.E. Asia.
| scohesc wrote:
| I bet you my now defunct and cobwebbed Facebook account that
| they're using AI algorithms for large-scale content moderation,
| with individual "suspect" cases forwarded to an Amazon
| Mechanical Turk-style operation where people make pennies on the
| dollar following strict lists of instructions, and where any
| deviation or leeway in free thought is punished by immediate
| dismissal.
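|
| Roughly this shape, if I had to guess -- every name and threshold
| below is hypothetical, just to illustrate the two-stage flow:
|
|     from dataclasses import dataclass
|
|     @dataclass
|     class Post:
|         id: str
|         text: str
|
|     def classifier_score(post: Post) -> float:
|         # stand-in for an ML model returning P(policy violation)
|         return 0.0
|
|     def human_review(post: Post) -> bool:
|         # stand-in for a time-boxed contractor decision
|         return False
|
|     AUTO_REMOVE = 0.95    # above this, removed with no human in the loop
|     SEND_TO_HUMAN = 0.60  # "suspect" band routed to piecework review
|
|     def moderate(post: Post) -> str:
|         score = classifier_score(post)
|         if score >= AUTO_REMOVE:
|             return "removed"
|         if score >= SEND_TO_HUMAN:
|             return "removed" if human_review(post) else "kept"
|         return "kept"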
| cmrdporcupine wrote:
| Here we are, deep down in the dystopia of Automatic Content
| Classification by robots.
|
| Far less important of an example, but I was just put in "Facebook
| jail" for 24 hours for posting a picture of my son at the beach
| in his bathing suit with no shirt. Y'know, as one does at the
| beach. I can only assume it's because my son has long hair and
| the Convolutional Neural Network or whatever decided he was a
| girl and therefore I'm a pervert.
|
| Sadly it was the "appeal" I submitted that got me blocked.
| Presumably by a "human", but who knows.
|
| Before I appealed they were simply going to not show the picture.
| Appealing got me in "trouble." That might be even _worse_ than
| the original misclassification. On top of that, if I was
| _actually_ a "community standards" violator who posted potential
| child exploitation imagery, I'm not sure how a 24-hour ban on
| activity on Facebook is of any use, either? Except I'm terrified
| to imagine a world where Meta might have called police on me
| based on the output of a neural network image classification.
|
| Others have said it, but I'll say it again: this kind of business
| doesn't scale ethically. You can't have billions of people on a
| bulletin board. It doesn't work. Moderation is essential to
| modern communication. But you can't do moderation automatically
| or at scale and in a universal way.
|
| Very dark patterns emerge the moment you go FAANG scale and toss
| algorithms tuned for advertising and "engagement" into the mix,
| and attempt to do so with the help of computers.
|
| "Sad" as it is, we will need to "retreat" back into smaller
| forums and BBSs where communities self-police.
|
| Facebook has infiltrated so many aspects of society. Want to
| interact with the parents from the local school your kids go to?
| You have to do that on Facebook. Event announcements? Keeping in
| touch with your distant aunt? Facebook.
|
| If something like Facebook is really a universal utility, it will
| have to be put under public administration; like the post office.
| But clearly, that isn't going to happen and would have other
| problems.
|
| I am going to have to find some other way to engage with old
| friends and family.
|
| Frank Herbert had some intuition with his whole "Butlerian Jihad"
| thing.
| protomyth wrote:
| I sometimes wonder what would happen (it won't) if some fed-up
| Congressperson drafted a bill that would allow folks to sue
| Facebook, etc. for libel when it accused you of being a ped0,
| criminal, or other malcontent. Perhaps they would be a bit less
| happy to accuse their users of vile things.
| toomuchtodo wrote:
| Sec 230 repeal? Or something adjacent to that strategy?
|
| https://www.politico.com/news/2022/09/08/white-house-
| renews-...
|
| https://www.whitehouse.gov/briefing-room/statements-
| releases...
| protomyth wrote:
| More in the way of a specific law. I don't think a full Sec
| 230 repeal is a good idea. I really, really want services
| to have to specify what you did wrong, and you should have
| some actual options to deal with these services. They have
| become utilities and need to have some accountability when
| they call you something vile.
| saurik wrote:
| If you are only using Facebook for its original purpose of
| keeping up with old friends and family, then it doesn't
| actually have the scaling problem: if one of your friends or
| someone in your family starts posting a ton of racist bullshit
| you either confront them about it or drop them as a friend
| (preferably bilaterally), and either result is actually better
| for society than having Facebook attempt to present a skewed
| view of them that tries to just pretend they aren't posting all
| of the horrible stuff in the first place (whether by blocking
| it from being posted, quickly removing it once posted, or
| running some complex ranking algorithm that does a good job of
| hiding it).
|
| It is only when you start having strangers talking to strangers
| that you run into a need for moderation, and even there you
| should be able to scale by sharding and punting the problem to
| others: if you have a group--similar to a real-world club--the
| moderation is on you, as the issues in your community shouldn't
| leak to people who haven't joined your community (and if people
| leave your community because you fail, all the better). The
| only real issue is that Facebook wants to--for the increased
| engagement, and thereby ad revenue--run a ton of recommendation
| algorithms that shove content from people you have no
| affiliation to in your face constantly (which one might notice
| should already be considered antithetical to the design of a
| _social network_ ) which leads to a ton of stranger-to-stranger
| interactions that are entirely "on Facebook" to ensure are
| clean.
| ok_dad wrote:
| Facebook was originally gated to just university students
| from specific universities, then they started to open it to
| everyone. It was basically a university bulletin board.
| commandlinefan wrote:
| > using Facebook for its original purpose of keeping up with
| old friends and family
|
| Not that Facebook lets you do that any more. I was off of FB
| for a while and recently rejoined because my community and my
| kid's schools only use FB for communications now. I'm
| connected to a very small circle of actual friends and
| family, but I still get daily political memes in my feed
| (politics I vehemently disagree with as well) no matter how
| many times I try to block them.
| end_of_line wrote:
| The biggest spam I get is from Facebook's own ads. I live in
| Switzerland and I am BOMBARDED with financial frauds, scams, and
| Ponzi-like schemes served directly through their ads. I checked
| why in their system and got the answer "primary location:
| Switzerland, male, 25-35 years old". I tried to report and block
| them all, but Facebook support, when it replied at all, said it
| was all in line with their policies. So I deactivated my Facebook
| account and now use only Messenger.
| saurik wrote:
| Yeah sorry, I meant something more like "if you look at the
| goal of the original use case of Facebook" not "if you as a
| user simply use some subset of the website". The greed to
| maintain and grow the valuation of a publicly-traded
| company--and thereby to optimize the entire thing for
| explicitly only maximal revenue (and thereby maximal
| engagement)--has so universally destroyed the dream of
| social networking that we simply don't actually have a
| large social network anymore.
| cmrdporcupine wrote:
| And yet here I was, posting a picture of my kids at the beach
| so my mom and friends could see them, and I ended up banned
| from participating on Facebook for 24 hours and accused of
| violating community decency.
|
| They're f*cked.
| saurik wrote:
| Yeah... so I honestly consider this to be a separate (also
| horrible, for avoidance of doubt) problem than
| "moderation"? Like, Apple and Google aren't exactly having
| a _moderation_ problem when it comes to attempts to curtail
| CSAM stored on their photos platforms (which has led to
| their automated flagging systems, the need to scale appeals and
| escalation to the entire world of users, and then
| the subsequent concern about "are they going to call the
| police on me when I fail my appeal?!")... it is more of an
| attempt to deal with awkward regulatory "think of the
| children" overreach and hostile American law enforcement.
| Facebook seems to me to be doing the same thing here and
| failing.
|
| Imagine a world where social networks were built by people
| who simply didn't care about "engagement" at all and
| weren't being motivated by ad revenue... I think you could
| design an end-to-end encrypted version of the system where
| no-one except your friends--and certainly not the network
| operators--even knew what you were posting in the first
| place and they would hopefully be able to avoid installing
| client-side filters for CSAM (but, with stuff like FOSTA
| and SESTA, maybe not?). This model should even work, I'd
| think, for Twitter/Instagram-like broadcast models (though
| the legal implications of the well-known "secret" key being
| published and accessible to the network might lead to
| various problems; you might have to go fully-
| decentralized).
| jquery wrote:
| We don't just need human moderation, we need due process. These
| companies control too much of our digital lives and make so
| much money from us, but have zero regard for us as soon as it's
| inconvenient to investigate a case, because that would cut into
| their insane margins. I was permabanned from Twitter recently
| merely for getting the attention of some large influencers who
| disagreed with me (no actual rule was broken, but I received
| enough reports that my account was nuked).
|
| I appealed multiple times but Twitter's appeal process is a
| sham for us little people. I doubt any humans ever looked at my
| account.
| swayvil wrote:
| It's similar on Reddit and, yes, even HN. The mere fact that
| you were "flagged" (by one of your peers) means that you are,
| to a significant degree, flagworthy and thus justifiably
| treated like a criminal.
|
| Maybe these people doing the flagging are special people who
| have proven their worth. I dunno.
| root_axis wrote:
| They only control what we voluntarily give them. Social media
| controls nothing of importance in my life.
| dotnet00 wrote:
| Unless you don't interact with anyone, that's a completely
| untenable position due to network effects. E.g., my family
| (spread around the world) uses WhatsApp to communicate with
| each other, which I'm not much of a fan of, but it's pretty
| much impossible to get them onto another platform given that I
| no longer live anywhere near them to teach them, and they have
| to use it anyway for communicating within their residential
| community, etc.
|
| Sure technically I'm not being coerced into using WhatsApp,
| but it isn't exactly reasonable to say that if I really
| cared I would just not talk to my family until they figure
| out how to use a platform I prefer.
| root_axis wrote:
| WhatsApp isn't social media, it's a messaging app, but
| more critically it's based on phone numbers, so WhatsApp
| really has no control over access to your contacts.
|
| > _Unless you don't interact with anyone, that's a
| completely untenable position due to network effects_
|
| It doesn't have to be this way though. Between sms,
| email, telegram, signal, and discord I have communication
| channels to every person I actually care about, and it's
| trivial to bridge additional layers of communication if
| needed.
|
| > _it's pretty much impossible to get them over to
| another platform_
|
| I hear where you're coming from, but in my view this is
| an intentionally defeatist attitude. We're throwing up
| our hands and saying "social media owns us and there's
| nothing we can do, it's just too hard to install another
| app". In reality, if it's important, it's not that hard.
| There's no disputing that social media is _convenient_ ,
| but it isn't _vital_.
| dotnet00 wrote:
| I am aware that WhatsApp isn't social media, it's just an
| example of an app I would ideally like to switch away
| from.
|
| I'm not really sure how it's defeatist when even just
| getting myself off the app would make things much harder
| for my relatively tech-illiterate parents on the other
| side of the world, with no tech-literate relative to lean
| on for help. They've used WhatsApp for a few years and can
| easily get help from any young neighbor in case of issues.
|
| It isn't like I'm not trying. For example, for
| communicating with some close fairly tech literate
| friends, we go through a relatively big effort to host
| and maintain our ideal of a self-hosted Matrix and
| Misskey node. But there we can manage it due to everyone
| in the group being able to at least describe the errors
| they run into.
| UniverseHacker wrote:
| I hate social media, and use it as little as possible, but
| can't figure out how to do what you are saying here in
| practice without total social isolation in the real world
| outside of social media.
|
| For example, there are several sports I participate in
| (physically, in real life) but these are organized on
| either Instagram or Facebook. I have created accounts
| solely to access this information (date/time of events).
| Facebook and Instagram are constantly disabling and
| blocking my accounts, apparently because my low engagement
| (zero posts, only "lurking") triggers some sort of bot
| detector algorithm. I have no recourse, and can't contact
| anyone at Meta about this.
|
| I've tried getting these communities to inform me outside
| of facebook/instagram, but it's too big of an ask. These
| mediums work for everyone else except me, and the people
| involved lack the tech savvy or interest in trying to find
| an alternative.
| swayvil wrote:
| You are being punished for not "engaging" enough. Wow. I
| mean, I imagined it. So it really is a thing.
|
| Like that Black Mirror episode where you aren't allowed
| to close your eyes when there's a commercial on.
| BlargMcLarg wrote:
| Emphasis on 'we', not 'my'. These guys are already making
| shadow profiles out of info given by friends, corporations,
| etc. Not participating is also causing red flags in certain
| circles. Withstanding peer pressure is one thing, having
| your identity made up or flagged out of your control is
| another.
|
| This is a slippery slope that should be tackled _before_ it
| gets to that. The only people not affected, even indirectly, are
| the people who will die without children or younger cohorts
| as friends.
| root_axis wrote:
| > _These guys are already making shadow profiles out of
| info given by friends, corporations, etc._
|
| True, it's a shady practice indeed, but signing up and
| giving them even more information directly from the
| source is obviously far worse than the fraction of
| signals they can extract from your friends.
|
| > _Not participating is also causing red flags in certain
| circles._
|
| There's no accounting for the peculiarities of social
| groups, you could say the same thing about refusing to
| smoke weed or drink alcohol, it doesn't mean those vices
| are vital, and social media is no different.
| BlargMcLarg wrote:
| I don't think you fully grasp what the slope is sliding
| to. There are enough anecdotes of companies doing
| background checks on social media and actively flagging
| individuals for having zero presence. We also have social
| credit score horror stories.
|
| Your answer doesn't work anymore when lack of
| participation is considered wrong. We should be blocking
| that instead of assuming things will just work out
| forever as long as individuals guard their identity.
| Again, this goes beyond just standing up against peer
| pressure.
| root_axis wrote:
| > _There are enough anecdotes of companies doing
| background checks on social media and actively flagging
| individuals for having zero presence._
|
| I'm sure it happens, but I don't believe that to be a
| real issue since using the presence of a social media
| account as a filtering tool for hiring is obviously
| ridiculous, and as someone who has done a lot of hiring,
| it's completely absurd to imagine we'd ever turn away a
| good candidate because their name didn't hit on a social
| media search, especially because it's very common for
| people to use nicknames or false names on social media or
| to completely remove their account from search
| altogether.
|
| I also don't see the peer pressure thing as an issue.
| Adults don't meaningfully peer pressure other adults to
| use social media, nobody cares, and kids will peer
| pressure for everything from video games to sex and
| drugs, but it's pretty obvious that being peer pressured
| to do drugs isn't a valid reason to use drugs.
| swayvil wrote:
| You haven't thought about peer pressure enough. It's got
| depths.
|
| "You must do this thing" evolves into "you must support
| this thing" and then into "absence of support is
| equivalent to nonsupport (antisupport... whatever)".
|
| And in social media this evolution is fast.
| BlargMcLarg wrote:
| I'm not sure why you circle back to peer pressure when we
| agree it isn't an issue. Are you reading past the
| comments?
|
| >it's completely absurd to imagine we'd ever turn away a
| good candidate because their name didn't hit on a social
| media search,
|
| Understand for a moment many of these people are not
| developers with well-established CVs. These are normal
| people working at the bottom of the ladder, where there are
| plenty of replacements, and the answer to being
| irreplaceable is effectively 'start becoming a prodigy,
| establish a network early or be lucky'. Often too late
| for them. Even that advice alone is insane for the yet-
| to-be-born given a virtually global mental health crisis.
|
| Leaving things up to executives behaving in a sane manner
| has given us multiple global problems to deal with. I
| wouldn't count on their sanity to prevent another.
| root_axis wrote:
| > _I'm not sure why you circle back to peer pressure
| when we agree it isn't an issue._
|
| You're the one bringing it up. You've mentioned peer
| pressure in all of your replies.
|
| > _Understand for a moment many of these people are not
| developers with well-established CVs_
|
| It doesn't matter the industry or the CV, the idea that
| the absence of a social media account factors into hiring
| decisions in any real way doesn't make sense.
| illuminerdy wrote:
| I haven't used Facebook or Twitter in well over a decade.
|
| Nothing is more freeing than not having to put up with
| entitled, whiny idiots who think they are the moral
| authority on pretty much everything in the world.
| Especially since most of them couldn't accurately point to
| a country that isn't America on a map.
| nradov wrote:
| "We" don't need to retreat back into smaller forums. Despite
| the huge number of false positives flagged by content review
| systems, they still impact less than 1% of active users. So
| everyone else will continue using it. Facebook is terrible and
| unethical in many ways but it's still the fastest, most
| convenient way to share pictures and updates with friends and
| family scattered across the world. I don't have enough hours in
| my day to pursue other options.
| 650REDHAIR wrote:
| I've flagged 100s of illegal firearms sales on Reddit and FB
| and not a single one has been taken down.
| nradov wrote:
| Perhaps, but so what? At that scale there will be huge
| numbers of both false positives and false negatives in any
| content moderation system. If you dig into any popular
| online classified sales site you can find some illegal
| items.
|
| Criminal activity is quite a different thing than censoring
| legal content which possibly violates corporate terms of
| service. If you have evidence of an actual crime then you
| should report that to law enforcement instead of expecting
| a private for-profit company to handle the incident.
|
| And absent further hard evidence, I am frankly skeptical of
| your claim. Most people aren't experts on the nuances of
| firearms sales laws in various jurisdictions, so a post
| that appears to be soliciting a crime might be entirely
| legal (or vice versa). I don't really use Reddit, but I've
| been on Facebook for years and have never seen a post for
| an illegal firearms sale. Do you at least have some screen
| snapshots?
| tomohawk wrote:
| That's life under the Techiban
| swayvil wrote:
| In Science We Trust.
| hfbff wrote:
| "This is the action of haters - and there are sadly many in our
| society - who seek to damage the film in order to trivialize the
| Holocaust" Newton told Rolling Stone at the Toronto Film
| Festival.
|
| There are a lot of stupid things Facebook is doing, but why, of
| all things, would you accuse them of trivializing the Holocaust?
| If anything, this kind of accusation (using the Holocaust as a
| defense for everything) trivializes the Holocaust.
| himinlomax wrote:
| Why? To force them to respond.
| DoctorOW wrote:
| I think this might be implicitly accusing holocaust deniers of
| mass reporting and spreading misinformation about the content
| of the film so it gets pushed up the reporting queue and a
| Google search returns at least some results that appear to
| verify the false claims. This happens often enough to be
| recognized in marginalized communities.
| blueflow wrote:
| That's the current zeitgeist: making extreme accusations,
| devaluing them, and training people not to believe them by
| default.
|
| The chaotic evil in me loves this because it opens up many ways
| to get away with bad takes.
| piokoch wrote:
| The problem is that no automatic method we have now will catch
| context. There is a difference between "Jews were murdered by
| Germans during the war" and "Jews destroyed the German economy
| before the war" that will not be recognized by any machine we
| have today. The first is true, the second is bullshit; how can an
| algorithm know this?
|
| Moreover, even if Facebook employs humans to do moderation, for
| some contractor in Asia "Jews destroyed the German economy before
| the war" might not be easy to verify. For a contractor, maybe
| Jews did that, who cares; I have 20 seconds to moderate this and
| move on to the next post. In the same way, if I were asked to
| moderate some historical detail about the India-Pakistan conflict
| or other historical facts about Asia or Africa, I would have no
| knowledge of them at all.
|
| I was recently reading quite a lot about the war in Angola and I
| still have doubts about which side was "good" and which was
| "bad", except that the people who lived there were hurt by
| history like almost no other nation.
|
| Even worse, some facts, especially historical ones, are judged
| from different perspectives. For instance, in Poland Napoleon
| Bonaparte is a mythologized figure who brought hope to Polish
| hearts of regaining their homeland [1]. From the point of view of
| someone from Austria or Italy, well, Napoleon is considered far
| from a hero.
|
| We don't have to go back that far in history. US intervention
| in Afghanistan or Iraq can be seen differently depending on one's
| views.
|
| How to moderate all this?
|
| [1] Fun fact: not a big surprise, but Napoleon didn't give a crap
| about Poland; he even refused to give them a proper king in the
| short time he could (he chose some Saxon prince). At the end,
| when Napoleon lost, the remains of Polish military units were
| sent to Haiti to help France maintain its colonies. Many Polish
| soldiers died of tropical illnesses there; many joined the
| Haitians, as they saw that those people were fighting for their
| freedom just as the Poles had.
| peteradio wrote:
| Provide services that force people to have skin in the game
| (non-anonymous and/or paid). Don't connect the whole goddamn
| world together, the world is not a melting pot. Let those in
| the network flag/block/defend/vouch so it aligns with whatever
| culture is on that particular network.
| bwb wrote:
| Maybe a slight tweak to this: "the world is a melting pot, it
| just melts very slowly :)"
| Bakary wrote:
| The world is a lava lamp. The blobs coalesce and shift
| around but never for too long in the same spot.
| kranke155 wrote:
| From my (very) limited understanding the Angolan Civil War had
| no good sides.
|
| It was a plain power struggle where two superpowers decided to
| invest their resources to deny the other a base in Africa. The
| local proxies were fine with this since they wanted nothing but
| to kill each other. The result was a bloodbath that went on for
| decades.
| baud147258 wrote:
| > At the end, when Napoleon lost, the remains of Polish military
| units were sent to Haiti to help France maintain its colonies.
|
| Just a minor correction, the Polish were sent to Haiti in 1802,
| way before Napoleon had started losing.
| philwelch wrote:
| Yeah, Haiti broke away in a slave rebellion before Napoleon
| sold Louisiana to the United States to raise money for the
| war effort so it wouldn't make any sense for the Polish to be
| sent to Haiti at the end of the war.
| logicalmonster wrote:
| > I was recently reading quite a lot about the war in Angola and
| I still have doubts about which side was "good" and which was
| "bad", except that the people who lived there were hurt by
| history like almost no other nation.
|
| Norm MacDonald (I didn't even know he was sick) is said to have
| made the following quote, which is an interesting filter to
| look at all you know about history through.
|
| "It says here in this history book that; luckily, the good guys
| have won every single time. What are the odds?"
|
| > will not be recognized by any machine we have today. The first
| is true, the second is bullshit; how can an algorithm know this?
|
| Forget algorithms for a second, even humans can't adequately
| judge nuanced issues, particularly issues that they're
| unfamiliar with and lack the context around, and particularly
| with a definitive time limit to work against.
|
| Now think about how social media giants operate. They have
| teams all over the world, say in Bangalore, trying to judge the
| nuanced political arguments of foreigners having discussions
| about their own country's history that they don't know
| intimately. Oh, and they probably have to judge most issues in
| less than 30 seconds or they'll be too slow to keep their job.
|
| It's like asking an average American to intelligently weigh in
| on some complicated political argument around Kashmir with a
| few seconds to read a post and decide whose claim is right.
| It's absolutely ridiculous that this is the moderation standard
| that exists.
| triceratops wrote:
| > It says here in this history book that; luckily, the good
| guys have won every single time.
|
| Is that really true?
|
| The conquest of the Americas is near-universally seen as
| "evil" defeating at least "innocent" if not "good". Leaving
| aside the people who say "The Aztecs had it coming".
|
| The Roman empire did some pretty shitty things, that most
| people would recognize as evil (slavery, Celtic genocide) but
| is still regarded warmly today as the ancestor of modern
| Western society, morality, and culture. That counts as a
| "win".
| troon-lover wrote:
| logicalmonster wrote:
| > Is that really true?
|
| That's sort of the joke. It's another way of saying "the
| victors write the history books"
| triceratops wrote:
| I understand the joke and I know the victors write the
| history books, by virtue of being alive. But they don't
| always make themselves look like the good guys in those
| history books.
| logicalmonster wrote:
| > they don't always make themselves look like the good
| guys in those history books
|
| I think judging past history reasonably from our current
| perspective isn't quite so easy.
|
| From the perspective of today, virtually every human that
| ever lived in the past had views that could be considered
| some kind of racist, sexist, homophobe, religious
| extremist, etc even if they were very decent humans by
| the standards of their day. Even the great abolitionists,
| philosophers, people considered to be saints, or other
| humans that tried to be wholesome in their time likely
| had some views that would be considered totally repugnant
| to many today or committed actions that were considered
| reasonable then, but akin to war crimes now.
|
| From the perspective of 100 or 200 years from now, I'm
| sure everybody living today around 2022 will be
| considered to have committed gross and obvious crimes
| against decent human morality and will be considered to
| have had totally backwards thoughts on something or
| another. I'd hope proper context is taken into account
| when they look back at us, so I think it's fair to try
| and do the same when we judge the past.
| Veserv wrote:
| You are presupposing that for some reason Facebook _must_ do
| this. If they can not moderate then their service is defective.
| The fact that they want to make huge gobs of money does not
| "force" them to offer a defective service, they can just not
| offer it.
|
| If a construction company said: "The only projects we can make
| a profit on are skyscrapers, but we do not know how to make a
| skyscraper without having it fall down and kill everybody in
| it." They are not allowed to build skyscrapers no matter how
| important it is to their bottom line.
| commandlinefan wrote:
| > If they can not moderate then their service is defective
|
| Well wait a minute, couldn't you say the same about the ISPs
| that host websites in the first place? Isn't the standard
| pro-big-tech-censorship position "if you don't like it, make
| your own website"? If Facebook has to moderate content
| (according to any standard) in order to exist, why don't
| hosting providers also have to moderate? (FWIW I'm anti-
| censorship)
| triceratops wrote:
| Because Facebook and the hosting provider are at different
| levels of the networking stack?
| Calvin02 wrote:
| I'm guessing that you didn't hear about Cloudflare and
| Kiwi Farms?
|
| There is no stopping it.
| Veserv wrote:
| I am responding to the article and the poster's response in
| context.
|
| The article asserts that Facebook's moderation is
| defective. The person I was responding to presented a
| standard generic argument that is of the form: "Yes, it is
| defective, but the problem is too hard for anybody to
| produce a non-defective solution. Therefore, the provider
| has no choice but to provide a defective service." I am
| arguing that is untrue. If a service is defective, it can
| and should just not be offered.
|
| Note this is entirely contingent on the service being
| defective according to your value system. I have made no
| claim as to whether or not I agree with the specifics here,
| just that the generic conditional argument presented is
| flawed.
| invisible wrote:
| I think the contention is that hosting is a service. Just
| as you are suggesting Facebook is offering a defective
| service (which you said should not be offered), so are
| hosting providers, as they similarly can't moderate
| granularly.
| Calvin02 wrote:
| The argument that you're making is extremely flawed.
|
| It is similar to: car manufacturers can't guarantee that
| their cars won't kill people, therefore their products are
| flawed and shouldn't be sold. In this case, the user is
| held liable for ensuring that the car is safely operated.
|
| By your logic, we would stop building roads or ask car
| manufacturers to stop selling cars because people cause
| accidents that kill other people.
|
| The condition "service providers must moderate content
| and adjudicate disputes" is what's flawed.
| Veserv wrote:
| No, the argument I am making is:
|
| The Rolling Stone thinks Facebook is providing a
| defective service (i.e. a service that is net harmful).
| If you agree with that contention, then you should also
| agree that Facebook should not offer that service. The
| comment I was responding to was making the generic
| argument that: "The problem is too hard. Nobody can make
| a non-defective solution. Therefore, the provider has no
| choice except to provide a net harmful service." That is
| a flawed argument.
|
| You may also disagree with the premise: "Facebook is
| providing a net harmful service", but that is independent
| of the invalidity of the argument presented which assumed
| it was providing a net harmful service, but they should
| be allowed to do so anyways due to the argument
| presented.
| shrimp_emoji wrote:
| Easy: stop moderating. Just allow everything.
| etchalon wrote:
| "Crime sure is hard to stop."
|
| "Well, just make everything legal."
| origin_path wrote:
| Well, but that is literally one of the arguments for
| legalizing drugs.
|
| The issue here is more like defining what crime is, though,
| not stopping it. Stopping stuff on FB is easy. Figuring out
| what to stop is hard. Figuring out if a disputed case
| should have been stopped is hard. For the 'real world' we
| have parliaments and courts, but FB has only the
| equivalent of police, not the other parts of the system. It
| is, in effect, a police state.
| etchalon wrote:
| Yes, some things are hard and impossible to get right
| 100% of the time.
| xdennis wrote:
| > "Crime sure is hard to stop."
|
| > "Well, just make everything legal."
|
| It's a bit of a straw man to say that GP meant we shouldn't
| enforce the law in the real world.
|
| GP wasn't very clear about what he meant, but presumably he's
| referring to not doing excessive moderating and instead
| relying on what the law mandates.
|
| More ~~laws~~rules, less justice. When you have a lot of
| internal policies, you're inevitably going to have
| ridiculous results such as this one. If you only follow the
| legal rules (which you have to) there's less unfairness.
|
| (Of course, you have to have some internal policies such as
| not allowing spam, but the point is: the fewer onerous
| rules, the better.)
| neodymiumphish wrote:
| What about only stopping (moderating) crime, then?
| kbelder wrote:
| Allow legal things. Block illegal things.
|
| It seems crazy that it's _impossible_ to find a major
| platform that has this policy.
| etchalon wrote:
| Pornography is legal.
|
| If Facebook allowed pornography, it would quickly
| overwhelm the platform due to engagement metrics.
|
| It would make the platform unusable.
|
| Vile hate speech? Completely legal.
|
| But its mere presence would turn away huge numbers of
| users.
|
| It would make the platform unattractive and hurt the
| business.
|
| It is in the platform's best interest to block otherwise
| legal things.
| fallingknife wrote:
| But this isn't true. Social media was pretty much a free
| for all (except for porn) until 2015ish and it was
| growing rapidly the whole time. The whole argument that
| this type of content will drive away users is completely
| contradicted by history.
| etchalon wrote:
| Social media was not a free for all before 2015. I ...
| don't know why you think that is true.
|
| This story is from 2013:
| https://www.pbs.org/newshour/classroom/2013/06/facebook-
| and-...
|
| Maybe super early it was a free for all, but it had a lot
| less users then. The impacts mattered less.
|
| These interventions became important as the companies
| grew, and needed to attract the largest possible number
| of users.
|
| There are places with VERY open content policies. You can
| join them today.
|
| Those places attract a niche audience and I'd wager
| always will.
| xdennis wrote:
| > If Facebook allowed pornography, it would quickly
| overwhelm the platform due to engagement metrics.
|
| You're arguing with hypotheticals even though real-world
| examples exist.
|
| Reddit has lots of porn and it's nowhere to be seen on
| the frontpage.
|
| Allowing something doesn't mean you shouldn't classify it
| and filter it.
| rat87 wrote:
| Seems like a good way to make Facebook much, much worse.
|
| It's not hard to see why no major platform has such a policy.
|
| Also I don't want people to get blocked for pirating
| stuff. Or weed.
| [deleted]
| Juliate wrote:
| The obvious legal response is/should be: don't make an
| algorithm take precedence if it is _that_ broken (incorrect,
| ineffective and unfair).
|
| Even from a product point of view, that's basic: if your product
| feature doesn't pass quality, you don't ship it.
| swayvil wrote:
| It's gotta be crowdsourced. Like reddit/hn voting. But smarter.
|
| Like where voting weights the value of peers' votes and such.
|
| I think it's the only way.
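|
| A minimal sketch of what I mean (the names and the weight-update
| rule are purely made up, just to illustrate the idea):
|
|     from collections import defaultdict
|
|     reputation = defaultdict(lambda: 1.0)  # every voter starts at weight 1.0
|
|     def weighted_score(votes: dict) -> float:
|         # votes maps user_id -> +1 (fine) or -1 (remove)
|         return sum(reputation[u] * v for u, v in votes.items())
|
|     def settle(votes: dict) -> str:
|         outcome = "remove" if weighted_score(votes) < 0 else "keep"
|         winning = -1 if outcome == "remove" else 1
|         for u, v in votes.items():
|             # voters who matched the outcome gain a little weight,
|             # voters who didn't lose a little, with a floor
|             delta = 0.1 if v == winning else -0.05
|             reputation[u] = max(0.1, reputation[u] + delta)
|         return outcome
|
|     print(settle({"alice": -1, "bob": -1, "carol": 1}))  # "remove"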
|
| Otoh, the true Lord of the Flies might emerge that way. Maybe
| democracy is inherently flawed. I dunno. Experiments are called
| for.
|
| How do we test social media designs?
| htrp wrote:
| Compositionality issues in AI
| Beltalowda wrote:
| Furthermore, you can publish "The effects of the Jewish
| population on the 1930s German economy" which can be either a
| genuine bona-fide analysis, or something which essentially
| boils down to "Jews destroyed German economy before the war".
|
| I think the "reddit model" where you have smaller communities
| with community mods works much better than the Facebook or
| Twitter model where there's one "global community". Not that
| reddit's moderation is perfect or that you can 100% rely on
| community mods, but overall, it seems to work much better.
| RajT88 wrote:
| > Furthermore, you can publish "The effects of the Jewish
| population on the 1930s German economy" which can be either a
| genuine bona-fide analysis, or something which essentially
| boils down to "Jews destroyed German economy before the war".
|
| When I was in college, I was looking around an FTP server and
| found a Holocaust denier book. So I read it. Well, more
| accurately, I skimmed it (though not lightly).
|
| It was exactly like this - it purported to be a sober view of
| history, well cited and no name calling. Literally none of
| the references were to anything real; it was all fabricated
| bullshit trying to push the reader to a particular conclusion
| (Jews are bad).
|
| Not being able to tell the difference is entirely the point.
| These bastards are sneaky.
| illuminerdy wrote:
| > "...it purported to be a sober view of history, well
| cited and no name calling. Literally none of the references
| were to anything real; it was all fabricated bullshit
| trying to push the reader to a particular conclusion (Jews
| are bad)."
|
| But the problem is that censorship assumes that you are too
| stupid to come to that conclusion yourself and must be
| protected from the "misinformation".
|
| It also robs the marketplace of the ability to hear the
| legitimate criticisms and the opportunities to expose said
| bullshit.
| RajT88 wrote:
| Companies have the right to handle this issue however
| they wish. Generations of politicians have ensured it, at
| least in the US. That's just how it is.
|
| With regards to:
|
| > But the problem is that censorship assumes that you are
| too stupid to come to that conclusion yourself and must
| be protected from the "misinformation".
|
| > It also robs the marketplace of the ability to hear the
| legitimate criticisms and the opportunities to expose
| said bullshit.
|
| Do you think the public on the first count, and the
| marketplace on the second count are doing a particularly
| admirable job here? Because, I don't. And that failure
| comes in no small part because of other vested interests
| who prop up said bullshit, because they see an
| opportunity to profit and gain more influence from it.
| How do you propose we address _that_?
| rat87 wrote:
| Reddit literally had /r/Holocaust controlled by Nazis, who would
| post Holocaust denial on it for years until reddit got too
| embarrassed.
| eli_gottlieb wrote:
| Seconding that reddit is very, extremely, explicitly
| antisemitic. It's really just /pol/ with slightly bigger
| words much of the time.
| arcbyte wrote:
| You miss his point completely. Whatever problems "reddit"
| has, they're limited to small communities. As much as I
| think reddit has a huge left bias, there are huge, huge
| numbers of right leaning communities as well.
|
| I don't read /pol, /publicfreakout or any of these other
| communities and that means I am completely unaffected by
| whatever nonsense they have going on.
| fnovd wrote:
| Sure, minority issues are often limited to minority
| communities. I don't read r/publicfreakout either, but as
| a moderator of r/Jewish I can see the impact it and many
| other subs have on our community. You have your standard
| malicious crossposting and trolls, which we have good
| enough ways to deal with. Antisemitism from other subs
| leaks and grows and we often get brigades of
| intactivists, conspiracy theorists, BHI-sympathizers, you
| name it. Reddit's new Crowd Control system helps but it's
| not perfect. Good luck if anything happens in Israel
| (which it frequently does), you may as well just shut the
| sub down for a day.
|
| Reddit shuts down other kinds of hate, the double
| standard is glaring. The fact that it doesn't impact you
| personally is so not the point.
| nradov wrote:
| The Reddit model allows paid foreign agents to become
| volunteer community (subreddit) moderators and then use that
| platform to sow division or push a biased narrative. How much
| do you think the Chinese government would pay to subtly
| emphasize or de-emphasize certain stories on a huge community
| like r/news or r/politics?
| bitxbitxbitcoin wrote:
| Ten cents.
| jgmmo wrote:
| Reddit has a very bad antisemitism problem. It's a cesspool.
|
| By no means is reddit 'the model'. In practically any sub
| except the explicitly Jewish ones, I will find an avalanche
| of antisemitism on any post that touches on
| Judaism/Israel/Jews.
| illuminerdy wrote:
| Reddit has a bad anti- _everything_ problem.
|
| That place is a cancer.
| fnovd wrote:
| It's bad now, and only getting worse. The moderators of top
| subreddits (like PublicFreakout) are openly in favor of
| marginalizing Jews, they'll just use the word Zionist
| instead. You can report a comment like "Jews don't deserve
| to live" and Reddit will automatically respond within a few
| hours saying the comment didn't violate their content
| policy. You can visit the subreddit AntisemitismInReddit
| for hundreds more examples.
| philwelch wrote:
| If anything you're understating the problem.
|
| The problem with "Jews destroyed German economy before the war"
| is that it's extremely vague and difficult to verify. There's
| no good basis for the claim, but it's not even a historical
| detail that could be easily verified; it's more of an
| overarching theoretical opinion.
|
| As for "Jews were murdered by Germans during the war", sure.
| That's an extremely well documented fact. They were also
| murdered by Romanians, Lithuanians, Bulgarians, Hungarians,
| Ukrainians, and other collaborators, but in context, we know
| that the Germans were organizing the whole thing. We also know
| that there's a different context today in 2022 where some
| people might want to emphasize and others might want to
| minimize the complicity of Ukrainian collaborators.
| lurquer wrote:
| >> As for "Jews were murdered by Germans during the war",
| sure. That's an extremely well documented fact.
|
| It's sad that you're blind to the fact that that statement is
| just as 'racist' and untrue as the original.
|
| Jews were not murdered by Germans.
|
| Rather, some Jews were murdered by some members of a
| political party that was primarily -- but not exclusively --
| German.
|
| The overwhelming majority of Germans never murdered anyone.
| end_of_line wrote:
| Surely nobody in Dachau knew what had been happening in one
| of the very first concentration camps, nearby, on a daily
| basis. It originated a few years before the Second World War.
| Buchenwald, Mauthausen, Gross-Rosen (a typical work camp, but
| no less lethal than a concentration camp; the site was German
| before the Second World War): all German, on German soil,
| constructed by German people.
| yamtaddle wrote:
| You're attacking a strawman. Not only is a generous reading
| both entirely true _and even compatible with your
| "corrections"_, the poster went on to add nuance such that
| it doesn't require any generosity whatsoever to read it
| that way.
| eqdw wrote:
| > The problem is that no automatic method we have now will
| catch context.
|
| I disagree. The problem is that nobody is willing to be
| realistic about the limitations of automated moderation and
| proceed accordingly.
|
| If we can't create an automatic method that catches context,
| the solution isn't to bemoan that AI can't magically do what we
| want. The solution is to remove the rules that require AI to
| understand context in the first place, because it is
| fundamentally outside of our technical ability, and any attempt
| to achieve it will fail.
|
| The problem is people who think that context-based censorship
| is reasonable for a massive platform. It simply is not. It is
| reasonable at an individual level. It is reasonable at an
| interpersonal level. It's even reasonable at a small-group
| level, where specific individual human beings who are invested
| in the community can be aware of these context issues.
|
| It is not reasonable at Facebook scale, full stop. Facebook
| should not be in the business of deciding to ban things like
| this. That is a responsibility that belongs at a lower level.
| What does that look like in practice?
|
| If an individual posted it on their wall:
|
| * That individual uses their judgement and chooses to post it
| or not
|
| * The people who see it use their judgement and click the block
| button if they don't like it
|
| If an individual posted it in a small group:
|
| * The group can socially police such actions by commenting that
| they are upset by it
|
| * The group's administrators can privately reach out to the
| person who posted it, explain that they can't post such things
| in that group, explain why, and explain what actions they could
| take to remain in good graces
|
| * The group's administration can make a judgement call and
| remove the post, not on the basis of crude keyword detection,
| but on the basis of human understanding
|
| If an individual posted in a large group:
|
| * The large group can adopt clear and unambiguous rules that do
| not require context to administer, and enforce them accordingly
| on a case-by-case basis
|
| * The large group can pre-commit to not dealing with such
| issues, and require their members to deal with it privately,
| like human beings
|
| Trying to automate this process will always fail, and it will
| cause massive false-positive and false-negative issues as it
| does so. Engineers used to understand these concepts when I
| first entered industry 20 years ago. It's very disappointing to
| me that they either can't or won't now.
| chrisbrandow wrote:
| Bizarre that they stuck with the ban after an appeal. Seems like
| a pretty obvious thing to fix
| Pulcinella wrote:
| This is basically the ad for the movie now. I'd never heard of
| it until now, but now I'm interested.
|
| "Come see the anti-Nazi movie Facebook doesn't want you to know
| about!"
| Aulig wrote:
| My ad account got randomly shut down a few days ago too. It was
| reapproved after review, but I was never given a reason.
|
| I hate that big companies can get away with this.
| mola wrote:
| I sympathize with your frustration.
|
| I do wonder though... a lot of the ability for these companies
| to even exist at this scale comes from using a small number of
| people in the loop.
|
| So was it worth it to let these behemoths prosper so we can get
| these services for a low price, but suffer these sorts of
| consequences?
| cmrdporcupine wrote:
| It's not worth it and there needs to be a real alternative.
| Unfortunately at the scale things have gotten to, it might
| take regulatory action. Because the market is not going to be
| able to compete here.
| fullshark wrote:
| These types of editorial/advertising approval decisions happened
| all the time before the internet took over media. It's just that
| FB/Twitter get crucified when they do it because they pretend not
| to be publishers but utilities.
|
| Time to admit all these hosting sites are just publishers that
| use ML models as editors and their users as contributors and
| Section 230 needs to be rewritten to account for it.
| darthrupert wrote:
| Recently Farcebook has started recommending incredibly toxic
| anti-SJW shit to me for no apparent reason. Either they turned
| the "let's incite toxicity" knob to 11 or somebody is seriously
| abusing them.
| AlexandrB wrote:
| Indeed. By US standards I'm somewhere left of Bernie but I keep
| getting recommended Jordan Peterson reels on Instagram. Not
| sure what's going on.
| 20amxn20 wrote:
| cauefcr wrote:
| Ah yes, the toxicity of "let people be themselves in peace".
| ben_w wrote:
| While I have _never_ directly encountered toxic social
| justice activism (not even an ex's mother whose activism
| made me realise "Champagne socialist" was more than just a
| right wing straw-man), there is no cause so pure it cannot
| attract numpties.
|
| There is a video (don't know if it's real, staged, missing
| context, a one-off, whatever) of some activists going to
| a restaurant and apparently cajoling diners to agree with
| them by using the slogan "silence is violence".
|
| (The trouble with stuff like that, in any cause, is that it gets
| amplified by both toxic opponents and socially inept
| supporters.)
| dekhn wrote:
| Just yesterday, Facebook started including in-stream ads for
| Creation Research. I'm a scientist whose work is on evolution. I
| also see several other ridiculous ads that have no relevance to
| me.
|
| I think it's more likely the folks running the product machine
| learning recommendation engines are asleep at the wheel; after
| all, Mark lost interest in his core product to promote AR, so
| why would the folks running the core product care?
| origin_path wrote:
| Eh, that seems pretty relevant: the ads are about evolution,
| you work on evolution related stuff. It's just the ad
| targeting engine doesn't understand that you're going to
| fundamentally disagree with the concept being advertised.
| That's probably an edge case though. 99% of the time
| irrelevance for advertising means you just have no need for
| the product being advertised, not that it's the polar
| opposite of your worldview. You probably don't remember the
| ones that are merely useless, though.
| dekhn wrote:
| I do expect that recommendation engines should pick up on,
| and not show me, advertising content that is fundamentally
| nonscientific (and show those ads to other folks whose
| profiles are more consistent). It demonstrates that the
| recommendation algorithm can't differentiate between two
| clusters that use similar words, but different concept
| vectors.
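|
| A toy illustration of that conflation with a pure bag-of-words
| similarity (the snippets are made up; a real system would be
| comparing learned embeddings, but the failure mode is the same):
|
|     import math
|     from collections import Counter
|
|     def cosine(a: Counter, b: Counter) -> float:
|         dot = sum(a[w] * b[w] for w in a)
|         na = math.sqrt(sum(v * v for v in a.values()))
|         nb = math.sqrt(sum(v * v for v in b.values()))
|         return dot / (na * nb) if na and nb else 0.0
|
|     evolution_post = Counter("the evolution of species by natural selection".split())
|     creationism_ad = Counter("the evolution of species could not happen by chance".split())
|
|     # Plenty of shared vocabulary, so a word-level targeter lumps the
|     # two together; separating them needs concept-level vectors.
|     print(round(cosine(evolution_post, creationism_ad), 2))  # ~0.63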
| origin_path wrote:
| Well, you don't know how it was targeted. Maybe they
| don't want to sing to the choir so such ads might be
| targeted at people who are interested in evolution. Still
| it seems unlikely that the scientific-ness of something
| can be determined by an AI model at all, let alone just
| word vectors. What is and isn't science can be hard to
| rigorously pin down, that's why pseudoscience is
| problematic in the first place.
| dekhn wrote:
| These are all reasonable points, but since I have lots of
| experience building recommendation systems at places like
| google, I have a pretty good understanding of what the
| embeddings are capable of learning (even so, Google News
| still does the same thing occasionally).
| hugh-avherald wrote:
| For me it's mostly Elon Musk-endorsed, government-backed
| $250k/yr guaranteed return investments in bitcoin.
| ben_w wrote:
| Possibly, but not the only explanations -- my feed is now more
| than 50% suggested/promoted content, including Fox News Tampa,
| even though I'm living in a different city in a different state
| in a different country in a different continent and almost all
| of my friends are further to the left than even the left-most
| US Democrat.
|
| If I had to guess, the bottom has fallen out of the advertising
| market and every business reliant on selling adverts is getting
| desperate.
| somat wrote:
| It reminds me of stores, you can get a feel for how well the
| store is doing by how aggressively they try to shovel the
| loyalty card on you.
|
| See also: magazines, watch the signal to noise ratio plummet
| as they try to prop up revenue with more and lower quality
| ads.
|
| I get why they do it, they are in a death spiral, desperately
| trying to find that one thing that will save them, mostly I
| think it just hastens the end as the bad experience turns
| people away.
| Bakary wrote:
| I always wondered why physical shops offered a much worse
| experience in many ways than online ones, even taking into
| account the obviously insurmountable logistical advantage
| of the latter. I mean you'd expect them to at least try to
| have better service?
|
| The answer of course is that their only choice is to focus
| on their captive audience (people who can't or won't buy
| online) and extract as much as possible until the music
| stops.
|
| A similar thing is ironically happening to Netflix
| noneeeed wrote:
| The FB feed has become utterly drenched in suggested content,
| 99% of which is trash.
|
| I'm sure they will see some short term bump but I've got to
| wonder if this is finally the end for them. I'm certainly not
| bothering to check it any more.
| Bakary wrote:
| Facebook still has billions of users. It's just that its
| hope lies outside of the West
| lupire wrote:
| The money isn't outside the West...yet.
| dreen wrote:
| I wonder if marketers could start exploiting this. Get something
| obviously innocent banned in a stupid way by the algo, then use
| the outrage effect for promotion. They already kind of did this
| with people destroying products of companies that made certain
| statements; this seems like the next step.
| mjburgess wrote:
| Ghostbusters 2016's supposed "anti-feminism" campaign is an
| example of this. It's very common and has been around forever,
| since at least the newspaper era.
| SamBam wrote:
| Is there evidence that the misogyny was faked by the movie
| industry?
|
| Or are we just assuming that all such reactions (e.g. the
| anti-black Ariel folks) are faked to generate support?
| mjburgess wrote:
| The reactions aren't necessarily fake; it's much more that
| they hunt out and push those (very small in number) for the
| sake of publicity, and to offset and undermine the
| credibility of critics.
|
| No one really hears any attacks against GB 2016's critics
| now, but before release, they were all painted as tantamount
| to racists and sexists.
| dogleash wrote:
| > are faked to generate support?
|
| No need to fake it. Just skew your presentation of reality
| to fit your narrative.
|
| https://youtu.be/UWROBiX1eSc?t=193
| yuan43 wrote:
| I can only hope that Facebook follows this policy to the letter
| now and into the future. In fact, it would be a gift to humanity
| to widen the scope to any and all content deemed offensive to
| anyone. Ban it all.
|
| Nothing will hasten the downfall of the monstrosity that Facebook
| has become faster than the strictest possible adherence to and
| advancement of this policy.
| liampulles wrote:
| Why not put the power in the hands of the users? If a person does
| not want to see a film that deals with race (generally), let them
| go and flip a switch in their settings to hide these from their
| view (and similar for whatever other subject may be of potential
| concern).
| lupire wrote:
| The issue has (almost) nothing to do with the film content. The
| issue is with the verbiage in the title, used in the ad. And
| it's not about who wants to see the content, it's about
| censoring potential race warriors.
| ThrowawayTestr wrote:
| The real issue here isn't that an algorithm flagged it, but that
| a "human" reviewed it and upheld the ban. Either a human didn't
| actually review the film or there's a serious lack of training.
| [deleted]
| cmrdporcupine wrote:
| I think something has happened behind the scenes at Facebook
| where there are actually not really humans doing the secondary
| "review" or appeals process.
|
| See my other comment on this article for another (less
| important) example. I'm guessing they're simply passing at
| least some % of them through a secondary automatic
| classification system.
|
| Why would you let fairness get in the way of revenues?
| tlogan wrote:
| The biggest problem with this AI approach is that actual bad ads
| (scam, spreading hate, etc.) are getting thru.
| job_suche wrote:
| I mean, that is also the first thought I had when I read the
| title, even before delving into the article. If instead of
| "beautiful blue eyes" it was "silky smooth pale white skin",
| would it then be different?
|
| For once I think Facebook is right. That's a poor choice of film
| title, especially considering the theme of the film.
| ReptileMan wrote:
| There were lots of people with beautiful blue eyes and silky
| smooth pale white skin that perished in the Holocaust...
|
| Eyeballing the difference between an Ashkenazi Jew and any other
| person from Central or Eastern Europe is mission impossible.
| job_suche wrote:
| Why are blue eyes in particular so beautiful? Is that also
| true of blond hair and fair skin?
|
| These white beauty standards have the same racist origins as
| the racism that is the main theme of this film. In any other
| context it would be innocent enough and of course everyone is
| allowed to have their own personal opinions on beauty, but in
| this particular context, it is imho in poor taste.
|
| Probably the director did not think of it in this way, and I
| do not fault him, but it can be interpreted in this way.
| lupire wrote:
| This is literally the core point of the damn movie. I
| assure you that the director was aware.
| mynameisvlad wrote:
| > There were lots of people with beautiful blue eyes and
| silky smooth pale white skin that perished in the
| Holocaust...
|
| Like, for instance, the person the film is about. Who had
| blue eyes and was killed in the Holocaust.
| etchalon wrote:
| I assume the title is making a deliberate point.
| prvc wrote:
| Title (and headline of TFA) is misleading, as Facebook didn't ban
| a film; it barred a user from its ads program based on
| the film's title. Will be interesting to see if the lawsuit goes
| anywhere.
| mynameisvlad wrote:
| It banned a film from its ad program. And the user who tried to
| advertise it. And the composer of the title track.
|
| I mean, they're not banned from Facebook, but the title didn't
| say that either. It said that the company Facebook banned a
| film. Which it did.
| lupire wrote:
| They can presumably advertise the film under a different
| title, with a clickthrough to info with the proper title.
|
| Facebook should have better credibility vetting, for things
| like movies distributed by recognized distributors with a
| good reputation.
| prvc wrote:
| Interestingly, according to the IMDB, the "blue eyes" title
| is already the second one used for the film. Why they'd
| choose something so dicey by Facebook's standards, when
| Facebook ads were supposedly such a crucial part of their
| business plan is unclear.
| BrainVirus wrote:
| This is not an exception. This is the norm. All major social
| media platforms operate under the assumption that it's better to
| ban 100 innocent people than to let one "bad" person publish
| something. The scale of censorship is mind-boggling. The scale of
| denial and ignorance about censorship on HN is even more
| astounding.
|
| The big lie of online censorship is that controversial cases
| where most people think the person "deserves" to be banned are
| unrelated and totally separate from cases like this one, where
| it's obvious the ban is preposterous to anyone possessing common
| sense. _They are directly related._ They are created by the same
| systems built under the same assumptions with the same mentality.
|
| This is not going to be fixed by "better" algorithms, because
| it's not an issue with the quality of the algorithms in the first
| place. The algorithms _seem_ low-quality to you because you're
| judging them by a standard the company running them didn't use.
| yandrypozo wrote:
| > The scale of denial and ignorance about censorship on HN is
| even more astounding. After reading some of the comments here
| you're absolutely right :(
| jimbob45 wrote:
| Use two line breaks to put your content on a new line. If you
| use one, it will all end up on the same line, as happened to
| your comment here.
| commandlinefan wrote:
| > The scale of denial and ignorance about censorship on HN
|
| Even here, where you would think people would know better, you
| still see people insist that it's not censorship because it's
| not the government doing it.
| tl wrote:
| Facebook _is_ the government. PRISM [1] makes it explicit in
| Facebook's case, but any corporation beholden to a
| government for its continued operation is a policy arm of
| that government.
|
| [1]: https://en.wikipedia.org/wiki/PRISM
| semiquaver wrote:
| Just curious, can you name some large companies for which
| this doesn't hold? Or are you saying that all corporations
| are arms of the government?
| godelski wrote:
| It is the inverse Blackstone Ratio!
|
| But I think we must also look why they end up this way. My
| thoughts are that a small portion of the public reacts so
| strongly and loudly to any minor mistake that it turns into
| national conversations. Ironically through their very platforms
| and algorithms that optimize for engagement (fighting). So we
| then paint these small populations as representative
| populations and make mountains out of molehills.
|
| I actually do believe a better algorithm would alleviate some
| of the issues. But I do agree that it is not a cure-all. The
| problem appears to be quite complex and many aspects are driven
| from or coupled with factors outside the control of social
| media platforms.
| origin_path wrote:
| A small portion of the public will react to almost anything
| in any way. The censorship regime here must surely have been
| created by more complex factors, like maybe:
|
| - Dependence on advertising: exposure to advertisers who feel
| like their brands appearing next to anything controversial or
| upsetting will cast a negative halo on their brand.
|
| - Ideological uniformity amongst journalists, who highlight
| certain kinds of outrage and sink others.
|
| - A need for moral validation amongst tech company employees.
|
| And we could think of many others. Trying to distill a root
| cause is hard but it looks like the everything-is-connected-
| to-everything mentality appears frequently. Is it really the
| case that an advert appearing next to something objectionable
| makes people think less of the brand? Probably not but it
| seems to be a common belief. Is it really the case that
| Facebook is to blame for any video posted on its platform?
| Probably not but it's a common belief. In a thread just a few
| days ago there was a former Twitter employee arguing that
| Facebook was somehow complicit or at fault in the Rohingya
| genocide, which is a good example of this mentality.
|
| You could go even deeper and ask, is this everyone-is-
| culpable-for-everything mentality a genuine belief, or is it
| a possibly sub-conscious cover for some other agenda? That
| is, this argument seems to work on people sometimes, so it is
| deployed as a useful tool to advance one ideology or another?
| Who can really say.
| fallingknife wrote:
| The advertising point is a red herring. Adtech companies
| have gotten very good at targeting exactly what content
| their ads will and will not appear on. When they censor it
| is because they do not want you to see it.
| matrix_overload wrote:
| The platforms do it because of armies of miserable people
| online looking for an outlet to their outrage.
|
| So the platforms are stuck trying to appease the angry mob,
| while spending as little resources on it as possible (hence,
| shitty algorithms).
|
| And I think I even know where the mob came from. Humans
| apparently have an intrinsic need to have some goals to be
| passionate about. And since the "economies of scale" have
| optimized away individual decision-making as outlets for
| passion, we see a surge of the good-old tribal instincts.
|
| Cancel culture is certainly progress from stabbing and eating
| the members of the competing tribe, but it is still a manifestation
| of the same kind of instinct that will bring nothing good until
| people find (or, likely, create) a bigger problem to worry
| about.
| vegetablepotpie wrote:
| To me, what you talk about with "economies of scale" is
| similar to the deprivation of access to the _power process_
| people experience in industrial society that Ted Kaczynski
| described in _Industrial Society and its Future_. The work
| that our species did to stay alive, such as finding food, is
| already handled by society, and the ability to influence the
| direction of our lives has been exported to corporate board
| rooms and legislative assemblies. As a result people now
| engage in _surrogate activities_ to satisfy their needs to
| engage in the power process. This includes things like
| entertainment media, subcultures, and sports. Social media is
| another instantiation of surrogate activities. I think it's
| no wonder that moderation is not benefiting the general
| public.
| nobody9999 wrote:
| >So the platforms are stuck trying to appease the angry mob,
| while spending as little resources on it as possible (hence,
| shitty algorithms).
|
| Actually, I'd posit that the platforms are drooling with
| anticipation and glee at being able to monetize that angry
| mob. Anger, fear, and outrage boost engagement after all.
|
| And that's what platforms want, because increased engagement
| means increased ad revenue. And that's not exactly breaking
| news either.
| drewcoo wrote:
| > armies of miserable people online looking for an outlet to
| their outrage
|
| I just call them "lawyers."
| atchoo wrote:
| There is more to it than that.
|
| Look up the use of Facebook in the genocide of Rohingya in
| Myanmar. I see multiple comments in this thread about
| snowflakes and cancel culture but they miss that social media
| is a profoundly powerful instigator of race hate and
| violence.
|
| This is no defence of Facebook's actions here, but anyone
| suggesting a hands-off approach without policing racial
| language needs to be conscious of the harm it has already
| led to.
|
| https://www.nytimes.com/2018/10/15/technology/myanmar-
| facebo...
| gretch wrote:
| Paywalled, so I can't directly address the points in the
| article.
|
| Did humans not have thousands of years of genociding each
| other long before Facebook? It's hard for me to believe
| that if only Facebook did not exist, everything would be
| fine in Myanmar. No, almost certainly they would have found
| a way to kill the other tribe.
| Spivak wrote:
| > The platforms do it because of armies of miserable people
| online looking for an outlet to their outrage.
|
| I find this funny because I initially assumed you were
| talking about the seemingly infinite number of people posting
| the "straddle the line like it's a Hitachi wand" flamebait
| that typically gets axed. There's no way to make everyone
| happy, but I think there's a chance we can broadly agree that
| there's very little harm done by being overzealous when it
| comes to using the banhammer on outrage porn. I mean HN
| basically does the same thing but on a smaller scale.
| nomel wrote:
| > trying to appease the angry mob
|
| I think this is half correct. They do it to prevent the angry
| mob from reaching out to advertisers. I don't think they care
| about angry mobs, themselves.
| cies wrote:
| > All major social media platforms operate under the assumption
| that it's better to ban 100 innocent people than to let one
| "bad" person publish something.
|
| Unless they pay for their publications (aka ads), in which case
| FB has a history of letting the most horrible shit slip
| "regrettably through their screening team". Bollocks.
| classified wrote:
| Did you even read TFA? This is about paid advertising for
| the film on FB.
| etchalon wrote:
| This will get overturned shortly due to press attention.
|
| Which, sadly, is the only way large scale network moderation can
| work.
|
| Get it "mostly" right, but often wrong.
|
| In most of the cases you get it wrong, only a few people will
| notice. You'll never hear about it.
|
| Occasionally, you'll get it so wrong a lot of people notice and
| you will hear about it. Then you can fix it.
|
| Rinse and repeat.
| waffletower wrote:
| Facebook has an explicit policy regarding Holocaust denial
| content: https://about.fb.com/news/2020/10/removing-holocaust-
| denial-... Given their human-reviewed decision to "permanently
| restrict" the advertising of "Beautiful Blue Eyes", this policy
| appears to be a disingenuous public relations stance. As director
| Joshua Newton contends, Facebook's tunnel vision adherence to
| their keyword flagging algorithm acts as a significant agent for
| the Holocaust denial movement.
| LatteLazy wrote:
| First we demanded they censor things. Then we got upset they
| censored things.
|
| Turns out, there isn't one global human standard on what is
| worthy of being censored. And this is one reason people only
| think they support censorship, when they really don't...
| NayamAmarshe wrote:
| Facebook and Censorship are synonymous
| 20amxn20 wrote:
| obayesshelton wrote:
| This is because Facebook is a Publisher. They will never admit it
| but FB/META can decide what users can see. This is what a
| Publisher does.
|
| The issue we have is that if you let the users decide what is
| shown on any platform it would be quite a mess.
| nova22033 wrote:
| https://www.techdirt.com/2020/06/23/hello-youve-been-referre...
| criddell wrote:
| I think you are responding to something the GP never said or
| implied.
|
| Meta wants you to think you are in charge of what you see but
| it isn't true and hasn't been for a while now. The algorithm
| decides. It acts as an editor pulling together a site
| tailored to whatever it thinks will maximize Meta revenue.
| skizm wrote:
| Devil's advocate here (I won't comment on if I agree with the
| argument or not), but this article seems to miss the point of
| the section 230 debate. All of the stuff here is about what
| the law is now. The objection most people have is that it
| shouldn't be like this and we need to change or remove
| section 230. It specifically allows sites to be biased while
| also not holding them legally accountable for anything they
| choose not to remove. Once a social media site hits a certain
| scale, they can completely control any narrative they want.
| Whether this should be allowed is the real question (imo).
| nradov wrote:
| Supreme Court Justice Clarence Thomas has suggested
| extending common carrier laws to cover social media. This
| would essentially prevent them from censoring any legal
| content, much like legacy telephone companies can't block
| users from discussing certain topics. Such a change would
| require an Act of Congress to amend or replace the
| Communications Decency Act. And there are potential First
| Amendment concerns in terms of forced speech.
|
| https://www.npr.org/2021/04/05/984440891/justice-clarence-
| th...
|
| So it's an interesting policy idea but I'm not sure whether
| it would be better or worse than the current state.
| rocket_surgeron wrote:
| >So it's an interesting policy idea but I'm not sure
| whether it would be better or worse than the current
| state.
|
| It would be so much worse, unimaginably, mindbogglingly
| worse it is inconceivable that any rational human being
| would even consider it for a fraction of a femtosecond.
|
| Which is why conservatives are behind it.
|
| Telcos are subject to regulation because they use public
| resources (land, spectrum, infrastructure) and there is
| no association of their identity with the endpoints of
| the carried content. You don't know, and it is often
| impossible to know, the identity of all of the businesses
| that are transmitting your data from end-to-end so you
| cannot form individual relationships with them.
|
| Also, you often have no choice in what telco provider
| services you as an end-user due to monopoly grants. You
| always have a choice of an alternative to Facebook
| or Twitter.
|
| Telcos are the privately-run roads over public lands that
| take you from business to business.
|
| What the conservatives want to do is turn businesses into
| roads, because they are mad that their bigoted messages
| keep getting moderated.
|
| Never mind the fact that "social media" has a definition
| so broad that it could encompass any entity that solicits
| or receives content from any third party.
|
| Anyone who doesn't believe that designating social media
| sites as "common carriers" is going to eventually lead to
| a newspaper's website that invites readers' comments,
| letters to the editor, and op-ed submissions (sOcIaL
| mEdIa, BrAh!) being forced to publish articles the
| editors disagree with is an idiot.
|
| If you run a website, you and only you get to choose what
| is hosted on it. You have all of the control. It can be
| arbitrary, capricious, unfair, illogical, hateful,
| bigoted, inclusive, exclusive, or any combination of
| those. The rules don't have to be published. You are the
| rules. It is your website.
|
| It doesn't matter how big it gets, if it is Facebook, or
| the ACLU, or the KKK, or a personal blog about rock
| collecting.
|
| "But Facebook is the NEW public square" is bullshit. It
| is not, has never been, and will never be "a public
| square".
| kcplate wrote:
| > to miss the point of the section 230 debate. All of the
| stuff here is about what the law is now
|
| This is something that kind of irritates me about Mike
| Masnick (who I generally enjoy reading). He always seems so
| focused on the current state of the law that he seems blind
| to the debate.
| scohesc wrote:
| I tend to agree with you - getting rid of section 230 or
| limiting it to certain types of content providers based on
| some certain metric would be ideal.
|
| I've been doing some thinking and it seems to me like
| Section 230 being repealed would be disastrous for smaller
| websites/startups/content creators.
|
| If I create a small-scale forum - I am now legally
| responsible for what is posted on that platform and can be
| sued repeatedly into the ground until I'm not able to
| continue running my business/forum/whatever. If someone has
| the capital to do that and a mission to remove me from the
| internet, then they're able to.
|
| Repealing Section 230 has that "monkey's paw" style penalty.
| Sure, it would be nice to go after social media companies
| that continuously abuse their power and act on behalf of
| the government to control what you can say, when you can
| say it, but then you'd also potentially open up a bunch of
| legal trouble for smaller outlets.
|
| If there was a way to enact something where above a certain
| threshold, section 230 no longer applies to your company
| and you need to be responsible for the content on your
| platform - maybe that would be ideal, but I don't know what
| that threshold would be - profits, incorporated vs LLC vs
| sole proprietorship, I'm not sure.
| nradov wrote:
| So what? "Publisher" isn't a defined legal category in this
| context. Even if Facebook is a publisher that doesn't create
| any legal obligations.
| jscipione wrote:
| Publishers are liable for any libel they publish while
| platforms are not, that's the legal distinction. When
| Facebook chooses to editorialize their content like they did
| in this instance they forfeit the legal protections of the
| Communications Decency Act. The first Amendment limits the
| power of the federal government to provide liability
| protection for publishers.
| nradov wrote:
| Bullshit. There is no such liability provision in the
| Communications Decency Act. You should read the actual text
| of the law instead of making things up.
| jscipione wrote:
| "No provider or user of an interactive computer service
| shall be treated as the publisher or speaker of any
| information provided by another information content
| provider" (47 U.S.C. SS 230)
| nradov wrote:
| Yes that is part of the law, but it doesn't mean what you
| claimed. You're completely misinterpreting it. There is
| extensive case law in this area.
| yamtaddle wrote:
| So what _would_ it mean if Facebook were treated as the
| "publisher or speaker" of information they... well,
| publish?
|
| This passage does _something_ by preventing them from
| being so classified. Right?
| nradov wrote:
| That's a meaningless question because it's tangential to
| the Communications Decency Act. Censoring content or
| changing a social media feed algorithm isn't classified
| that way. You might not like the law but that's how it
| works based on the plain language of the statute and
| confirmed through extensive case law.
| yamtaddle wrote:
| Asking about the effect of a passage _in the
| Communications Decency Act_ is a "meaningless question"
| and is tangential to the Communications Decency Act?
| nobody9999 wrote:
| >Asking about the effect of a passage in the
| Communications Decency Act is a "meaningless question"
| and is tangential to the Communications Decency Act?
|
| Given the extensive case law[0] generated since the
| passage of the CDA, yes it is pretty meaningless.
|
| Because that case law _clearly_ defines what those terms
| mean and they don't mean _aggregators_ like Facebook.
|
| It's reasonable to question, given the moderation choices
| made by entities like Facebook, how much impact they may
| have on public discourse.
|
| However, the meaning of the text of the CDA, and
| especially Section C(1) has been clarified many, many
| times and doesn't mean what you think it means.
|
| Whether that's right or wrong/good or bad is a
| _different_ question. But the question you asked[1],
| given the law and its application over the past 25 years
| or so, _is_ pretty meaningless in the sense that it has
| been repeatedly answered (and that answer is 'no') over
| that quarter century.
|
| [0] https://en.wikipedia.org/wiki/Section_230
|
| [1] https://news.ycombinator.com/item?id=32869002
| yamtaddle wrote:
| You're saying the answer to:
|
| > This passage does _something_ by preventing them from
| being so classified. Right?
|
| Is no, the passage _doesn't_ do anything (anymore)?
|
| [EDIT] I think you think I think some stuff I don't. I
| don't even know what you're talking about when you claim
| it doesn't mean "what I think it means". You claimed the
| passage doesn't mean what _another_ poster thinks it
| means, I asked what it _does_ in fact mean, i.e. what
| would happen if the passage were absent, and then you
| told me that question was irrelevant (why?), and then
| this post, which also seems to be addressing some other
| person or something... but maybe is addressing what I
| actually asked? I can't tell.
|
| [EDIT AGAIN] Hell, the wikipedia article you cited even
| seems to back up the (other poster's) interpretation you
| were claiming was wrong. I am so confused.
| liampulles wrote:
| There are UX solutions to the problem of a messy feed. Let me
| set my own exclusion filters if I so choose.
| shabbatt wrote:
| Book burning used to be a thing. Now it's content moderation.
| bwb wrote:
| I was just talking to an author who published an ad on Facebook
| for their sci fi book. They had the word "beat" in the ad. The FB
| algo said that the word incites people to violence, banned her
| account for 2 months, and kept all the ad money.
|
| Funny enough, her book is about the dangers of an algorithm-based
| AI supersystem...
|
| Facebook is the worst.
| cranium wrote:
| It's fearsome how they filter such words without looking at the
| context or giving the benefit of the doubt. Now, you have to
| proactively reword your ideas to make them fit the invisible
| mold. How has newspeak been working out for public discourse?
| i_like_apis wrote:
| Double thumbs up good.
| Kiro wrote:
| What was the full sentence? I've marketed a lot of games on
| Facebook with much worse words (kill etc) and never had a
| problem. Also, kept money how? Facebook charges you in arrears.
| math-dev wrote:
| Unfortunately on the internet the onus is on the defendant to
| prove innocence. I hardly buy the top comment as true but
| everyone eats it up here since it goes with their narrative
| bwb wrote:
| Serious? It is entirely true, why would I share that
| otherwise? I've been on HN for a long time, see my
| background :)
|
| Your account is totally blank and joined in 2021. You also
| submitted an article about rethinking app development at
| FB. In a comment you also say you work for a large
| company.
|
| Do you work for Facebook?
| math-dev wrote:
| I don't work at facebook but based on your response I am
| more likely to believe you now. Sorry if it caused any
| angst, wasn't intended.
|
| However, it doesn't change the fact that a lot of things get
| accepted at face value if they align with someone's
| view of things.
| bwb wrote:
| No worries, I just was kinda stunned you would imply I
| had any motive to not share something accurate. It was a
| convo I had this morning with an author who is a friend.
|
| (It is just one piece of data, and people are good at
| finding data that supports their viewpoint of course.
| Given all the scandals FB is under and problems on this
| thread, I do think it supports a narrative of FB having
| massive problems around ethics and moral actions.)
|
| Here is a fun story :)
|
| I have an ad account at FB for a company I closed in
| early 2020 due to Covid.
|
| I wanted to delete the account, but FB makes it
| impossible to do that. I message their support and they
| tell me they can remove it, but they need my ID and a
| handwritten letter. I am stunned. A handwritten letter???
| How does that achieve anything :)
|
| So I write out a short note with a bit of snark about a
| tech company needing a handwritten letter, take a picture
| and send it to their support chat/ticket along with my
| ID. They do not like that I was snarky and refuse to do
| anything, even after I remove some of my snark and resend
| it.
|
| Thus, I still have the account and 14 support replies
| later they still refuse to help me.
|
| (note, they didn't want a letter sent to them via the
| post office; they literally wanted me to write it and
| send it to them. So weird...)
| math-dev wrote:
| That's crazy to hear! It really feels like the Wild
| West with these large tech companies - consider in
| contrast how much regulatory and compliance scrutiny a
| bank would face; they would never dream of behaving like
| petulant little kids. I guess the more fines big tech get
| and the more regulations enter the space, the faster they
| will clean up their act.
| koolba wrote:
| The details matter big time. There must be tons of ads with
| phrases like "_Beat addiction_", "_Beat the crowds_", or
| even anything having to do with music.
| scarmig wrote:
| I don't know anything at all about FB advertising, but is there
| any way someone could use canary accounts to roll out an ad to
| prevent this? I.e. start with a small ad buy in a low
| follower/whatever account, wait 24 hours to see if it gets flagged,
| and if it doesn't trigger any reviews, roll it out to the
| main/real account?
|
| Not that that's practical for a mom and pop shop.
| erehweb wrote:
| Sounds like a good way for canary and main to get banned for
| some policy violation.
| prepend wrote:
| The weirdest is the email message says they reviewed and
| upheld.
|
| Is the review not human? Do they run it through the same
| algorithm a second time?
|
| Do they lie about the review?
|
| Is the reviewer so out of touch they don't recognize real
| violence vs just the use of "beat?"
| Tao3300 wrote:
| The human review is probably someone who doesn't understand
| the language verifying that a word is present for fractional
| cents on Mechanical Turk or something.
| michaelmrose wrote:
| It could well be that they are paid very little, that the
| majority of reports are true positives, that they have a
| quota to make, and that they expect no consequence for false
| positives. If it's outsourced they may have a bad grasp of
| the language as well.
| z9znz wrote:
| > no consequence for false positives
|
| This could be the key point. Penalties for letting
| something bad slip past, but no penalties for falsely
| flagging. It's a pragmatic solution, but it is almost by
| definition inhuman.
| lvxferre wrote:
| >Is the review not human? Do they run it through the same
| algorithm a second time? Do they lie about the review?
|
| Yes, yes, and yes. There's a context-illiterate AI deciding
| what you can say or not, even if the latter heavily depends on
| context.
|
| And just like it'll trigger a bazillion false positives,
| it'll also trigger a bazillion false negatives; someone
| can easily spread hate through Facebook, unimpeded, by simply
| encoding language in a way that the algorithm doesn't
| understand, but humans do; or with simple irony.
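| To make the failure mode concrete, it behaves roughly like a
| context-free token lookup; a toy sketch (illustrative only, not
| Facebook's actual pipeline):
|
|     BANNED = {"beat", "kill"}  # context-free trigger words
|
|     def flags(ad_text):
|         # naive moderation: any banned token anywhere trips it
|         tokens = ad_text.lower().split()
|         return [t for t in tokens if t.strip('?!".,') in BANNED]
|
|     # false positive: a harmless ad line gets flagged
|     print(flags("Beat addiction in thirty days"))
|     # false negative: coded hate sails straight through
|     print(flags("You know what to do about those people"))
|
| Anything that hinges on context, irony, or coded language is
| invisible to a filter like that.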
| z9znz wrote:
| I think the algorithms are just minimum standard efforts
| which provide enough plausible deniability for FB to be
| able to argue that they provide safeguards on bad content.
| thriftwy wrote:
| How come they get to keep money for the service not provided?
| Should be strictly illegal.
| pc86 wrote:
| It is, and they don't. FB ads are charged in arrears, no
| money would have been kept for ads not displayed. The GP is
| misinformed (at best).
| [deleted]
| wodenokoto wrote:
| It probably is, but not all jurisdictions have a small claims
| court, and even in those that do, it can be quite cumbersome
| compared to the money lost.
| codingdave wrote:
| Criminal cases do not go to small claims court. If you sue
| them in a civil case, yes, that could be small claims, but
| if you convince a prosecutor to charge them criminally, it
| is a completely different legal process.
| kelnos wrote:
| And if you do it, Facebook will probably retaliate by
| permanently closing your account.
| philipov wrote:
| In the case in question, Facebook _began_ by permanently
| closing the account in a capricious and unprovoked
| attack. Retaliation is irrelevant: Facebook is an
| aggressor.
| sgjohnson wrote:
| I'd really like to see this. And after that I'd like to
| see a jury hit Facebook with 12 figures in punitive
| damages in a class action case.
| quickthrower2 wrote:
| Doesn't FB have the right to not do business with you?
| What is the basis for the suit?
|
| Anyway I do remember FB's "a cool "open source" way to do
| front-end development, feel "free" to use it, you can't
| sue us though". So maybe there is some shitty clause like
| that when you sign up.
| sgjohnson wrote:
| > What is the basis for the suit?
|
| Withholding your money and then when you (rightfully) sue
| them for that, retaliating and kicking you off the
| platform entirely?
| hoppla wrote:
| Wasn't there a clause in the React license, or something like
| that, saying you lose the right to use the framework if you
| sued Facebook?
| sgjohnson wrote:
| React is MIT licensed, so no.
| yamtaddle wrote:
| > Was it not a clause in the react js or so, that you
| lose the right to use the framework if you sued Facebook?
|
| This is true (if we take "was" literally). Though IIRC it
| was only if you sued them over _patents_.
|
| > React is MIT licensed
|
| And this is also true (now).
| michaelmrose wrote:
| There has been one 12 figure judgement in history and it
| was from big tobacco lying about the danger of their
| product which has killed millions of people over decades.
|
| In this situation there would be no judgement, because
| there is no right to have a Facebook account; therefore
| they can close your account because they don't like your
| face, or indeed because you sued them. Why would you
| imagine that a jury would basically award you all of
| Facebook's money because they closed your account? "Yes
| sir, Mr. Soandso, they were clearly jerks; I award you
| Facebook. Now try not to be as big a jerk as Zuck, and good
| day to you!"
| quickthrower2 wrote:
| Small claims are designed to be quite easy, and almost
| free. (You might have to spend money on registered mail).
| Worth it for a gripe. I was prepared to do this for a hotel
| refund, where they were going to charge 20% cancellation
| fee where I saw that in local law, while there is no set
| max fee, it should be just to cover reasonable costs.
|
| I looked into it and while a hassle it is on par with
| renewing your car insurance level of hassle (so some
| hassle, but doable, a "side project"). And worth it for the
| "stick it to the man" factor. In the end I got it almost
| all back by being nice, so no need.
|
| In addition to small claims, there are credit card charge
| backs.
| actionfromafar wrote:
| You can probably go to arbitration, which you agreed to. This
| is what companies often require in their terms of service.
| It's going to be costly for one of the parties.
| ticviking wrote:
| I really wish more of us used that system. The laws
| involved aren't perfect but they're better than trying to
| get a response from their non-existent customer support
| systems.
|
| I believe that in many states the company that demanded
| arbitration has to bear the costs also.
| z9znz wrote:
| Terms of service probably have a clause which says that if
| you violate their policies you forfeit your expenditures.
| Covzire wrote:
| Personally I think Zuckerberg might be a sociopath.
| icare_1er wrote:
| kmeisthax wrote:
| There's another way for Facebook and other social media
| companies to make sure this doesn't happen without employing
| stupid algorithms that censor people. Drop the bots, hire a
| bunch of people to pretend to be ISIS or Stormers, and ban[0]
| everyone they come into contact with. Leave everyone else
| alone.
|
| They won't do this. Why?
|
| The way that social media is structured - and the way
| Facebook et al. make their money - is to get as many people
| as possible on platform, get them addicted to the platform,
| and then show ads. One of the easiest ways to do this is to
| provoke and generate outrage - as you can see on Twitter,
| which basically exists to turn people into public figures and
| then into "villains of the day". This is also why their
| systems reject context; because stripping speech _of_ its
| context is the easiest way to _construct_ a villain of the
| day.
|
| The reality is that these outrage groups actually tend to be
| really, really small and close-knit. There's a small handful
| of people who actually feed the algorithm new extremist
| content, and everyone else parrots them without thinking much
| of it because they're angry. Extremism only looks large and
| prevalent _because social media is designed to create echo
| chambers and manufacture consent_. And extremists just so
| happen to be Facebook's best customers - people who already
| have outrage to play to, who will spend hours on platform,
| and so on. Of _course_ they aren't going to ban their
| whales!
|
| However, an ineffective bot that just randomly bans things
| that sound vaguely extremist-like? That gives you the
| appearance of Doing Something, without actually doing it, and
| it falls in line with the usual Silicon Valley protocol of
| "if it's not worth doing at scale, it's not worth doing at
| all". Treating extremism like a weirdly-shaped spam problem
| gives social media companies cover and is how we get stupid
| bots that think a Holocaust movie is Nazi propaganda.
|
| [0] I do not consider taking down the content of jihadis or
| neo-nazis to be censorship. Jihadis and neo-nazis are groups
| with explicit, stated goals to do violence to groups of
| people for what they say or believe. An ISIS beheading video
| is not an artistic statement or a political diatribe, it is a
| threat to other Muslims. "Get in line or you'll be next."
| Allowing this to be spread around as if it were speech
| accomplishes the goal of censoring non-extremists.
|
| If you want to argue that tankies or ANTIFA do this too,
| _fine_, but the poster I was replying to was specifically
| pointing out right-wing extremism.
| mrpopo wrote:
| No, this is the result of social networks lazily reaching for
| automated solutions to solve, at scale (read: at the
| lowest cost), problems of their own making.
|
| More generally this is the result of giving free rein on
| global discourse to unregulated companies, who ultimately
| took full advantage and profiteered off of it.
| icare_1er wrote:
| dane-pgp wrote:
| > the left, with its eternal desire to monitor what
| everyone says or even thinks.
|
| Would you say that the NSA is a left wing organization
| then?
| anshorei wrote:
| No, but the opposition that the left once mounted against the
| NSA's surveillance has withered in the past ten years or
| so, and (some) Republicans have picked up the slack. Not
| too long ago the "war on terror" was considered to be the
| road to authoritarianism; now Democrats are openly
| championing a new domestic war on terror. To
| those who opposed the war on terror from the beginning
| it's scary how quickly this opposition was abandoned by
| some once they could be the ones to wield it.
| acdw wrote:
| I'm really bothered by the way you casually conflate an
| actual war in the Middle East---the longest war in
| American history, by the way---with a metaphorical war on
| domestic violent extremism and neo-Nazism. These are two
| very different things.
| michaelmrose wrote:
| Domestic terror is a real threat. We are less than 2
| years from it almost handing our nation over to a fascist
| coup, and from weirdos trying to kidnap and murder a governor.
| I think we could afford to both respect our population's
| civil rights and attend to the real threats on the
| horizon.
| philippejara wrote:
| Democracy isn't a king-of-the-hill match; if some loonies
| managed to take over the Capitol they'd just get sieged
| out.
|
| How do you honestly think this would go? They take the
| Capitol and Trump shows up and says "I'm the president for
| another term," then everyone goes home and ignores the
| corpse of Mike Pence?
| dane-pgp wrote:
| > How do you honestly think this would go?
|
| They destroy the certifications of the electoral votes
| from the states (as almost happened[0]), causing various
| states to (disingenuously) disagree about how to replace
| the lost documents.
|
| Enough FUD (and lawsuits, and delays) would be generated
| during this period of public disorientation that the
| Republican party could exploit the ambiguity of the
| Constitutional phrase "a majority of the whole number of
| Electors appointed"[1] and trigger the contingent
| election procedure described in the Twelfth Amendment.[2]
|
| Since a majority of states at the time had Republican
| representation, they would have elected Trump and the
| Democrats would not have been able to stop them, even if
| Trump did eventually let them back into the Capitol.
|
| [0] https://www.businessinsider.com/senate-aides-rescued-
| elector...
|
| [1] https://en.wikipedia.org/wiki/Electoral_Count_Act#Maj
| ority_o...
|
| [2] https://en.wikipedia.org/wiki/Contingent_election
| philippejara wrote:
| So your thinking is that the Republican party, which by
| and large already disagreed with Trump (to the extent a
| Republican was the alleged "target" of the riot), would
| side with him after his supporters murder Mike Pence,
| said Republican?
|
| You can dislike republicans all you want but this is just
| another level.
| sidewndr46 wrote:
| If the US is so fragile that a box of paper is somehow
| key to the stability of an entire empire, we're pretty
| much doomed.
|
| Also your statement that "Since a majority of states at
| the time had Republican representation, they would have
| elected Trump" is laughable. It's well known that Trump
| asked multiple Republican governors to "find some votes"
| and they obviously did not do this. Even if someone is a
| demagogue that doesn't make them willing to commit a
| career ending felony. After all, being a demagogue got
| them a long way. Not by falsifying election documents.
| nobody9999 wrote:
| >Also your statement that "Since a majority of states at
| the time had Republican representation, they would have
| elected Trump" is laughable.
|
| Actually, it's not. The way this works (I really hope
| you're not an American, because if you are you really
| _should_ know this) is that if the counting of electoral
| votes is sufficiently disputed (in this counterfactual
| case, the vote certificates were destroyed), deciding the
| outcome of the election rests with the US House of
| Representatives.
|
| If that were to happen (and it has, several times in US
| history), the members of the House would vote (on a state
| by state basis, not each representative voting
| individually) on who was to be the President.
|
| Since a majority (26 or 27 out of 50, IIRC) of _states_
| have Republican majorities in the number of House
| members, a House vote would likely have gone Trump's
| way[0].
|
| This is something of a peccadillo of the US Presidential
| electoral system that probably should be reformed[1], but
| it currently is the law of the land.
|
| [0] Which is why it was so important (at least for the
| Trumpists) for the proceedings to be disrupted. It was
| the final opportunity for them to overturn the clearly
| expressed will of the people.
|
| [1] Because regardless of which party's state cohorts
| have a majority, that's a supremely undemocratic way to
| choose the winner and is a relic of the state of the
| states (as essentially separate nations banding together
| for defense) in the late 18th century.
| sidewndr46 wrote:
| Under that logic, the US was overthrown back in 2000
| when the supreme court decided the outcome of an
| election.
| sidewndr46 wrote:
| If you think a plot to kill a governor is somehow
| destabilizing to the United States, you should probably
| read about Rod Blagojevich. The guy tried to sell a US
| senate seat. Corruption has a much farther reaching
| effect than any single assassination.
| icare_1er wrote:
| The fact that the NSA monitors does not mean that
| leftists don't. What is more, the NSA does the spying and
| monitoring, but does not do the sentencing. If I say that
| a woman does not have a penis, the NSA will probably know
| it, but that stops there. However, the leftists will want
| to know it, and then will want to censor me.
| amanaplanacanal wrote:
| Removing incitements to violence seems like a good thing,
| and I would guess their customers (the advertisers) fully
| support it. The fact that they do it so badly is entirely
| on them.
| nradov wrote:
| Is removing incitements to violence always a good thing?
| Apparently Facebook makes an exception when it comes to
| violence against the Russians invading Ukraine.
|
| https://www.reuters.com/world/europe/exclusive-facebook-
| inst...
|
| I do support the right of Ukrainians to defend their
| country. But I'm not comfortable with giving social media
| corporate employees the power to decide which violence is
| good and which is bad.
| icare_1er wrote:
| FB has literally unbanned groups that were considered
| nearly terrorists several years ago (Azov), the second
| they became anti-Putin.
|
| Goes to show how grotesque those making the rules on Good
| and Bad are - I am sure we can find similar stuff
| regarding Saudi Arabia, which is pretty much a Daesh that
| succeeded at gaining power and keeping it.
| icare_1er wrote:
| michaelmrose wrote:
| You needn't read or analyze everything, just the stuff
| that is reported, and since 90% of the problem is 1% of
| individuals, if you can keep the same old folks from
| coming back you will ultimately have less to process. The
| logical thing is that the kind of human moderation they
| actually need costs money, and you acquire said
| funds by charging monthly. This has the side effect of
| making it easy to permanently ban problem children via
| their address and/or method of payment. One need not take
| prepaid cards either.
| Bakary wrote:
| Censorious ideology is a minor point compared to the
| overall picture. Haphazard content removal wouldn't be an
| issue if the space wasn't controlled by a handful of
| private companies that routinely abuse their power.
| sidewndr46 wrote:
| If they report that money as advertising revenue, wouldn't FB
| be committing securities fraud?
| protomolecule wrote:
| Would you share the link to the book? Genuinely interested in
| reading
| bwb wrote:
| Yep, it is 5 Stars by Louise Blackwick:
| https://www.amazon.com/5-Stars-Louise-Blackwick-
| ebook/dp/B09...
|
| Really cool author out of the Netherlands, I highly recommend
| the book. It is dark sci fi, and she calls it "Neon Science
| Fiction". Which sums the story up nicely (dark and gritty
| with a flippant attitude)
|
| I believe the line she got hit with was "Can You Beat The
| Neon God's Algorithm?"
|
| She also did a list on my site about books that inspired her
| creation of neon science fiction (and how she defines that):
| https://shepherd.com/best-books/inspired-neon-science-
| fictio...
|
| Let me know what you think if you read it! There was one
| scene in the book I can't get out of my head... ever...
| protomolecule wrote:
| Thanks, I'll give it a try. Interesting site, btw.
| bwb wrote:
| thanks, it's a really fun project! I launched it on HN in
| April 2021 :)
|
| All the topics are hooked into Wikidata on the ML side,
| eventually I want to build knowledge graphs using the
| Wikidata info and play with historical timelines etc and
| see what I can do...
|
| I am working on adding individual book pages and then
| genres and age group data is next.
| bhedgeoser wrote:
| > Funny enough, her book is about the dangers of an algorithm-
| based AI supersystem...
|
| Skynet is here.
| petre wrote:
| By the same twisted logic, the Beat Generation incites people
| to violence.
|
| https://en.wikipedia.org/wiki/Beat_Generation
| WesolyKubeczek wrote:
| What about the Beatles?!
| marginalia_nu wrote:
| Beat less? Seems to promote peace.
| panxyh wrote:
| But less is more
| [deleted]
| manholio wrote:
| Les might not agree.
| protomolecule wrote:
| Algorithm-based AI supersystem proactively defends itself.
| herbst wrote:
| I realize this was a weird time and masks have been a
| complicated topic. However:
|
| I advertised some masks my grandma had sewn on FB. It was ok,
| expensive clicks like always, but ok. This was running for at
| least a week or two. But when I changed some text it triggered
| a new ad review, and within an hour my account was banned. It
| didn't even take an hour to deny my appeal either, and my
| account was lost forever (without ever naming any reason).
|
| This account had 50+ (harmless) groups with way over 100k
| followers as well as a few thousand dollars in ad spend just
| that year. I didn't socialize at all on Facebook which I guess
| made my account fishy to some degree but all data was correct
| and passport verified.
|
| This is the reason I completely ignore Facebook and all its
| platforms for whatever reason these days. I don't care if it
| could help my business, it's not worth the trouble.
| criddell wrote:
| Did you ever consider starting the arbitration process with
| Facebook?
| herbst wrote:
| I am not aware that is a thing. My googling back then led
| me to believe that there is nothing after the appeal.
|
| But honestly I don't care anymore. Their ad platform barely
| worked for me and their numbers didn't match my log numbers
| for years before this happened.
|
| I haven't had any other use for the platform either way.
| EveYoung wrote:
| In my experience, Meta doesn't care about you unless you have
| a multi-million budget or your buying through an agency that
| has the right connections.
| z9znz wrote:
| I think we have even seen one or two examples here on HN of
| multimillion dollar accounts which hit inexplicable walls
| suddenly.
|
| I don't know if any one customer has enough money to get
| proper first class human service with Facebook.
| ls15 wrote:
| > This is the reason I completely ignore Facebook and all its
| platforms for whatever reason these days. I don't care if it
| could help my business, it's not worth the trouble.
|
| That's basically my stance since 2004 when I first heard
| about them. I lost some opportunities, I guess, but it also
| allowed me to tell everyone how I feel about data privacy
| while keeping a straight face.
| [deleted]
| pessimizer wrote:
| > it also allowed me to tell everyone how I feel about data
| privacy while keeping a straight face.
|
| Same here. I not only don't condemn people for using
| Facebook, I almost always defend their use of Facebook. But
| since I mean what I say when I'm talking about the site, I
| haven't had an account in at least a decade and I block the
| domains. I dislike the site, not its users.
| ahallock wrote:
| Beats by Dre must have a difficult time.
| ZiiS wrote:
| They may well have meant that "beat", given Apple's pushback
| on their tracking.
| knightofmars wrote:
| This reminds me of the short story "Computers Don't Argue" by
| Gordon R. Dickson. The format is a series of correspondence
| between individuals regarding a book. The plot starts with a
| man attempting to deal with the situation of having been mailed
| the wrong book by an online retailer. It then proceeds to take
| a turn into the dark.
|
| https://www.atariarchives.org/bcc2/showpage.php?page=133
| knodi123 wrote:
| Yeesh, that's horrifying. Written 1965?!?!? Prophetic.
|
| I once had a similar deal when I lived in a house with
| several guys for a year in college. We divvied up the
| utilities, and I was in charge of the phone bill. I set up
| automatic payment, and never had a problem. But 5 years
| later, the phone company accused me of skipping my June
| payment, _and carefully deducting that amount from the "total
| owed" amount in every subsequent month_. It's insane, and
| every human I spoke to agreed with me, but no one had the
| power to do anything about it. I could either travel cross
| country to go to court, with no evidence other than common
| sense, or just pay the $30 bill plus another $40 in
| penalties.
| Trifectaffe6 wrote:
| gambiting wrote:
| >>This is fundamentally anti-White hate-porn that plays into
| tired and often ridiculously racist "Nazi" stereotypes.
|
| I literally cannot believe anyone would write this in earnest.
| Please tell me it's some kind of (very poor) joke.
| thebooktocome wrote:
| Whether or not OP is serious, people holding opinions like
| this (e.g., it's possible to be racist against Nazis, who
| incidentally are not a race but rather a political
| affiliation) certainly exist. It's a sad state of affairs.
| throwawayhl wrote:
| Given that Germany and Japan committed roughly equivalent
| atrocities, you'd expect that there would be a roughly
| equivalent denunciation in our literature.
|
| The fact that we don't could be for many legitimate reasons
| but one reason could be that Nazis make for great punching
| bags by proxy for what the author really wants to
| say.
|
| Kind of like how someone who says "thug" really wants to
| say something else.
| PurpleRamen wrote:
| Against whom did Japan execute a genocide?
| Timshel wrote:
| Not the best place to play with semantics. There is a
| reason there is deep animosity between Japan/China and
| Japan/Korea.
|
| Just over 1937-45, estimated war crimes resulted in 3 to 30
| million deaths ...
| PurpleRamen wrote:
| Timshel wrote:
| ? You do not end up killing millions of people without
| systematic targeting and industrialized organisation. The
| fact that it might be called genocide or democide or
| something else is semantic ...
|
| Anyway my point (albeit unclear) was more that you are
| arguing against the premise of the parent post: "Germany and
| Japan committed roughly equivalent atrocities", which I
| believe is quite difficult, and in a way it reinforces the
| parent, since he appears to believe there is a systematic
| diabolisation of the Nazis as a way to target white people
| by proxy (anyway that's how I interpret "great punching
| bags by proxy for what the author really wants
| to say").
|
| Instead of arguing against his claim that "you'd expect that
| there would be a roughly equivalent denunciation in our
| literature", which I believe is way easier to refute; and
| in the specific case of this movie the choice of the Nazis
| might be easier to explain by the director's family
| history than by a hidden agenda.
| PurpleRamen wrote:
| > ? You do not end up killing millions of peoples without
| systemic targetting and industrialized organisation.
|
| No, they do. That's how any war runs. Countries don't
| start wars with plans for wiping out the citizens in
| the most efficient way. If a massacre happens, then it's
| something more spontaneous, happening on the spot.
|
| > The fact that it might be called genocide or democide
| or something else is semantic ...
|
| No, it's a significant difference in intention and
| execution. Germany had an elaborate logistics system for
| transporting their victims through the country and
| continent. They had death camps with full planning on how
| to kill people. They even started the war with the side-
| intention of "cleaning the world".
|
| Japan, like every other invader, did nothing like this.
| They started wars and accepted that people would die, but
| this was not their goal. And neither were the massacres
| and other crimes their goal; it was just something that
| happened along the way.
|
| > Anyway my point (albeit unclear) was more that you are
| arguing the premise of the parent post : "Germany and
| Japan committed roughly equivalent atrocities" which I
| believe is quite difficult and in a way reinforce the
| parent since he appears to believe there is a systemic
| diabolisation of the Nazi as way to target by proxy the
| white (anyway that's how I interpret: "great punching
| bags by proxy for what the what the author really wants
| to say")
|
| Sure, but your reasoning is wrong. Japan did bad things,
| but they are just one of many evil empires in history.
| It's not exceptionally unique in what they did. But the
| Nazis were unique and exceptional. That's where your
| argument fails. Japan is just one of many, and people
| denounce those many equally, more or less. But there was
| just one event on the scale of the Nazi crimes.
| thebooktocome wrote:
| https://en.wikipedia.org/wiki/Nanjing_Massacre
|
| https://en.wikipedia.org/wiki/Manchukuo#Abuse_of_ethnic_m
| ino...
|
| https://en.wikipedia.org/wiki/Korea_under_Japanese_rule
| inglor_cz wrote:
| I think the main reason for the discrepancy is that Nazis
| were active in Europe, which was culturally much closer
| to the US than East Asia.
|
| For example, we have a lot more literature about the
| Holocaust than about Japanese atrocities in the Far East.
| One of the reasons is that the European population of the
| 1940s had much higher literacy levels and so there were
| enough people to actually write their experience down.
|
| But it is also way easier to translate from, say, German
| or Dutch into English, than to do the same for Tagalog or
| Burmese.
| Timshel wrote:
| pessimizer wrote:
| > with atrocities which are not well known to western
| audiences
|
| You say that as if this is a position the west found
| itself in rather than put itself in. Westerners generally
| aren't interested in Chinese corpses except as a
| justification for making more of them.
| Intermernet wrote:
| There's a sector of society that unironically uses phrases such
| as "anti-White hate-porn".
|
| Do you really want to be part of that sector?
| thaufeki wrote:
| I'm sorry, appealing to in-group/out-group bias is a horrible
| way to address a point, regardless of what the point is.
| Intermernet wrote:
| Debate is literally appealing to in-group/out-group bias.
| The purpose of debate is to try to influence that bias.
|
| The weird thing about debate is that the most ridiculous
| claims have the most evidence against them, and are
| therefore the most work to refute with sources. Claiming
| that the earth is flat takes a lot more evidence to refute
| than claiming that the earth has slightly incorrect
| parameters of oblate spheroid-ness.
|
| This is played upon by certain people. Trying to explain
| why a concept such as "anti-white hate-porn" is just not a
| thing is like trying to explain how we're pretty sure we
| actually landed on the moon.
|
| This comes back to the fallacy of "equal time for both
| sides of the debate".
| Trifectaffe6 wrote:
| Intermernet wrote:
| "the West has a prominent anti-White culture"
|
| The West is based on an Anglo-Saxon empire that traditionally
| used the indigenous people of the countries it dominated by
| violence as slaves, treated them as second-class citizens at
| best, and as not even human at worst.
|
| I'd recommend reading some history. The behaviour you see
| as "anti-white" is almost always an expression of
| survival against the most oppressive regime in history.
| Trifectaffe6 wrote:
| cauefcr wrote:
| >Indigenous Europeans are being maliciously
| demographically replaced in their ancestral homelands as
| part of a slow (mostly) bloodshed-less genocide.
|
| Not only is this obvious bullshit, but migrating to
| Europe does not even come close to constituting
| colonization, and is far milder than the whole neo-
| colonialism bullshit you guys have done in the Americas,
| Africa and Asia, where Europeans raped, pillaged,
| destroyed, and enslaved people.
|
| These are quite literally Nazi talking points you're
| repeating; take a good look at yourself.
| Trifectaffe6 wrote:
| cauefcr wrote:
| So migration == colonialism == mild {rape, pillage,
| destruction, enslavement} in your muddy logic?
|
| Have more kids if you're worried about your culture's
| growth (though it may be hard to find a mate who holds your
| views); kicking immigrants out or something is not going
| to prevent your culture from becoming irrelevant or being
| outgrown by other cultures. Just look at Japan, its economy
| slowly rotting from fear of immigrants "replacing" them.
|
| Also, cultures have no easy delineation; a person can
| have multiple heritages and can have habits from distant
| cultures. It does not follow that your country's culture
| == white Christians, no matter where you are, or
| something of the like. Say what you actually mean: that
| you dislike different people living near you, doing
| things you wouldn't do.
| Timshel wrote:
| hatzalam wrote:
| At least one of my friends and I have been harassed by an
| Instagram account whose name is "white_soupremacy". They
| seemingly have no content, and their sole purpose is to
| harass/troll other accounts. I reported this account to IG, but
| since the spelling itself is the cheeky part, it wasn't flagged
| as hate speech. Furthermore, IG didn't even give me an option to
| request a manual review of the account. I DMed someone I know
| who works at Meta and gave them all of the relevant information,
| but the account is STILL up even after a PM at Meta submitted an
| internal report. Something is deeply broken in that system.
| jleyank wrote:
| I assume they ban pretty much everything about the period 1925 to
| 1945, as race, national origin and probably gender were really
| prominent. And talk about the violence...
| teh_klev wrote:
| I follow a FB group where the topic is the International
| Brigades, many of the discussions and posts are about the
| Spanish civil war, though not exclusively. Race, national
| origin, sex and violence are freely discussed and the group
| doesn't seem to attract any untoward attention by FB's algo's.
| jleyank wrote:
| I would think that inconsistency in how things are banned
| would increase FB's legal risk, but maybe they (like all rich
| things) are exempt from legal risk?
| wellbehaved wrote:
| Censorship morons on parade. Obviously we need to return to a
| First Amendment ethos. Obviously we never should have abandoned
| it in the first place.
| lbriner wrote:
| There still seem to be some many basics that FB and YouTube get
| wrong with their "automatic" video moderation.
|
| Use people's "report" button clicks as an early indicator that
| something should be looked at - a big flag.
|
| Look at certain words or phrases that are either unambiguously
| bad or possibly like "kill" or "why I hate ..." - a medium flag
|
| Phrases that are less certain, = a small flag.
|
| Block some things proactively, moderate others, react to the
| others as they are reported. I don't think people are bothered
| that bad stuff gets posted as much as these companies not doing
| anything about it when it is reported. If 1000 people flag a
| video, up to the top of the list. If the flagging is malicious,
| then downgrade the reputation of the accounts that flagged it. If
| new accounts upload stuff, rate-limit it in some way etc.
|
| I know I am making it sound really easy but with how ever many
| 1000 Developers, it shouldn't be impossible for someone like FB
| to do this much better.
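|
| As a rough sketch of that flag-weighting idea (the weights,
| phrases, helper names and review threshold below are made-up
| assumptions for illustration, not anything FB or YouTube
| actually uses):
|
|     # Toy moderation triage: weighted flags plus reporter
|     # reputation decide what gets escalated to a human.
|     BIG, MEDIUM, SMALL = 10.0, 3.0, 1.0     # assumed weights
|     REVIEW_THRESHOLD = 50.0                 # assumed cutoff
|
|     reputation = {}   # reporter id -> trust score in [0, 1]
|
|     def flag_score(reporter_ids, text):
|         # Each user report is a big flag, scaled by how
|         # trustworthy that reporter has been in the past.
|         score = sum(BIG * reputation.get(r, 0.5)
|                     for r in reporter_ids)
|         lowered = text.lower()
|         if "why i hate" in lowered:         # medium flag
|             score += MEDIUM
|         if "kill" in lowered:               # small, ambiguous flag
|             score += SMALL
|         return score
|
|     def needs_human_review(reporter_ids, text):
|         return flag_score(reporter_ids, text) >= REVIEW_THRESHOLD
|
|     # e.g. 1000 reports from average-trust accounts push a
|     # post straight to the top of the review queue.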
| greggeter wrote:
| Kerrick wrote:
| "Why I hate killing spiders & how to ethically rid your house
| of pests" -- Not flag-worthy at all.
| humanrebar wrote:
| Browse urbandictionary and you'll find a lot of benign nouns
| that double as insults or slurs. Prediction: there's an
| offensive definition of 'spider' up there... be right back.
|
| Let's see... almost? There's one referring to an unattractive
| and disproportionate person:
| https://www.urbandictionary.com/define.php?term=Spider
|
| I know urbandictionary has a lot of made up stuff, but the
| point is that slang is sometimes coded intentionally.
| Especially in unpopular niche subcultures.
| benj111 wrote:
| We really need to stop thinking about social networking sites as
| private companies, and start thinking about them as the public
| spaces they actually are. It isn't good for society for a few
| companies to have a stranglehold on what can be said online.
|
| I do kind of feel for them in some ways, because you have
| different nations with different norms and laws, but the answer
| isn't some lowest common denominator, ridding the web of
| anything anyone may find objectionable.
| philippejara wrote:
| Well, you shut down the idea yourself: it would be completely
| non-viable due to jurisdiction issues, unless you'd want to
| follow the US position, and even then you'd run into issues.
|
| The biggest problem I'd say isn't necessarily the websites
| themselves but things like the App Store and Google Play store,
| and infrastructure providers. Having your social media removed
| from the App Store/Google Play store basically kills it in the
| mass market, and we all know how incredibly selective they are,
| especially when it comes to forbidding platforms that may
| contain porn (like Tumblr, for example, and initially Gab) while
| still allowing Twitter, which is completely inundated with it.
| And that's before mentioning the downright mental idea that
| payment processors can just decide to stop working with you.
|
| The social media sites themselves, I feel, are the least of the
| things that would benefit from being treated closer to a public
| "utility", for lack of a better word.
|
| We're way past the point where people can just build their own
| stuff and be independent when targeting a significant audience
| (and sometimes even a smaller niche one): you need the support
| of payment providers, app stores, and DDoS mitigation
| companies/CDNs (especially in a post-IoT world where your
| toaster is part of a botnet), and the list goes on.
| benj111 wrote:
| I didn't shut down the idea. I just raised the counterpoint.
|
| If Europe decided to say that you can't take down things
| arbitrarily, what would Facebook do about that? At this point
| in time, a geographically splintered internet with freedom of
| speech seems better than what we've got at the moment.
| philippejara wrote:
| I do believe the counterpoint kills the idea, is what I
| meant. A geographically splintered internet (or rather,
| geographically splintered social media) would just lead to
| a new international service showing up and growing to a
| majority again, and the whole cycle would begin anew.
|
| The only way I can see this happening would be different
| instances for each country, i.e. things would get moderated
| based on the laws of the country in question, which I
| suppose could work in theory, but at that point having
| enough support staff to actively moderate on a country-by-
| country basis is doubtful, given how it is currently.
| benj111 wrote:
| Well, presumably moderators need to speak the various
| local languages. So I don't see why you can't extend that
| to "don't ban people in territory X for mentioning blue
| eyes".
|
| I understand they don't put enough effort into
| moderating, because obviously mentioning blue eyes isn't
| anything. But they should be held accountable for that,
| not be getting away with it, as is the situation we have
| now.
| themitigating wrote:
| Why should we consider them public spaces when they are private
| companies?
|
| And social media companies only control what is said on their
| platforms, not "online", i.e. the entire internet, as you said
| ben_w wrote:
| > Why should we consider them public spaces when they are
| private companies?
|
| As the other commenter is suggesting a change to the status
| quo, what legal status they have now isn't important; what
| matters instead is the role they serve in society.
|
| Personally I think the US idea of freedom of speech is too
| anarchic to be sustainable, but even with that, the de-facto
| power that Big Tech has over communication (and commerce)
| means I think Big Tech should be held to a similar standard
| in this regard as any government.
| [deleted]
| Zealotux wrote:
| The reason is the movie being titled "Beautiful Blue Eyes"; we
| live in a time of pure insanity.
| hatware wrote:
| Hackbraten wrote:
| German here. Not entirely sure what your comment is trying to
| convey. Germany doesn't have that much of a free-speech
| history, at least compared to the US. So the fact that
| publicly denying or trivializing the holocaust has been
| illegal (since 1949, I figure) is not seen as dystopian at
| all. On the contrary, the ban is even widely accepted, and
| perceived as a good thing.
|
| Mind clarifying a bit as to what exactly you're criticizing?
| 20amxn20 wrote:
| mjburgess wrote:
| I suppose if your country has just perpetrated a
| genocide, one might have unusual moral responsibilities.
| 20amxn20 wrote:
| feoren wrote:
| The part that's always missing from this kind of argument
| is: then you're wrong. You are incorrect. It's mind-
| boggling to me that the actual fact of what actually
| happened never seems to matter in this argument. Look, I
| understand the _practical_ problems with a law like
| "It's illegal to lie on the news" -- of course that's
| problematic: who decides what a lie is? But if you could
| guarantee unambiguous, 100% accurate, oracle-level
| determination of lies, then that law would be _fantastic_
| for society. That's of course not possible in general,
| but that doesn't mean there aren't very special cases
| where we can get close to that. There are some things
| that we know for 100% certain definitely happened, and
| also that certain awful people have certain horrifying
| motivations to lie about. I'm totally fine with those
| being illegal to lie about. If you disagree with me about
| it, it simply means you are an awful piece of shit.
| Again, that's not true in general, about any opinion or
| controversial issue: of course it's not! But it's true
| about the Holocaust, and that matters.
| 22amxn22 wrote:
| mjburgess wrote:
| The community which committed that genocide has the
| prerogative to decide its own moral responsibilities. If
| you're part of that community and violate them, then the
| consequences will be as that community wishes.
|
| It's deeply implausible to say that Germans have no
| collective right against the individual here. What you
| wish to say really isn't all that important, and doubly
| so when many around you were wrapped up in a system of
| mass torture, genocide and violence.
| HideousKojima wrote:
| >Germany doesn't have that much of a free-speech history
|
| "We've always suppressed freedom of thought and expression,
| so why should we start now,?"
| hatware wrote:
| "What are rights? Haha silly American"
| hatware wrote:
| You can defend it all you want with "historical precedent";
| it's still quite dystopian, whether or not 70 years of
| conditioning has had any impact. My point is that any law
| against free speech probably has more sinister intentions than
| are presented.
|
| "Do you have any proof?"
|
| Read 1984 and try to understand how close laws against
| Holocaust denial are to thoughtcrime.
|
| You really don't understand what part of the timeline we're
| in. Time is running out.
| [deleted]
| LegitShady wrote:
| https://www.youtube.com/watch?v=dLAi78hluFc
| mynameishere wrote:
| Timshel wrote:
| Ironic? Per the article: "the film's title, which refers to
| the eye color of a child who perished at the hands of the
| Nazis and invokes a key scene in the movie"
| SamBam wrote:
| That doesn't mean it's not ironic. It strains belief to
| suggest that the filmmakers weren't aware of Hitler's
| belief that being blond and blue-eyed was a mark of the
| "superior" Aryan race.
| pessimizer wrote:
| ITT, both bigots and anti-racists pretending that they
| know someone's _obvious_ intentions based on no
| information and with no attempt at research.
| dekhn wrote:
| Yes, and? Still not a reason to prevent advertising the
| movie on Facebook.
| job_suche wrote:
| Seems to me that the author was just propagating the
| "blue eyes are so beautiful" sort of white standard of
| beauty, probably not intentionally. It would be a minor
| thing, except that in this case it just does not fit,
| imho, given the topic.
| fredgrott wrote:
| Honest question,
|
| What would happen if we assume that those with low self-esteem
| self-select to match the toxicity in their heads in their
| choices of social media consumption and social media platforms?
|
| Somewhat the kissing cousin to the observation that the
| 1-percenters on social media who create, such as GaryVee, do
| not use SM platforms the way the bottom 99 percent do.
|
| I.e., SM platforms are a mirror showing that we still have a
| somewhat broken human society structure.
| edwnj wrote:
| I've been working with Alex for just over a year now. He's never
| political, a super nice guy, focused on creating beautiful art.
|
| This is a major blow to the team. If you're not backed by Sony
| or Paramount, ads (especially in the first week) can be the
| deciding factor in whether you make it or not.
|
| When he told me what happened, I refused to believe him: "There
| is no way they banned you over blue eyes". Unlike Alex, I'm
| extremely cynical about censorship & social justice politics,
| but even I couldn't accept they would do something this asinine.
| prvc wrote:
| > If you're not backed by Sony or Paramount, ads (especially in
| the first week) can be the deciding factor in whether you make
| it or not.
|
| Do you seriously believe it was Facebook that caused the film
| to fail?
| unethical_ban wrote:
| Do you seriously challenge the idea that social media
| advertising can be critical to a small film's success?
| edwnj wrote:
| Not fail. The jury is still out.
|
| Independent films largely rely on word of mouth to gain
| momentum. But for word of mouth to work, you need critical
| mass.
|
| This is where ads can be the difference-maker. We wanted to
| reach Roy Scheider fans. Those people are not exactly the
| type of people who use TikTok and reply with words like "No
| Cap".
|
| When you're an independent filmmaker, you have to maximise
| the value of every dollar, and you have a short window of
| time (weeks) to make this work.
|
| So when Facebook bans you from advertising on FB & Insta
| right before the release, it's a major blow.
| [deleted]
___________________________________________________________________
(page generated 2022-09-16 23:02 UTC)