[HN Gopher] Twitter sued for allegedly refusing to remove child ...
___________________________________________________________________
Twitter sued for allegedly refusing to remove child porn
Author : thereare5lights
Score : 140 points
Date : 2021-01-21 18:09 UTC (4 hours ago)
(HTM) web link (nypost.com)
(TXT) w3m dump (nypost.com)
| jlkuester7 wrote:
| During all the recent debates regarding Parler, much was said
| about how Parler needed effective moderation like Twitter. This
| article is an important reminder that there is a lot more room
| for improvement in the moderation space and even companies that
| "do it right" are still making glaring mistakes.
|
| > We've reviewed the content, and didn't find a violation of our
| policies, so no action will be taken at this time
|
| This is totally unacceptable. I wonder if there is some kind of
| legal liability that could apply here? Taken at face value, this
| message seems to indicate that a Twitter representative reviewed
| content and chose to not remove it which sounds a lot to me like
| knowingly and willfully distributing cp....
|
| (edited to fix spelling of "Parler")
| dragonwriter wrote:
| > This article is an important reminder
|
| The only fact being reported in this article is that someone
| made claims in a lawsuit.
|
| > This is totally unacceptable. I wonder if there is some kind
| of legal liability that could apply here?
|
| Yes. Both civil liability (under the fairly recent
| sex-trafficking exceptions to Section 230 protection) and criminal
| liability (section 230 has never applied to criminal liability)
| are possible, for the kinds of things claimed in the lawsuit.
|
| That civil liability is possible is, of course, almost
| certainly a factor in why the lawsuit was filed; while you can
| file a lawsuit where there is no available liability, it tends
| to be a waste of effort.
|
| Of course, one would also do well to take claims made in a
| lawsuit with some skepticism. If those didn't often turn out to
| be untrue, we wouldn't need courts nearly as much as we do.
| BryanBeshore wrote:
| Here's the lawsuit brought by the National Center on Sexual
| Exploitation:
|
| https://www.scribd.com/document/491614044/Doe-v-Twitter
| Hello71 wrote:
| It's worth noting that the NCSE (formerly known as Morality
| in Media) is completely against pornography of any type.
| stretchcat wrote:
| Why is that worth noting?
| trianglem wrote:
| Because if they are biased against pornography they have
| a motive to call things child pornography even when it's
| not.
| stretchcat wrote:
| Why would they waste their time in court bringing a
| lawsuit that would immediately get tossed? They'd hardly
| be able to conceal the age of a child from the court. The
| court will know who the child is, even though journalists
| will (of course!) not be told.
| edbob wrote:
| This is a pointless ad hominem that is completely
| irrelevant to this case. The court can easily verify
| whether the victim was a minor.
| toss1 wrote:
| Exactly, aside from the civil liability already being pursued
| in the reported lawsuit, there should also be criminal
| liability for people making those editorial judgements, as far
| up the management chain as it goes.
|
| This is not some algorithmic failure that no human saw - it was
| specifically examined by a human with managers setting policy,
| and deemed to be acceptable to distribute.
|
| At that point, they are willfully distributing the material,
| and should be held accountable.
|
| Yes, the result may be harsh. Ideally, there would be advance
| notice of the potential legal jeopardy, but ignorance of the
| law is never an excuse, and child porn distribution is widely
| known to be illegal. Standard procedure: pull in the workers and
| get them to flip on the managers, then repeat all the way up the
| chain as far as it goes.
|
| It should now be obvious to anyone that internet discussions
| are either moderated/edited, or will descend into toxic
| cesspools when any of a variety of bad actors are allowed to
| run unchecked.
|
| The hosts need to be responsible for their editing decisions,
| which are made at far larger scale than any newspaper or
| broadcaster.
| dragonwriter wrote:
| > This is not some algorithmic failure that no human saw -it
| was specifically examined by a human with managers setting
| policy, and deemed to be acceptable to distribute.
|
| Or, you know, the claims in the lawsuit are false.
|
| This wouldn't be the first time that happened.
| dragontamer wrote:
| Considering that this is the NYPost, it's quite possible. I
| don't really consider NYPost to be a very respectable
| institution.
| ghastmaster wrote:
| > I don't really consider NYPost to be a very respectable
| institution.
|
| Why is that?
| stretchcat wrote:
| Do you think the NYPost fabricated the existence of the
| lawsuit? As far as I can tell, the lawsuit really does
| exist.
|
| Given the nature of the lawsuit, concerning material that
| is illegal to view, I don't think any journalists,
| whether from a fishwrapper like the NYPost or a venerable
| institution like the NYTimes, would be able to verify the
| claims made by the lawsuit.
| ardy42 wrote:
| > Do you think the NYPost fabricated the existence of the
| lawsuit? As far as I can tell, the lawsuit really does
| exist.
|
| Straight-up fabrication is not the only way to disinform,
| and in fact it's a pretty shitty way to do it, since
| it means your disinformation collapses at the slightest
| scrutiny. To make fabrication effective, the fabrication
| needs to be laundered to give it credibility (see
| https://www.youtube.com/watch?v=tR_6dibpDfo), which takes
| time and effort.
|
| A more effective way to disinform is to pluck out a true
| story that's not representative and amplify or twist it.
| That will seem more credible to those who give it just a
| little scrutiny, even if it's just as wrong as a
| fabrication in some ways.
|
| In this case, if the NY Post has a bone to pick with
| Twitter, it could trawl through lawsuits against it until
| it found one that made scandalous claims, then report
| those as hard facts.
| stretchcat wrote:
| Suppose the NYPost has an axe to grind and went looking
| for a lawsuit that makes twitter look bad... so what? The
| lawsuit exists as they claim. That wouldn't mean they're
| lying, it would only mean they have a different idea of
| what constitutes newsworthiness than you. That wouldn't
| make it disinformation. Virtually every sentence in the
| article contains some variation of _"the suit alleges."_
| That the suit alleges these things _does_ seem to be hard
| fact. I see no disinformation here.
| ardy42 wrote:
| > Suppose the NYPost has an axe to grind and went looking
| for a lawsuit that makes twitter look bad... so what?
|
| Because that would be more like propaganda than
| journalism.
|
| > That wouldn't mean they're lying, it would only mean
| they have a different idea of what constitutes news
| worthiness than you. That wouldn't make it
| disinformation.
|
| Lying is only one of the _many_ ways you can mislead
| someone. My point was that lying is an inferior way to deceive
| compared with assembling true facts in a deceptive way. So
| the fact that the NY Post themselves didn't outright lie
| in this story does little to refute the idea that they're
| a disreputable paper.
| stretchcat wrote:
| There is no clear demarcation line between journalism and
| propaganda. Every journalist has biases, every last one,
| and every journalist aspires to report on matters they
| care about (e.g., _"want to grind an axe on"_, which is
| just a way of saying the same thing but with a derogatory
| connotation.)
|
| What matters is whether their reporting is factual or
| not. In this case, there seems to be little doubt that
| the reporting is factual; the reported lawsuit does exist
| and does make the reported claims.
|
| > _does little to refute the idea that they're a
| disreputable paper._
|
| I have no interest in refuting _obvious truths._
| ardy42 wrote:
| > What matters is whether their reporting is factual or
| not. In this case, there seems to be little doubt that
| the reporting is factual; the reported lawsuit does exist
| and does make the reported claims.
|
| That matters, but it isn't the only thing that matters.
| Again: the fact that the NY Post themselves didn't
| outright lie in this story does little to refute the idea
| that they're a disreputable paper. The way they choose
| facts to report and how they arrange them can be the real
| source of disrepute.
| stretchcat wrote:
| I am not disputing that the NYPost is disreputable; how many
| times do I need to say this? I am saying the deservedly poor
| reputation of the NYPost is irrelevant in this case
| because the reported facts are easy to independently
| verify. The reputation of a newspaper counts when their
| reputation is what we must rely on, which is not the case
| here.
|
| The NYPost's source is a public filing linked elsewhere
| in this discussion. If you think the NYPost has lied,
| point out the lie. If you think they left something
| important out, point it out. I'm guessing you cannot do
| either.
| ardy42 wrote:
| > I am saying the deservedly poor reputation of the
| NYPost is irrelevant in this case because the reported
| facts are easy to independently verify. The reputation of
| a newspaper counts when their reputation is what we must
| rely on, which is not the case here.
|
| That's where we disagree. You're saying the only job
| of a paper is to report true facts (or at least try its
| best). I'm saying its job is to report true facts _and
| true impressions_. True facts can be used to create false
| impressions, and the impression created often matters the
| most, especially when you know your readers aren't going
| to read your story like a careful lawyer.
| RandallBrown wrote:
| I think you're talking about Parler, not Parlor, which is a very
| different app.
| jlkuester7 wrote:
| Shoot, you are right. Fixed!
| IfOnlyYouKnew wrote:
| Maybe it's the headline that is misleading you, or it's a
| symptom of our times.
|
| In any case: it should be blatantly obvious that the Twitter
| representative did not know or agree that the person in the
| video was a minor. The insinuation that it's Twitter policy to
| distribute child pornography is laughable. They have absolutely
| nothing to gain from it.
|
| As to liability: Section 230 is about exactly that situation:
| trying to limit damaging material on your platform does not
| create any liability even if you fail. Because the alternative,
| where you either allow your platform to be flooded with
| swastikas and pornography or get sued for every single mistake
| you make, is unworkable.
| edbob wrote:
| > In any case: it should be blatantly obvious that the
| Twitter representative did not know or agree that the person
| in the video was a minor.
|
| The lawsuit alleges precisely that Twitter had ample evidence
| to know that he was a minor. How about we let the case play
| out instead of assuming innocence?
| jdxcode wrote:
| 230 does not protect against criminal liability:
| https://www.techdirt.com/articles/20200531/23325444617/hello...
| kenjackson wrote:
| The article is from the NY Post. This makes me question it
| right out the gate. And I just find it hard to believe that
| Twitter would want to keep something like this up, since it
| is illegal content. There is no incentive for them to do so,
| and the NY Post isn't the publication to dive into this at
| all.
| throwaway894345 wrote:
| > Because the alternative, where you either allow your
| platform to be flooded with swastikas and pornography or get
| sued for every single mistake you make, is unworkable.
|
| The current model isn't workable either. Social media
| networks and their ad-driven models have hollowed out our
| democracies by sowing outrage and division at every
| opportunity. If one believes "The Social Dilemma", these
| networks have a dial for tuning public opinion that can even
| steer election outcomes (if you subscribe to the "Russia used
| bots to hack the 2016 election" theory, then you must agree).
|
| Maybe we should rethink the role of social media networks.
| Perhaps they monopolize too much communication to trust them
| to curate which voices are amplified and which are
| suppressed. The discourse around this issue has been really
| strange, with the usual critics of corporate power rallying
| to defend social media giants and their rights and
| qualifications to curate such an enormous portion of our
| collective speech. Perhaps we should consider these networks
| to be more "dumb pipes" rather than "curators", and instead
| we should expect these networks to provide us with our own
| curation/moderation mechanisms. If there really is no
| workable social media model--that is, if they really can't
| deliver some net-positive social good (or at least some
| smaller net harm, like sex work, drugs/alcohol, tobacco,
| etc), then maybe we should regulate them out of existence?
| thereare5lights wrote:
| > it should be blatantly obvious that the Twitter
| representative did not know or agree that the person in the
| video was a minor.
|
| They were given government ID. This point is invalid.
| stretchcat wrote:
| > _In any case: it should be blatantly obvious that the
| Twitter representative did not know or agree that the person
| in the video was a minor_
|
| Or the moderator was [pick as many as you like]: Lazy,
| overworked, incompetent, or a scumbag.
| na85 wrote:
| >This is totally unacceptable.
|
| And yet not at all surprising. These huge companies have a
| reputation for being faceless because they skimp on staffing
| their support teams and until the so-called "tech reckoning"
| have operated with virtual impunity.
| kevwil wrote:
| Moderation seems to be a very slippery slope. IMHO there's no way
| to truly win; policies eventually trend toward either absolute
| free speech or blatant censorship. This is the critical flaw in
| social media, as either trend is problematic. "The only way to
| win the game is not to play."
| [deleted]
| phkahler wrote:
| This gets at the crux of Twitter's crappy stance. They want to act
| like a neutral party to avoid liability (like common carrier
| status of US telco), yet they also want to censor what and who
| they feel like (via their ToS and being a private company). Their
| stance is inconsistent and this type of case may lead to
| resolution of that. Unfortunately I predict they will just add a
| few illegal things to their ToS as being Twitter offenses.
| bigphishy wrote:
| Wow, I don't mean to issue an ad-hominem attack... but I do not
| consider anything from the NY Post legitimate. I'd even venture
| to say the NY Post is a tabloid owned by News Corporation, and
| I'm surprised that this is on Hacker News.
| 1_player wrote:
| Sadly, as more people flock to this site, it's becoming less
| Hacker News and more like Google News.
|
| That seems to be the vast majority of stories submitted by OP,
| and the clickbait and inflammatory titles tend to rise to the
| front page.
| prasadjoglekar wrote:
| The NY Post has a well known conservative bias, but that
| doesn't make their reporting illegitimate. You can disagree
| with their opinion pieces, but the factual reporting (in this
| case - that there's a lawsuit against Twitter) can be
| objectively checked.
|
| Also, as a NYC'er, my take is that a lot of their opinions are
| quite accurate and correct.
| tiagod wrote:
| I've had this happen to me on Facebook. Found a disgusting video
| involving children, reported it, got a response that there's
| nothing wrong with it. It's like the people reviewing this stuff
| have only a second to decide if the content should be deleted.
| [deleted]
| gamblor956 wrote:
| OTOH, I've reported posts and had them take them down within
| the hour...
|
| The issue seems to be that some posts are reviewed by Facebook
| staff in America, and others by Facebook staff in India. The
| American staff acts quickly and is very good about taking
| violations down. The Indian staff seems to be very laissez
| faire, due to the different cultural standards for content.
| kenbolton wrote:
| Are you notified of the location of the reviewer?
| trident5000 wrote:
| It's a bunch of slave-labor people in India who don't give a
| shit. It's an interesting dynamic that we have foreigners
| deciding what content Americans can and cannot see and can and
| cannot say.
| driverdan wrote:
| That's not true. FB has many US moderators, many outsourced
| to other companies.
| trident5000 wrote:
| I think the odds of US moderators being any sort of
| majority or anything close to that is slim to zero. The
| savings are astronomical and is the same reason customer
| support of all kinds, even IT help, is outsourced. US mods
| are most likely a minority and a face.
| trianglem wrote:
| Do you have any sources for those claims?
| lawnchair_larry wrote:
| Why say this? You're making things up and you happen to
| be wrong.
| scohesc wrote:
| I would assume content moderation makes Twitter
| absolutely nothing and actually just bleeds tens of
| millions of dollars annually.
|
| They'd do their best to outsource the work somewhere
| that's super cheap (India, etc.) or they'd bring in
| international talent on H-1B visas en masse, since I
| believe a not-insignificant amount of their wages is
| subsidized by the government.
| trident5000 wrote:
| Economics always prevails and that is on the side of my
| argument. It's also industry standard. Do you have a
| source for me being wrong?
| Bud wrote:
| If the user is wrong, please substantiate your
| accusation. As far as I know, they are absolutely right.
| If they're not right, where is your source?
| everdrive wrote:
| >That's not true. FB has many US moderators
|
| Who also have terrible jobs, and impossible work queues.
| trianglem wrote:
| India has a very low cost of living. Getting paid an average
| wage for the country you live in is not "slave" labor. If you
| use that term freely for things it doesn't apply to, people
| stop taking it as seriously.
| newsbinator wrote:
| After 3 or 4 of these rejections I stopped reporting.
| devwastaken wrote:
| If a person sees it at all. Twitter and Facebook are far too
| large to moderate effectively with humans. Most things are
| caught by automated image recognition, but that doesn't work
| for images not yet in the database.
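| The hash-database matching described above can be sketched as
| follows. This is a hedged illustration, not Twitter's or
| Facebook's actual pipeline: real systems use perceptual hashes
| (e.g. PhotoDNA) that survive re-encoding, whereas the exact
| SHA-256 lookup below is for illustration only, and all names and
| sample bytes are hypothetical.

```python
# Minimal sketch of hash-database content matching: keep a set of
# hashes of previously identified images and check each upload
# against it. Exact hashing catches only byte-identical re-uploads,
# which is exactly why novel images "not yet in the database" slip
# through, as the comment notes.
import hashlib


def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of raw image bytes."""
    return hashlib.sha256(data).hexdigest()


# Hypothetical database of hashes of previously identified material.
known_bad_hashes = {sha256_of(b"previously-identified-image-bytes")}


def should_block(upload: bytes) -> bool:
    # Exact-hash lookup against the known database.
    return sha256_of(upload) in known_bad_hashes


print(should_block(b"previously-identified-image-bytes"))  # True
print(should_block(b"some-new-image-bytes"))               # False
```

| A perceptual hash would instead compare distances between
| fingerprints, so slightly altered copies still match; the set
| lookup here only demonstrates the database idea.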
| matz1 wrote:
| That's understandable; what you consider disgusting may not be
| considered disgusting by other people.
| captainredbeard wrote:
| Well, those people with a different (lesser) definition of
| disgusting w.r.t. children are called pedophiles and they are
| bad.
| javajosh wrote:
| I remember a story about someone getting arrested for "child
| porn" going through an airport when some TSA agent found a video
| of a man's kids taking a bath on his phone. I'm not sure what
| happened in that case, but it has made me a) hesitant to take
| similar video and photos of my kids (even though bathtime is
| really fun and I want memories! My parents have _plenty_ of
| bathtime pics of me as a kid, and I'm glad!), and b) a lot more
| skeptical whenever I hear someone claim "child porn". Similar to
| how "registered sex offender" has lost a great deal of weight
| since apparently cities around the country apply this label to
| drunk people pissing on buildings.
|
| This doesn't invalidate ALL such labels, it only means that the
| label itself is NOT ENOUGH for me to assume I know what happened.
| In short, "the system" has lost its credibility with me. (The
| same applies to other labels like "convicted felon" or "ever
| arrested". The ease with which these labels are applied,
| especially as the result of a corrupt plea bargain culture, and a
| society where all LEOs have a "martial law bubble" around them,
| ruin the effect for me. And I wish it ruined the effect for more
| people.)
| colpabar wrote:
| It seems pretty clear you did not read the article, so here
|
| _The federal suit, filed Wednesday by the victim and his
| mother in the Northern District of California, alleges Twitter
| made money off the clips, which showed a 13-year-old engaged in
| sex acts and are a form of child sexual abuse material, or
| child porn, the suit states.
|
| The teen -- who is now 17 and lives in Florida -- is identified
| only as John Doe and was between 13 and 14 years old when sex
| traffickers, posing as a 16-year-old female classmate, started
| chatting with him on Snapchat, the suit alleges.
|
| Doe and the traffickers allegedly exchanged nude photos before
| the conversation turned to blackmail: If the teen didn't share
| more sexually graphic photos and videos, the explicit material
| he'd already sent would be shared with his "parents, coach,
| pastor" and others, the suit states.
|
| Doe, acting under duress, initially complied and sent videos of
| himself performing sex acts and was also told to include
| another child in his videos, which he did, the suit claims.
|
| Eventually, Doe blocked the traffickers and they stopped
| harassing him, but at some point in 2019, the videos surfaced
| on Twitter under two accounts that were known to share child
| sexual abuse material, court papers allege._
| javajosh wrote:
| Yes, it sounds like he was forced to make actual child
| porn...because he shared nudes. And THAT was so unacceptable
| to society that he felt forced to pay the blackmail.
|
| Oh yeah, I almost forgot about the other stuff I hear about,
| like teenagers getting charged with child porn for sending
| dick pics, or rape charges because two 16-year-olds had
| (consenting) sex.
|
| All of this is insane, and if not for the systemic insanity
| around sex in America, this kid would not have been
| coercible.
| stretchcat wrote:
| This is a ridiculous argument; blackmailers can and
| routinely do blackmail people over matters that are legal
| but embarrassing. This could have happened in any country.
| Similar things _do_ occur in every country.
| javajosh wrote:
| What counts as leverage depends on the time and place.
| Not too long ago you might have blackmailed someone for
| smoking weed back in the day. Or for being born out of
| wedlock. Or for being gay (well, that one is still potent
| in a lot of places). Kids being stupid is normal, and
| something tells me that if this happened in, e.g.,
| Amsterdam, the kid would have gone straight to his
| parents and the blackmailer would have gone to jail.
| rstrstsb wrote:
| There is still plenty of Hunter Biden child porn on twitter too.
|
| Why hasn't Hunter Biden been arrested yet? (cause his dad is the
| president)
|
| Why hasn't Jack Dorsey been arrested yet for lying under oath?
| rstrstsb wrote:
| why are you downvoting? Cause there is plenty of evidence of
| blue team's son RAPING CHILDREN!!!!!
| SeriousM wrote:
| Mods please
| macinjosh wrote:
| mommy, mommy, help me!!
| fsflover wrote:
| You can flag it yourself, too, if you click on the time of
| the post.
| Gunax wrote:
| Folks, let's assume that the Twitter mod saw this and did not
| classify it as child porn. How can a twitter mod decide if it's
| child porn (as opposed to just 'porn')?
|
| I am not sure there is a proper solution for this. How can they
| verify the age of an unknown person?
| dx87 wrote:
| They could just remove it if they can't tell, same way a
| bouncer will throw you out of a bar if you don't have ID.
|
| I don't know why people are giving social media companies a
| pass on what goes on just because their job is hard. Nobody is
| holding a gun to their head and telling them that they need to
| run an unmanageable platform, they wanted this many users.
| kenjackson wrote:
| Maybe they do remove it if they can't tell. We don't know.
| Maybe they contacted the source of the video and they
| confirmed the age of the actors. Then what?
|
| I didn't even know that Twitter had porn at all. It seems
| like it might be in their best interest to remove it
| altogether. There are sites that specialize in it, that I'd
| have to imagine are better sources for most people.
| kingo55 wrote:
| And consider the volume of false claims Twitter receives each
| day.
|
| If everyone reported tweets that were CSAM just to take down a
| random post they disagree with, I'm sure moderators would make
| the occasional false negative.
| 35fbe7d3d5b9 wrote:
| This is noted in the lawsuit: the CSAM reporting
| functionality isn't available in-app, but requires navigating
| to a separate webpage.
|
| The lawsuit claims that shows negligence on the part of
| Twitter for how they handle these reports, but I wonder if
| this shows Twitter takes it seriously: brigading and mass
| reporting happens constantly on the internet, so pushing that
| reporting functionality off the application increases the
| friction of falsely reporting something especially sensitive.
| jes wrote:
| We need effective regulation of these platforms.
|
| The regulation must be forward-looking. Trying to respond after a
| social media company has harmed the public, say by hosting
| illegal content, is not good enough. Social media companies
| cannot be allowed to say "Oh, sorry Congress, we made a mistake.
| We have now fixed it and it won't happen again."
|
| It might be that we need an agency like the FCC to regulate the
| internal operations of these companies and bring about true
| shared decision making.
|
| Attempts to exclude regulatory authorities from meetings should
| result in criminal charges and prison time for repeat offenders.
| throwawayosiu1 wrote:
| ah, maybe it's time we deplatform Twitter no?
|
| maybe it's tongue in cheek but in all honesty, I find it
| extremely hypocritical that the Ayatollah of Iran, disinformation
| news organizations (esp. those based in China) and now
| CP are fine on Twitter but god forbid some people on the right
| wing use the "#notmypresident" or "#learntocode" hashtags - both
| of which were extensively used by the left in 2016 without any
| repercussions whatsoever.
|
| I don't mind Twitter having its policies (in fact, I support it),
| but selective enforcement of said policies is the issue.
| phnofive wrote:
| The lawsuit digs into the details of the timeline - from initial
| report by the plaintiff to removal of the content took nine days
| (with the help of law enforcement) - which makes me wonder how
| much merit there is to the suit, Twitter's fumble
| notwithstanding.
| DanBC wrote:
| I have to say, I don't quite understand why services find it so
| difficult to remove this material. This has been a long-standing
| problem with Twitter. They've improved a bit, but there are still
| problems.
|
| Young people approach these platforms and say "here are some
| images of child sexual abuse that you're hosting. I know they're
| CSE because I'm the subject of the photos and I was <18 at the
| time". The platforms sometimes ignore them. Children are then
| stuck, not knowing how to get this "child porn"[1] taken down.
|
| We need to help young people understand what routes are then
| available to them.
|
| 1) Write to the legal department. Twitter does not make this
| easy. Their "contact us" form doesn't have a section for "I want
| to report CSE material". https://help.twitter.com/en/contact-us
| But the postal addresses are:
|
|     Twitter, Inc.
|     c/o Trust & Safety - Legal Policy
|     1355 Market Street, Suite 900
|     San Francisco, CA 94103
|
|     Twitter International Company
|     c/o Trust & Safety - Legal Policy
|     One Cumberland Place, Fenian Street
|     Dublin 2, D02 AX07, Ireland
|
| 2) Contact their local law enforcement office, and point them to
| this page (for Twitter, but all services should have something
| similar):
| https://help.twitter.com/en/rules-and-policies/twitter-law-e...
|
| 3) Contact IWF or CEOP. IWF will generate hashes of the images
| and Twitter will, eventually, use those hashes to remove the
| images. This will take some time. https://www.iwf.org.uk/
|
| [1] I'm only using that term because it's the common Google
| search term.
| tomjen3 wrote:
| If they know they are hosting CP, are they legally responsible
| for it?
| worldofmatthew wrote:
| I know some people cited Section 223, which allows for prison
| for up to two years and/or a fine.
| LatteLazy wrote:
| You can literally claim anything in a lawsuit. Let's wait for
| some evidence or a judgement or a reply?
| Bud wrote:
| So we're posting sensationalist, hyper-inflated claptrap from the
| Murdoch-rag NY Post on Hacker News now?
|
| Let's not, instead. If this is really a story, and if that
| headline is really justified, then a more reliable source can be
| found.
|
| Set phasers to flag and fire.
| edbob wrote:
| The Hunter Biden laptop story has been confirmed by witnesses
| and the cryptographic signatures on the email, while not a
| single shred of evidence that it's a "plant" has been found.
| This should give the NYPost a huge boost in credibility over
| the media outlets that falsely claim that it was a plant. This
| would (sadly) make the NYPost one of the more reputable media
| outlets that we have.
|
| Besides that, the lawsuit is readily available:
| https://www.scribd.com/document/491614044/Doe-v-Twitter
| throwawaysea wrote:
| I've noticed that Twitter is very arbitrary in terms of what they
| moderate and don't moderate. I've seen accounts for antifa groups
| or other groups engaged in criminality and violence several
| times, and no action has been taken on my reports. Invariably,
| the reports that aren't acted upon are ones involving groups that
| align with left-leaning political sentiments. This type of
| selective enforcement of rules is extremely unjust, and I am not
| at all surprised to see this inconsistent enforcement in this
| instance either.
| 35fbe7d3d5b9 wrote:
| Funny, a common refrain of those very accounts is "Twitter
| overmoderates left wing accounts and does nothing with right
| wing hate speech".
|
| I think this doesn't point to Twitter being arbitrary; instead,
| I think it points to Twitter doing absolutely nothing until its
| hand is forced.
| d33lio wrote:
| Even as someone who's glad Biden is now president - it's a bit
| unnerving to be living in a time where a service bans the sitting
| United States president without batting an eye but engages in
| legal battles to defend the distribution of child pornography on
| their "open platform".
|
| I guess people posting spicy takes on government happens to be
| more of an active risk to society than literal child predators?
| ketamine__ wrote:
| People are banned from HN all the time for being rude. Does the
| reason even matter though? Twitter can decide who uses its
| services and who doesn't as long as it doesn't violate existing
| laws.
| rosmax_1337 wrote:
| You shouldn't compare a small and niche community like HN to
| a website like Twitter.
|
| Twitter is the web equivalent of a public square, where
| everyone gets an opportunity at a speakers corner, HN is the
| web equivalent of a club house, or something private like
| that. There are laws governing public spaces all around the
| world, like town squares, where people are guaranteed rights
| to speak. Laws like these need to be adapted to fit the
| internet as well, asap IMO.
|
| As much as you might like to think that the town square
| analogy is incorrect - because the protocol HTTPS is the real
| town square, and Twitter is really just another (very big)
| club house - I also disagree with that, because of the
| nature of social media and platform monopoly. There will ever
| really only be one "youtube", one "facebook" and one
| "twitter", certainly nowadays when these social media
| platforms have grown to such an extent, covering the entire
| globe, let alone nations! This is because _the power of a
| social media platform lies in it being social_ i.e., the
| place where you find other people.
|
| The platforms on top, stay on top.
| ketamine__ wrote:
| A lot of Twitter users are moving their followers to
| Substack. Check out what Balaji is doing.
|
| The social network MeWe is growing rapidly. There is always
| room for competition, but the people complaining about
| censorship are defeatist.
| rosmax_1337 wrote:
| Not to fall into the category of being "defeatist" even
| more, but platforms that actually pose a threat to the
| established social media channels (like Parler did), get
| a very special kind of treatment by the "FAANG-cartel" as
| you might have noticed.
|
| The solution is political change, imo, regardless of
| all the problems that come with policing a multinational
| website like Twitter.
| ketamine__ wrote:
| I think the argument can be made MeWe and Substack
| actually pose more of a threat to the established order
| because they are mainstream.
|
| Parler isn't competing with Twitter or Facebook because
| the average person doesn't want to see a feed full of
| white supremacists discussing conspiracy theories.
| lawnchair_larry wrote:
| You're far more likely to see that on Twitter than you
| were on Parler.
| ketamine__ wrote:
| It's concentrated on Parler. Are there any liberal or
| moderate groups using their service?
|
| Parler's only redeeming quality is that it had all of the
| footage from the attack on the Capitol. It would have
| made a good honeypot but now it looks to be weaponized as
| a Russian disinformation project.
| SpicyLemonZest wrote:
| I don't really think that's a fair description of the story
| here. Twitter hasn't yet engaged in any legal battle; this
| lawsuit was filed just yesterday and they haven't responded.
| jeffbee wrote:
| They didn't ban Trump "without batting an eye"; they dickered
| about it for four entire years while he trampled all over their
| AUP, and they didn't bother banning him until he literally tried
| to incite a civil war. Even then, it's clear that the only
| reason they banned him is that it didn't work. If somehow
| Trump's ridiculous little revolt had succeeded in installing
| him as president again, Twitter would have kept him too.
| mansion7 wrote:
| Did you read the tweets they used as justification?
|
| A man simply saying "I'm not going to this event" allows them
| to suddenly become mind-readers and clairvoyants, able not
| only to read one's mind and determine what they REALLY meant,
| but also to predict how others in the future will interpret
| it. Even Nostradamus didn't possess such foresight.
|
| Yet actual, obvious child sexual exploitation is seemingly a
| low priority for them.
|
| It's just interesting, their priorities. But not exactly a
| mystery.
| anonAndOn wrote:
| Different take: they only banned him because he was headed
| out the door. Had Trump been re-elected, we likely would've
| seen 4 more years of dithering so as not to piss off a man
| who could make their lives very stressful.
| jeffbee wrote:
| I think that's the same take. Twitter waited until Trump
| was definitely deposed before they took action.
| draw_down wrote:
| Good thing they got Trump off of there, though! Whew.
| rendall wrote:
| It's a story so completely outside of my personal
| experience that I cannot evaluate it.
|
| In 14 years of holding a Twitter account, I've never seen
| anything even close to such things on Twitter, and wouldn't use
| the platform if I did. If I were to see child pornography on
| Twitter, I would immediately report it to law enforcement and
| close my account.
|
| Well, I hope that poor child wins justice.
| marcinzm wrote:
| Welcome to the filter bubble. What you see on twitter,
| facebook, google, youtube, etc. is going to be radically
| different than what someone else sees.
| fernandotakai wrote:
| i've been on twitter for the same time and last year was the
| first time i saw people talking about "map" which apparently
| means "minor attracted person"[0].
|
| while searching around i found that there's a LOT of twitter
| accounts basically spewing pedophilia-related/adjacent stuff. i
| reported every single one of them and... nothing was done. not
| a single one was removed.
|
| [0]https://en.wikipedia.org/wiki/Minor-attracted_person
| baby wrote:
| Write an article about it
| fernandotakai wrote:
| there are already articles about it.
|
| according to one of them, twitter changed their TOS to
| allow people to discuss their attraction to minors on their
| platform[0].
|
| people always talk about "dog whistles" -- imho this is a
| big one, at least for me. allowing pedophiles to discuss
| their attraction to minors openly on your platform is
| completely absurd.
|
| [0]https://thenextweb.com/socialmedia/2020/01/09/twitter-
| lets-p...
| stickfigure wrote:
| This reminds me of an old episode of This American Life:
|
| https://www.thisamericanlife.org/522/tarred-and-feathered
|
| _There's one group of people that is universally tarred
| and feathered in the United States and most of the world.
| We never hear from them, because they can't identify
| themselves without putting their livelihoods and
| reputations at risk. That group is pedophiles. It turns
| out lots of them desperately want help, but because it's
| so hard to talk about their situation it's almost
| impossible for them to find it. Reporter Luke Malone
| spent a year and a half talking to people in this
| situation, and he has this story about one of them._
|
| I remember it being a pretty remarkable episode, and kind
| of heartbreaking. More insightful and constructive than
| the usual tone of moral outrage that the subject is
| treated with.
|
| From reading your article, it seems that Twitter's policy
| changed to allow discussion of the subject with the
| caveat that pedophilia could not be promoted or
| glorified. That seems pretty reasonable, and isn't a dog-
| whistle for anything.
| throw_m239339 wrote:
| Start reading a single (non porn related) message from one of
| these creeps and your Twitter timeline will be inundated with
| illegal porn content in no time. Twitter is absolutely rife
| with that horrible stuff, and Twitter certainly does the
| minimum to remove any of that, as demonstrated by that article.
|
| Why even give the benefit of the doubt? Someone reports CP,
| remove it, period. Of course, Twitter makes money off it...
|
| People claim Twitter can't possibly moderate content at scale,
| except that Twitter makes money at that same scale. Social
| media can't have it both ways, especially when it comes to CP.
| isochronous wrote:
| "People claim Twitter can't possibly moderate content at
| scale, except that Twitter makes money at that same scale."
|
| I'm sorry, but is that supposed to be a logical argument?
| Because it doesn't actually make any sense. Twitter is a
| platform that allows pretty much anyone with an internet
| connection to post content. There were, on average, 500
| million tweets posted per day last year.
|
| So on one side you have the set of potential content
| creators, churning out half a billion tweets per day, and
| that number will almost certainly continue to steadily
| increase. So, as a company with a set amount of income, and
| who is beholden to its shareholders, what's your plan to
| moderate 500 million tweets per day while still turning a
| reasonable profit?
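The scale objection above can be made concrete with a rough back-of-envelope calculation. The 500 million tweets/day figure comes from the comment; the per-tweet review time and shift length below are illustrative assumptions, not real figures:

```python
# Back-of-envelope: staffing needed to manually review every tweet.
# All per-tweet and per-shift numbers are illustrative assumptions.

TWEETS_PER_DAY = 500_000_000      # average daily tweet volume (cited above)
SECONDS_PER_REVIEW = 5            # assumed human review time per tweet
SHIFT_SECONDS = 8 * 60 * 60       # one 8-hour moderator shift

total_review_seconds = TWEETS_PER_DAY * SECONDS_PER_REVIEW
moderator_shifts_per_day = total_review_seconds / SHIFT_SECONDS
print(f"{moderator_shifts_per_day:,.0f} moderator-shifts per day")
```

Under these assumptions, fully manual review would take tens of thousands of full-time moderators every single day, which is the core of the "can't moderate at scale" argument; real platforms instead rely on automated filtering plus human review of flagged content.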
| mcguire wrote:
| Under what rules is Twitter required to turn a reasonable
| profit? (What is a "reasonable" profit?)
| isochronous wrote:
| Are you familiar with how publicly traded corporations
| work?
| briandear wrote:
| And yet the App Store hasn't banned Twitter as they did
| Parler. If the media started reporting about pedophilic
| content on Twitter and that Twitter's TOS explicitly
| allowed pedophiles to discuss their attractions, would the
| tech gatekeepers continue to allow Twitter? Because this
| stuff isn't a secret, but Twitter hasn't been banned which
| makes it pretty clear that the Parler and related bans were
| politically motivated rather than protecting people from
| harmful content.
|
| But we let Twitter get away with these things because the
| Blue Checks are mostly leftist or hard-left politically and
| they'd lose their minds if Twitter were banned from app
| stores.
| isochronous wrote:
| That's because twitter actually has content moderation
| policies in place that they do their best to apply.
| They're obviously not perfect, but again, the whole
| reason the app store banned parler was that they had no
| workable moderation plan in place. Twitter does.
| throw_m239339 wrote:
| > I'm sorry, but is that supposed to be a logical argument?
|
| Yes, because these social media companies can't have it both
| ways; I already addressed that. If you make money at scale,
| you are responsible for moderation at scale, period.
|
| Your argument is just apologizing for Twitter's bad
| behaviour when it comes to illegal content moderation.
| al_chemist wrote:
| > There were, on average, 500 million tweets posted per day
| last year.
|
| How many of them had photos and were reported as crimes?
| 1_2__4 wrote:
| I guess in 2021, HNers think the NY Post is a legitimate media
| outlet straining for objectivity, rather than the tabloid rag
| that it is. And I guess we'll all have a highly partisan
| discussion over it despite the Post obviously having a huge axe
| to grind against Twitter, further calling into question its
| "reporting".
| lawnchair_larry wrote:
| So you're saying that NY Post fabricated the lawsuit that has
| been filed in court? Having an axe to grind doesn't turn a fact
| into a falsehood.
| the_drunkard wrote:
| The victim actually took steps to have content removed and
| Twitter failed to do so (initially). I wonder if payment vendors
| will give Twitter the "Pornhub treatment" and de-platform their
| access to financial services.
|
| > Finally on Jan. 28, Twitter replied to Doe and said they
| wouldn't be taking down the material, which had already racked up
| over 167,000 views and 2,223 retweets, the suit states.
|
| > "Thanks for reaching out. We've reviewed the content, and
| didn't find a violation of our policies, so no action will be
| taken at this time," the response reads, according to the
| lawsuit.
| d1zzy wrote:
| I hope this doesn't result in Twitter banning all adult content,
| which seems to be how all platforms handle pedophilia
| lawsuits/legal pressure these days.
| Nextgrid wrote:
| The whole deplatforming thing is mostly virtue-signalling.
|
| PornHub is an easy target and the people & companies involved
| in its deplatforming do not need it in any way (at least not in
| a way they would publicly admit - I'm sure some of them do
| consume its content) so it's an easy call to make and can
| gather significant support from certain conservative and
| religious circles.
|
| Twitter on the other hand is near-essential to most brands and
| media outlets, so the virtue-signalling benefit from
| deplatforming it is minuscule compared to the loss (the people
| most vocal in support of PornHub's deplatforming would be the
| first ones out of a job), not to mention that virtue-signalling
| only works if you have a place to brag about your action - if
| you deplatform Twitter, where are you going to brag about it?
| trianglem wrote:
| Is virtue signaling being a good person through your actions?
| psyc wrote:
| It can appear that way, sure. But it usually means
| broadcasting your 'goodness' visibly, to ingratiate
| yourself with certain people. It can also mean seeking to
| establish one's moral superiority, creating a power
| imbalance for offensive or defensive purposes.
| mansion7 wrote:
| Not in my experience. That's why the distinction is made
| between "virtue signalling" and "actually being virtuous".
|
| It's more akin to showing up to a date wearing fancy
| clothes and driving an expensive but borrowed car:
| displaying the symbols of wealth while possessing none,
| to fool an audience.
|
| Many of those I've seen signalling their virtue the loudest
| possess the least.
|
| The reason for the growth and awareness of the phenomenon?
| In prior times, one had to perform virtuous actions to appear
| virtuous. Now, it costs nothing, takes no effort, and
| carries no risk; it's as simple as typing 140 characters
| into a phone screen.
| selfishgene wrote:
| Think of what Jeffrey Epstein was obtaining from MIT
| president Rafael Reif in the form of a personally signed
| thank-you note in exchange for donations to the MIT Media
| Lab that started back in 2002 with co-founder and accused
| co-pedophile Marvin Minsky that continued after his
| conviction on child rape charges in 2008 while under
| investigation by the FBI for violations of the Mann Act
| during "Operation Leap Year."
|
| Epstein's donations to science were a form of "virtue
| signaling" designed to help him evade prosecution on
| federal racketeering charges for the sex-trafficking of
| minors.
|
| The guy had never earned so much as an undergraduate
| degree in science.
| fakedang wrote:
| Unfortunately, mainstream media won't pick it up, nor will the
| story gain traction, because it was published in the NY Post
| (if it is true).
|
| Edit:- to the folks down voting me, please show me some major
| outlets (NYT, BBC, etc) reporting on this. As far as we are
| concerned, this kind of stuff _should_ be news.
| isochronous wrote:
| There's more than one explanation as to why "mainstream
| media" might not run a story. The NY Post isn't exactly known
| as a bastion of credible journalism. That's why they ran the
| Hunter Biden laptop story when everyone else passed on it
| because they couldn't verify it.
| briandear wrote:
| Couldn't verify? Or refused to try?
| Bud wrote:
| Tried, hard, and found that it was not substantive.
| aerosmile wrote:
| I agree with the overall sentiment of your post, but using
| the Hunter Biden laptop story as an example is an
| unfortunate choice. There's a popular view that politics
| had a lot to do with who picked up the story and who
| didn't, as opposed to just the strength of the case itself.
| I would recommend picking a less controversial example that
| highlights the shortcomings of NY Post's journalism, such
| as their reporting on the Boston Bombers [1].
|
| [1] https://archives.cjr.org/the_audit/the_new_york_posts_d
| isgra...
| Applejinx wrote:
| Or, it could be a garbage story manufactured by liars,
| implying that of all the outlets not running it, there's
| a chance they had looked into it and decided it was a
| garbage story manufactured by liars.
|
| Sometimes a thing that fails, fails because it's bad or
| disingenuous or both. It's possible this is a garbage
| suit that won't go anywhere because it's a garbage suit.
| isochronous wrote:
| Yeah, there are a lot of "popular views" that are
| complete and utter nonsense
| Bud wrote:
| Ahem. You mean _alleged_ (by Rudy Giuliani, the least
| credible source in the continental United States)
| "Hunter Biden" laptop.
| shakna wrote:
| > Edit:- to the folks down voting me, please show me some
| major outlets (NYT, BBC, etc) reporting on this. As far as we
| are concerned, this kind of stuff should be news.
|
| The story is only a few hours old. "Mainstream" media often
| requires a level of verification that may take a few more
| days before you see their stories happen.
| splaytreemap wrote:
| The mainstream media won't pick this up because they like
| Twitter. If Facebook had done this, WaPo and CNN would be all
| over it.
| Bud wrote:
| Real media don't make decisions on whether to publish based
| on whether something also appears in the NY Post. That's a
| ludicrous assertion.
|
| What's actually happening: Murdoch rags like the Post have
| zero editorial standards, whereas others are attempting to
| actually vet this story before running it.
| newbie578 wrote:
| Now then, I expect swift retribution from other companies to do
| the "right" thing and banish Twitter from our reach.
|
| Remove them from Google searches, de-platform them from the App
| Store and cut their access to hosting services.
|
| They have failed to moderate their platform; in fact, in this
| case it seems they did the contrary, so it is only just that
| they get their share of the punishment.
| isochronous wrote:
| Truly a disingenuous argument.
|
| Parler didn't get TOS'd because they had things fall through
| their moderation process; they got TOS'd because they did not
| have a workable moderation process in place, period. Twitter
| obviously does make mistakes - both AIs and humans are fallible
| - but they have about as good a moderation process in place as
| it's possible to have with the amount of message traffic they
| process.
|
| Also, I'd like to point out that the only evidence WE'VE seen
| of this so far are a few claims made in a lawsuit.
| Sohcahtoa82 wrote:
| The lack of moderation on Parler was considered a _feature_.
|
| Except it actually did have moderation, but it was leftist
| voices that were silenced.
| [deleted]
| supercanuck wrote:
| This all sounds reasonable to me. Seems like the victim's first
| recourse should be through the law, not Twitter.
|
| >but the tech giant failed to do anything about it until a
| federal law enforcement officer got involved, the suit states.
|
| >A support agent followed up and asked for a copy of Doe's ID so
| they could prove it was him and after the teen complied, there
| was no response for a week, the family claims.
|
| >"Only after this take-down demand from a federal agent did
| Twitter suspend the user accounts that were distributing the CSAM
| and report the CSAM to the National Center on Missing and
| Exploited Children," states the suit, filed by the National
| Center on Sexual Exploitation and two law firms.
|
| EDIT: Here is the timeline as far as I can tell.
|
| Dec 25, 2019: John Doe becomes aware of content on Twitter.
|
| January 2020: John Doe and family report the content for
| breaking Twitter policies.
|
| January 28, 2020: Twitter doesn't think the content breaks its
| policies.
|
| January 30, 2020: Law enforcement contacts Twitter to remove
| the content. Content is removed.
| jlkuester7 wrote:
| I am confused. What part of "We've reviewed the content, and
| didn't find a violation of our policies, so no action will be
| taken at this time" seems reasonable? Personally, this kind of
| content moderation seems the most important, and it is absurd
| that Twitter should wait to remove this content until they are
| contacted by law enforcement.
|
| Certainly the best thing might have been a "both/and" approach
| where the victim contacts both Twitter and law enforcement
| without waiting to hear back from either. But, contacting
| Twitter directly should be the fastest way to get cp removed
| from their platform....
| [deleted]
| violetgarden wrote:
| If it was me, I'd have probably gone straight to Twitter too.
| My thought would be that they're hosting the content, so they'd
| probably be able to get it removed the fastest. It also
| wouldn't occur to me that an agent wouldn't find CP to violate
| their terms. I'm surprised that Twitter didn't take it down as
| a CYA precaution while they verified whether or not it was CP.
| [deleted]
___________________________________________________________________
(page generated 2021-01-21 23:01 UTC)