[HN Gopher] Judges rule Big Tech's free ride on Section 230 is over
___________________________________________________________________
Judges rule Big Tech's free ride on Section 230 is over
Author : eatonphil
Score : 273 points
Date : 2024-08-29 15:33 UTC (7 hours ago)
(HTM) web link (www.thebignewsletter.com)
(TXT) w3m dump (www.thebignewsletter.com)
| Devasta wrote:
| This could result in the total destruction of social media sites.
| Facebook, TikTok, Youtube, Twitter, hell even Linkedin cannot
| possibly survive if they have to take responsibility for what
| users post.
|
| Excellent news, frankly.
| thephyber wrote:
| I don't understand how people can be so confident that this
| will only lead to good things.
|
| First, this seems like courts directly overruling the explicit
| wishes of Congress. As much as Congress critters complain about
| CDA Sec 230, they can't agree on any improvements. Judges
| throwing a wrench into it won't improve it; they will only cause
| more uncertainty.
|
| Not liking what social media has done to people doesn't seem
| like a good reason to potentially destroy the entire corpus of
| videos created on YouTube.
| gizmo686 wrote:
| Congress did not anticipate the type of algorithmic curation
| that the modern internet is built on. At the time, if you
| were to hire someone to create a daily list of suggested
| reading, that list would not be subject to 230 protections.
| However, with the rise of algorithmic media, that is
| precisely what modern social media companies have been doing.
| Devasta wrote:
| Well, if we consider the various social media sites:
|
| Meta - Helped facilitate multiple ethnic cleansings.
|
| Twitter - Now a site run by white supremacists for white
| supremacists.
|
| Youtube - Provides platforms to Matt Walsh, Ben Shapiro and a
| whole constellation of conspiracy theorist nonsense.
|
| Reddit - Initially grew its userbase through hosting of
| softcore CP, one of the biggest pro-ana sites on the web and
| a myriad of smaller but no less vile subreddits. Even if they
| try to put on a respectable mask now, it's still a cesspit.
|
| Linkedin - Somehow has the least well-adjusted userbase of
| them all, its destruction would do its users a kindness.
|
| My opinion of social media goes far and beyond what anyone
| could consider "not liking".
|
| In any case, it would mean that those videos would have to be
| self-hosted and published, we'd see an en masse return of
| websites like CollegeHumor and Cracked and the like, albeit
| without the comments switched on.
| falcolas wrote:
| YouTube and Facebook were also the original hosts of the
| Blackout trend videos and pictures, as I recall.
| bryanlarsen wrote:
| No, 230 is not overturned.
|
| The original video is still the original poster's comment,
| and thus still 230 protected. If the kid searched
| specifically for the video and found it, TikTok would have
| been safe.
|
| However, TikTok's decision to show the video to the child is
| TikTok's speech, and TikTok is liable for that decision.
|
| https://news.ycombinator.com/item?id=41392710
| falcolas wrote:
| If the child hears the term "blackout" and searches for it
| on TikTok and reaches the same video, is that TikTok's
| speech - fault - as well? TikTok used an algorithm to sort
| search results, after all.
| preciousoo wrote:
| I think the third sentence of the comment you're replying
| to answers that
| falcolas wrote:
| So you believe that presenting the results (especially if
| you filter on something like 'relevance') of a search now
| makes the website liable?
|
| That's going to be hell for Google. Well, maybe not; they
| have plenty of decent lawyers.
| preciousoo wrote:
| I'm not sure you read the sentence in question correctly
| falcolas wrote:
| > However, TikTok's decision to show the video to the
| child is TikTok's speech, and TikTok is liable for that
| decision.
|
| How is my interpretation incorrect, please? TikTok (or
| any other website like Google) can show a video to a
| child in any number of ways - all of which could be
| considered to be their speech.
| supernewton wrote:
| The third sentence is "If the kid searched specifically
| for the video and found it, TikTok would have been safe."
| falcolas wrote:
| Aah, I counted paragraphs - repeatedly - for some reason.
| That's my bad.
|
| That said, this is a statement completely unsubstantiated
| in the original post or in the post that it links to, or
| the decision in TFA. It's the poster's opinion stated as
| if it were a fact or a part of the Judge's ruling.
| bryanlarsen wrote:
| You're right, I did jump to that conclusion. It turns out
| it was the correct conclusion, although I definitely
| shouldn't have said it.
|
| https://news.ycombinator.com/item?id=41394465
| ndiddy wrote:
| From page 11 of the decision:
|
| "We reach this conclusion specifically because TikTok's
| promotion of a Blackout Challenge video on Nylah's FYP
| was not contingent upon any specific user input. Had
| Nylah viewed a Blackout Challenge video through TikTok's
| search function, rather than through her FYP, then TikTok
| may be viewed more like a repository of third-party
| content than an affirmative promoter of such content."
| karaterobot wrote:
| The person you're responding to didn't say they were
| confident about anything, they said (cynically, it seems to
| me) that it _could_ lead to the end of many social media
| sites, and that'd be a good thing in their opinion.
|
| This is a pedantic thing to point out, but I do it because
| the comment has been downvoted, and the top response to it
| seems to misunderstand it, so it's possible others did too.
| Mistletoe wrote:
| The return of the self-hosted, blog-type internet where we go to
| more than 7 websites? One can dream. Where someone needs an IQ
| over 70 to post every thought in their head to the universe?
| Yes that's a world I'd love to return to.
| falcolas wrote:
| Nah, ISPs (and webhosts) are protected by Section 230 as
| well, and they're likely to drift into lawyers' sights too -
| intentionally or unintentionally.
| krapp wrote:
| >Where someone needs an IQ over 70 to post every thought in
| their head to the universe? Yes that's a world I'd love to
| return to.
|
| I remember the internet pre social media but I don't exactly
| remember it being filled with the sparkling wit of genius.
|
| The internet is supposed to belong to everyone; it wasn't
| meant to be a playground only for a few nerds. It's really
| sad that hacker culture has gotten this angry and elitist. It
| means no one will ever create anything with as much
| egalitarian potential as the internet again.
| bentley wrote:
| But it's more likely to go the other way around: the big sites
| with their expensive legal teams will learn how to thread the
| needle to remain compliant with the law, probably by
| oppressively moderating and restricting user content even more
| than they already do, while hosting independent sites and
| forums with any sort of user-submitted content will become
| completely untenable due to the hammer of liability.
| WCSTombs wrote:
| There's nothing in the article about making the social media
| sites liable for what their users post. However, they're made
| liable for how they recommend content to their users, at least
| in certain cases.
| krapp wrote:
| Negative externalities aside, social media has been the most
| revolutionary and transformative paradigm shift in mass
| communication and culture since possibly the invention of the
| telegraph. Yes, something that provides real value to many
| people would be lost if all of that were torn asunder.
| skydhash wrote:
| You're missing radio and TV. Social media mostly gives
| everyone a megaphone, with the platform in control of the
| volume.
| mikewarot wrote:
| What is likely to happen is that Government will lean on
| "friendly" platforms that cooperate in order to do political
| things that should be illegal, in exchange for looking the
| other way on things the government should stop. This is the
| conclusion I came to after watching Bryan Lunduke's reporting
| on the recent telegram arrest.[1]
|
| [1] https://www.youtube.com/watch?v=7wm-Vv1kRk8
| tboyd47 wrote:
| Fantastic write-up. The author appears to be making more than a
| few assumptions about how this will play out, but I share his
| enthusiasm for the end of the "lawless no-man's-land" (as he put
| it) era of the internet. It comes at a great time too, as we're
| all eagerly awaiting the AI-generated content apocalypse. Just
| switch one apocalypse for a kinder, more human-friendly one.
|
| > So what happens going forward? Well we're going to have to
| start thinking about what a world without this expansive reading
| of Section 230 looks like.
|
| There was an internet before the CDA. From what I remember, it
| was actually pretty rad. There can be an internet after, too. Who
| knows what it would look like. Maybe it will be a lot less
| crowded, less toxic, less triggering, and less addictive without
| these gigantic megacorps spending beaucoup dollars to light up our
| amygdalas with nonsense all day.
| tboyd47 wrote:
| I read the decision. ->
| https://cases.justia.com/federal/appellate-courts/ca3/22-306...
|
| Judge Matey's basic point of contention is that Section 230
| does not provide immunity for any of TikTok's actions except
| "hosting" the blackout challenge video on its server.
|
| Defining it in this way may lead to a tricky technical problem
| for the courts to solve... Having worked in web development, I
| understand "hosting" to mean the act of storing files on a computer
| somewhere. That's it. Is that how the courts will understand
| it? Or does their definition of hosting include acts that I
| would call serving, caching, indexing, linking, formatting, and
| rendering? If publishers are liable for even some of those
| acts, then this takes us to a very different place from where
| we were in 1995. Interesting times ahead for the industry.
| itsdrewmiller wrote:
| You're reading it too literally here - the CDA applies to:
|
| >(2) Interactive computer service The term "interactive
| computer service" means any information service, system, or
| access software provider that provides or enables computer
| access by multiple users to a computer server, including
| specifically a service or system that provides access to the
| Internet and such systems operated or services offered by
| libraries or educational institutions.
| tboyd47 wrote:
| What definition of "hosting" do you think the courts would
| apply instead of the technical one?
| jen20 wrote:
| I'd imagine one that reasonable people would understand
| to be the meaning. If a "web hosting" company told me
| they only stored things on a server with no way to serve
| it to users, I'd laugh them out the room.
| tboyd47 wrote:
| Good point
| itsdrewmiller wrote:
| "hosting" isn't actually used in the text of the relevant
| law - it's only shorthand in the decision. If they want
| to know what the CDA exempts they would read the CDA
| along with caselaw specifically interpreting it.
| tboyd47 wrote:
| True
| chucke1992 wrote:
| So basically closer and closer to governmental control over
| social networks. Seems like a global trend everywhere.
| Governments will define the rules by which communication services
| (and social networks) should operate.
| whatshisface wrote:
| Given that the alternative was public control over governments,
| I guess it's inevitable that this would become a worldwide
| civil rights battle.
| zerodensity wrote:
| What does public control over governments mean?
| whatshisface wrote:
| It means that the process of assimilating new information,
| coming to conclusions, and deciding what a nation should do
| is carried out in the minds of the public, not in the
| offices of relatively small groups who decide what they
| want the government to do, figure out what conclusions
| would support it, and then make sure the public only
| assimilates information that would lead them to such
| conclusions.
| titusjohnson wrote:
| Is it really adding governmental control, or is it removing a
| governmental control? From my perspective Section 230 was
| controlling me, a private citizen, by saying "you cannot touch
| these entities"
| passwordoops wrote:
| How is an elected government with checks and balances worse
| than a faceless corporation?
| genocidicbunny wrote:
| The government tends to have a monopoly on violence, which is
| quite the difference. A faceless corporation will have a
| harder time fining you, garnishing your wages, charging you
| with criminal acts. (For now at least...)
| lcnPylGDnU4H9OF wrote:
| Conversely, the US government in particular will have a
| harder time with bans (first amendment), shadow bans (sixth
| amendment), hiding details about their recommendation
| algorithms (FOIA). The "checks and balances" part is
| important.
| mrguyorama wrote:
| >The government tends to have a monopoly on violence
|
| They don't literally, as can be seen by that guy who got
| roughed up by the Pinkertons for the horror of accidentally
| being sent a Magic card he shouldn't have been.
|
| Nobody went to jail for that. So corporations have at least
| as much power over your life as the government, and you
| don't get to vote out corporations.
|
| Tell me, how do I "choose a different company" with, for
| example, Experian, who keeps losing my private info,
| refuses to assign me a valid credit score despite having a
| robust financial history, and can legally ruin my life?
| aidenn0 wrote:
| > They don't literally, as can be seen by that guy who
| got roughed up by the Pinkertons for the horror of
| accidentally being sent a Magic card he shouldn't have
| been.
|
| Source for that?
|
| I found [1] which sounds like intimidation; maybe a case
| for assault depending on how they "frightened his wife"
| but nothing about potential battery, which "roughed up"
| would seem to imply. The Pinkertons do enough shady stuff
| that there's not a need to exaggerate what they do.
|
| 1: https://www.polygon.com/23695923/mtg-aftermath-
| pinkerton-rai...
| bentley wrote:
| A faceless corporation can't throw me in jail for hosting an
| indie web forum.
| mikewarot wrote:
| A faceless corporation could be encouraged to use its
| algorithm for profit in a way that gets you killed... as
| was the main point of the article.
| krapp wrote:
| That's far more abstract than sending men with guns to
| your house.
| falcolas wrote:
| That can also be done today by way of the government (at
| least in the US): swatting.
|
| To be a bit cliched, there's rather a lot of
| inattention and time involved in letting a child kill
| themselves after watching a video.
| tedunangst wrote:
| So the theory is the girl in question was going to start
| competing with TikTok, so they showed a suicide inducing
| video to silence her?
| nradov wrote:
| True, but this particular case and Section 230 are only
| about _civil_ liability. Regardless of the final outcome
| after the inevitable appeals, no one will go to jail. At
| most they'll have to pay damages.
| falcolas wrote:
| > no one will go to jail
|
| Did you know that there has been a homeowner jailed for
| breaking his HOA's rules about lawn maintenance?
|
| The chances are good that someone will go to jail.
| nradov wrote:
| I don't know that because it's obviously false. If
| someone was jailed in relation to such a case then it was
| because they did something way beyond violating the HOA
| CC&Rs, such as assaulting an HOA employee or refusing to
| comply with a court order. HOAs have no police powers and
| private criminal prosecutions haven't been allowed in any
| US state for many years.
|
| Citation needed.
| falcolas wrote:
| Google is your friend. Sorry to be so trite, but there
| are literally dozens upon dozens of sources.
|
| One such example happened in 2008. The man's name is
| "Joseph Prudente", and he was jailed because he could not
| pay the HOA fine for a brown lawn. Yes, there was a judge
| hitting Joseph Prudente with a "contempt of court" to
| land him in jail (with an end date of "the lawn is fixed
| or the fine is paid"), but his only "crime" was ever
| being too poor to maintain his lawn to the HOA's
| standards.
|
| > "It's a sad situation," says [HOA] board president Bob
| Ryan. "But in the end, I have to say he brought it upon
| himself."
| nradov wrote:
| It's not my job to do your legal research for you and
| you're misrepresenting the facts of the case.
|
| As I expected, Mr. Prudente wasn't jailed for violating a
| HOA rule but rather for refusing to comply with a regular
| court order. It's a tragic situation and I sympathize
| with the defendant but when someone buys property in an
| HOA they agree to comply with the CC&R. If they
| subsequently lack the financial means to comply then they
| have the option of selling the property, or of filing
| bankruptcy which would at least delay most collections
| activities. HOAs are not charities, and poverty is not a
| legally valid reason for failing to meet contractual
| obligations.
| falcolas wrote:
| So, having a bad lawn is ultimately worse than being
| convicted of a crime, maybe even of killing someone,
| since there's no sentence. There's no appeal. There's no
| concept of "doing your time". Your lawn goes brown, and
| you can be put in jail forever because they got a court
| order which makes it all perfectly legal.
|
| > It's not my job to do your legal research for you and
| you're misrepresenting the facts of the case.
|
| So, since it's not your job, you're happy to be ignorant
| of what can be found with a simple Google search? It's
| not looking up legal precedent or finding a section in
| the reams of law - it's a well reported and repeated
| story.
|
| And let's be honest with each other - while by the letter
| of the law he was put into jail for failing to fulfill a
| court order, in practice he was put into jail for having
| a bad lawn. I'll go so far as to assert that the bits in
| between don't really matter, since the failure to
| maintain the lawn led directly to being in jail until
| the lawn was fixed.
|
| So no, we don't have a de jure debtor's prison. But we do
| have a de facto debtor's prison.
| dehrmann wrote:
| I can sue the corporation. I can start a competing
| corporation.
|
| Elected governments also aren't as free as you'd think. Two
| parties control 99% of US politics. Suppose I'm not a fan of
| trade wars; both parties are in favor of them right now.
| pixl97 wrote:
| >I can sue the corporation. I can start a competing
| corporation.
|
| Ah, the libertarian way.
|
| I, earning $40,000 a year, will take on the corporate giant
| that has a multimillion dollar legal budget and 30 full
| time lawyers and win... I know, I saw it in a movie once.
|
| The law books are filled with story after story of
| corporations doing fully illegal shit, then using money to
| delay it in court for decades... then laughably getting a
| tiny fine that represents less than 1% of the profits.
|
| TANSTAAFL.
| kmeisthax wrote:
| Big Tech _is_ a government, we just call it a corporation.
| matwood wrote:
| And it's unelected.
| passwordoops wrote:
| That's what most people miss
| gspencley wrote:
| Government is force. It is laws, police, courts and the
| ability to seriously screw up your life if it chooses.
|
| A corporation might have "power" in an economic sense. It
| might have a significant presence in the marketplace.
| That presence might pressure or influence you in certain ways
| that you would prefer it not, such as the fact that all of
| your friends and family are customers/users of that faceless
| corporation.
|
| But what the corporation cannot do is put you in jail, seize
| your assets, prevent you from starting a business, dictate
| what you can or can't do with your home etc.
|
| Government is a necessary good. I'm no anarchist. But
| government is far more of a potential threat to liberty than
| the most "powerful" corporation could ever be.
| ohashi wrote:
| https://en.wikipedia.org/wiki/Banana_Wars#American_fruit_co
| m...
|
| You sure?
| em-bee wrote:
| _But what the corporation cannot do is put you in jail,
| seize your assets, prevent you from starting a business,
| dictate what you can or can't do with your home etc._
|
| a corporation can "put me in jail" for copyright
| violations, accuse me of criminal conduct (happened in the
| UK, took them years to fix), seize my money (paypal, etc),
| destroy my business (amazon, google)...
|
| _But government is far more of a potential threat to
| liberty than the most "powerful" corporation could ever
| be._
|
| you (in the US) should vote for a better government. i'll
| trust my government to protect my liberty over most
| corporations any day.
| srackey wrote:
| No, they can appeal to the state to get _them_ to do it.
|
| But you still think parliament actually controls the
| government as opposed to Whitehall, so I understand why
| this may be a little intellectually challenging for you.
| lostmsu wrote:
| You can trivially choose not to associate with a corporation.
| You can't really do so with your government.
| vkou wrote:
| Trivially is doing a lot of lifting in that.
|
| By that same logic, you can 'trivially' influence a
| democratic government; you have no such control over a
| corporation.
| opo wrote:
| >...By that same logic, you can 'trivially' influence a
| democratic government, you have no such control over a
| corporation.
|
| That is a misrepresentation of the message you are
| replying to:
|
| >>You can trivially choose not to associate with a
| corporation. You can't really do so with your government.
|
| You won't get into legal trouble if you don't have a
| Facebook account, or a Twitter account, or use a search
| engine other than Google, etc. Try to ignore the rules set up
| by your government and you will very quickly learn what
| having a monopoly of physical force within a given
| territory means. This is a huge difference between the
| two.
|
| As far as influencing a government or a corporation, I
| suspect (for example) that a letter to the CEO of even a
| large corporation will generally have more impact than a
| letter to the POTUS. (For example, customer emails
| forwarded from Bezos: https://www.quora.com/Whats-it-
| like-to-receive-a-question-ma...). This obviously will
| vary from company to company and maybe the President does
| something similar but my guess is maybe not.
| JumpCrisscross wrote:
| > _Governments will define the rules by which communication
| services (and social networks) should operate_
|
| As opposed to when they didn't?
| krapp wrote:
| The fix was in as soon as both parties came up with a rationale
| to support it and people openly started speaking about
| "algorithms" in the same spooky scary tones usually reserved
| for implied communist threats.
| amscanne wrote:
| Not at all. It's merely a question of whether social networks
| are shielded from liability for their recommendations,
| recognizing that what they choose to show you is a form of free
| expression that may have consequences -- not an attempt to
| control that expression.
| srackey wrote:
| Of course Comrade, there must be consequences for these firms
| pushing Counter-Revolutionary content. They can have free
| expression, but they must realize these algorithms are
| causing great harm to the Proletariat by platforming such
| content.
| jlarocco wrote:
| I feel like that's a poor interpretation of what happened.
| Corporations and businesses don't inherently have rights - they
| only have them because we've granted them certain rights, and
| we _already_ put limits on them. We don't allow cigarette,
| alcohol, and marijuana advertising to children, for example.
| And now they'll have to face the consequences of sending stupid
| stuff like the "black out challenge" to children.
|
| It's one thing to say, "Some idiot posted this on our
| platform." It's another thing altogether to promote and endorse
| the post and send it out to everybody.
|
| Businesses should be held responsible for their actions.
| aidenn0 wrote:
| IANAL, but it seems to me that Facebook from 20ish years ago
| would likely be fine under this ruling; it just showed you
| stuff that people you have marked as friends post. However, if
| Facebook wants to specifically pick things to surface, that's
| where potential liability is involved.
|
| The alleged activity in this lawsuit was that TikTok either knew
| or should have known that it was targeting content to minors that
| contained challenges that were likely to result in harm if
| repeated. That goes well beyond simple moderation, and is even
| something that various social media companies have argued in
| court is speech made by the companies.
| blackeyeblitzar wrote:
| I think it is broader than that. It's government control over
| the Internet. Sure we're talking about forced moderation (that
| is, censorship) and liability issues _right now_. But it
| ultimately normalizes a type of intervention and method of
| control that can extend much further. Just like we've seen the
| Patriot Act normalize many violations of civil liberties, this
| will go much further. I hope not, but I can't help but be
| cynical when I see the degree to which censorship by tech
| oligarchs has been accepted by society over the last 8 years.
| pelorat wrote:
| All large platforms already enact EU law over US law.
| Moderation is required of all online services which actively
| target EU users in order to shield themselves from liability
| for user generated content. The directive in question is
| 2000/31/EC and is 24 years old already. It's the precursor of
| the EU DSA and just like it, 2000/31/EC has extraterritorial
| reach.
| octopoc wrote:
| > In other words, the fundamental issue here is not really
| whether big tech platforms should be regulated as speakers, as
| that's a misconception of what they do. They don't speak, they
| are middlemen. And hopefully, we will follow the logic of Matey's
| opinion, and start to see the policy problem as what to do about
| that.
|
| This is a pretty good take, and it relies on pre-Internet legal
| concepts like distributor and producer. There's this idea that
| our legal / governmental structures are not designed to handle
| the Internet age and therefore need to be revamped, but this is a
| counterexample that is both relevant and significant.
| postalrat wrote:
| They are more than middlemen when they are very carefully
| choosing what content each person sees or doesn't see.
| falcolas wrote:
| > the internet grew tremendously, encompassing the kinds of
| activities that did not exist in 1996
|
| I guess that's one way to say that you never experienced the
| early internet. In three words: rotten dot com. Makes all the
| N-chans look like teenagers smoking on the corner, and Facebook
| et al. look like toddlers in padded cribs.
|
| This will frankly hurt any and all attempts to host any content
| online, and if anyone can survive it, it will be the biggest
| corporations alone. Section 230 also protected ISPs and hosting
| companies (Linode, Hetzner, etc.) after all.
|
| Their targeting may not be intentional, but will that matter? Are
| they willing to be jailed in a foreign country because of their
| perceived inaction?
| amanaplanacanal wrote:
| Jail? This was a civil suit, no criminal penalties apply, just
| monetary.
| falcolas wrote:
| Thanks to "Contempt of Court" anybody can go to jail, even if
| they're not found liable for the presented case.
|
| But more on point, we're discussing modification of how laws
| are interpreted. If someone can be held civilly liable, why
| _can't_ they be held criminally liable if the "recommended"
| content breaks criminal laws (CSAM, for example)? There's
| nothing that prevents this interpretation from being
| considered in a criminal case.
| hn_acker wrote:
| Section 230 already doesn't apply to content that violates
| federal criminal law, so CSAM is already exempted.
| Certain third-party liability cases will still be protected
| by the First Amendment (no third-party liability without
| knowledge of CSAM, for example) but won't be dismissed
| early by Section 230.
| stackskipton wrote:
| This was purely about "Does using an algorithm make you a
| publisher?"; this judge ruled yes and therefore, no Section
| 230.
|
| The Judge made no ruling on Section 230 protection for anyone
| who truly just hosts the content, so ISPs/hosting companies
| should be fine.
| hello_computer wrote:
| This is a typical anglosphere move: Write another holy checklist
| (I mean, "Great Charter"), indoctrinate the plebes into thinking
| that they were made free because of it (they weren't), then as
| soon as one of the bulleted items leaves the regime's hiney
| exposed, have the "judges" conjure a new interpretation out of
| thin air for as long as they think the threat persists.
|
| Whether it was Eugene Debs being thrown in the pokey, or every
| Japanese civilian on the west coast, or some harmless muslim
| suburbanite getting waterboarded, nothing ever changes. Wake me
| up when they actually do something to Facebook.
| WCSTombs wrote:
| From the article:
|
| > Because TikTok's "algorithm curates and recommends a tailored
| compilation of videos for a user's FYP based on a variety of
| factors, including the user's age and other demographics, online
| interactions, and other metadata," it becomes TikTok's own
| speech. And now TikTok has to answer for it in court. Basically,
| the court ruled that when a company is choosing what to show kids
| and elderly parents, and seeks to keep them addicted to sell more
| ads, they can't pretend it's everyone else's fault when the
| inevitable horrible thing happens.
|
| If that reading is correct, then Section 230 isn't nullified, but
| there's something that isn't shielded from liability any more,
| which IIUC is basically the "Recommended For You"-type content
| feed curation algorithms. But I haven't read the ruling itself,
| so it could potentially be more expansive than that.
|
| But assuming Matt Stoller's analysis there is accurate: frankly,
| I avoid those recommendation systems like the plague anyway, so
| if the platforms have to roll them back or at least be a little
| more thoughtful about how they're implemented, it's not
| necessarily a bad thing. There's no new liability for what users
| post (which is good overall IMO), but there can be liability _for
| the platform implementation itself_ in some cases. But I think
| we'll have to see how this plays out.
| falcolas wrote:
| What is "recommended for you" if not a search result with no
| terms? From a practical point of view, unless you go the route
| of OnlyFans and disallow discovery on your own website, how do
| you allow any discovery if any form of algorithmic
| recommendation is outlawed?
| lcnPylGDnU4H9OF wrote:
| If it were the results of a search with no terms then it
| wouldn't be "for" a given subject. The "you" in "recommended
| for you" is the search term.
| falcolas wrote:
| That's just branding. It's called Home in Facebook and
| Instagram, and it's the exact same thing. It's a form of
| discovery that's tailored to the user, just like normal
| searches are (even on Google and Bing etc).
| lcnPylGDnU4H9OF wrote:
| Indeed, regardless of the branding for the feature, the
| service is making a decision about what to show a given
| user based on what the service knows about them. That is
| not a search result with no terms; the user is the term.
| falcolas wrote:
| Now for a followup question: How does _any_ website
| surface _any_ content when they 're liable for the
| content?
|
| When you can be held liable for surfacing the wrong (for
| unclear definitions of wrong) content to the wrong
| person, even Google could be held liable. Imagine if this
| child found a blackout video on the fifth page of their
| search results on "blackout". After all, YouTube hosted
| such videos as well.
| lcnPylGDnU4H9OF wrote:
| TikTok is not being held liable for hosting and serving
| the content. They're being held liable for recommending
| the content to a user with no other search context
| provided by said user. In this case, it is _because the
| visitor of the site was a young girl_ that they chose to
| surface this video and there was no other context. The
| girl did not search "blackout".
| falcolas wrote:
| > because the visitor of the site was a young girl that
| they chose to surface this video
|
| That's one hell of a specific accusation - that they
| looked at her age alone and decided solely on that basis
| to show her that specific video?
|
| First off, at 10, she should have had an age-gated
| account that shows curated content specifically for
| children. There's nothing to indicate that her parents
| set up such an account for her.
|
| Also, it's well understood that Tiktok takes a user's
| previously watched videos into account when recommending
| videos. It can identify traits about the people based off
| that (and by personal experience, I can assert that it
| will lock down your account if it thinks you're a child),
| but they have no hard data on someone's age. Something
| about her video history triggered displaying this video
| (alongside thousands of other videos).
|
| Finally, no, the girl did not do a search (that we're
| aware of). But would the judge's opinion have changed? I
| don't believe so, based off of their logic. TikTok used
| an algorithm to recommend a video. TikTok uses that same
| algorithm with a filter to show search results.
|
| In any case, a tragedy happened. But putting the blame on
| TikTok seems more like an attack on TikTok and not an
| attempt to rein in the industry at large.
|
| Plus, at some point, we have to ask the question: where
| were the parents in all of this?
|
| Anyways.
| lcnPylGDnU4H9OF wrote:
| > That's one hell of a specific accusation - that they
| looked at her age alone and determined solely based on
| that to show her that specific video?
|
| I suppose I did not phrase that very carefully. What I
| meant is that they chose to surface the video because a
| specific young girl visited the site -- one who had a
| specific history of watched videos.
|
| > In any case, a tragedy happened. But putting the blame
| on TikTok seems more like an attack on TikTok and not an
| attempt to rein in the industry at large.
|
| It's always going to start with one case. This could be
| protectionism but it very well could instead be the start
| of reining in the industry.
| kaibee wrote:
| > Now for a followup question: How does any website
| surface any content when they're liable for the content?
|
| Chronological order, location based, posts-by-followed-
| accounts, etc. "Most liked", etc.
|
| Essentially by only using 'simple' algorithms.
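|
| (For concreteness, a minimal sketch in Python of what such a
| "simple", non-personalized feed might look like -- hypothetical
| field names, not any platform's actual code; the ordering
| depends only on the posts, never on who is viewing:)
|
|     def simple_feed(posts, mode="chronological"):
|         # Same output for every visitor: ranking uses only the
|         # posts themselves (timestamp, like count), never a
|         # per-user profile or watch history.
|         if mode == "chronological":
|             key = lambda p: p["created_at"]
|         elif mode == "most_liked":
|             key = lambda p: p["likes"]
|         else:
|             raise ValueError(f"unknown mode: {mode}")
|         return sorted(posts, key=key, reverse=True)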
| TylerE wrote:
| Is not the set of such things offered still editorial
| judgement?
|
| (And as an addendum, even if you think the answer to that
| is no, do you trust a judge who can probably barely work
| an iphone to come to the same conclusion, with your
| company in the crosshairs?)
| buildbot wrote:
| I'd say no, because those average over the entire group.
| If you ranked based on say, most liked in your friends
| circle, or most liked by people with a high cosine
| similarity to your profile, then it starts to slide back
| into editorial judgment.
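|
| (A rough illustration of that distinction, as hypothetical
| Python -- made-up field names, not any real system. The first
| ranking is identical for everyone; the second reorders posts
| around the service's model of the individual viewer:)
|
|     import math
|
|     def cosine(a, b):
|         # cosine similarity between two vectors
|         dot = sum(x * y for x, y in zip(a, b))
|         na = math.sqrt(sum(x * x for x in a))
|         nb = math.sqrt(sum(y * y for y in b))
|         return dot / (na * nb) if na and nb else 0.0
|
|     def group_ranking(posts):
|         # one ordering for the whole group, based only on likes
|         return sorted(posts, key=lambda p: p["likes"],
|                       reverse=True)
|
|     def personalized_ranking(posts, viewer_vector):
|         # per-viewer ordering: the service's inferred profile of
|         # this user decides what gets surfaced
|         return sorted(posts,
|                       key=lambda p: cosine(p["topic_vector"],
|                                            viewer_vector),
|                       reverse=True)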
| skydhash wrote:
| Not really, as the variables come from the content
| itself, not from the company's intention.
|
| And for the addendum, that's why we have hearings and
| experts. No judge can be expected to be knowledgeable
| about everything in life.
| itsdrewmiller wrote:
| This is only a circuit court ruling - there is a good chance it
| will be overturned by the Supreme Court. The cited Supreme
| Court case (Moody v. NetChoice) does not require
| personalization:
|
| > presenting a curated and "edited compilation of [third party]
| speech" is itself protected speech.
|
| This circuit court case mentions the personalization but
| doesn't limit its judgment based on its presence - almost any
| type of curation other than the kind of moderation explicitly
| exempted by the CDA could create liability, though in practice
| I don't think "sorting by upvotes with some decay" would end up
| qualifying.
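|
| (For reference, "upvotes with some decay" usually means
| something like the following -- a hypothetical Python sketch in
| the spirit of the commonly cited Hacker News formula, with
| made-up constants, not any site's real parameters; note it
| still produces one ordering for all users:)
|
|     from datetime import datetime, timezone
|
|     def decayed_score(upvotes, created_at, gravity=1.8):
|         # newer posts need fewer votes to rank highly;
|         # older ones sink (constants are illustrative)
|         age = datetime.now(timezone.utc) - created_at
|         age_hours = age.total_seconds() / 3600
|         return (upvotes - 1) / (age_hours + 2) ** gravity
|
|     def rank(posts):
|         # posts: (upvotes, timezone-aware created_at) pairs;
|         # the same ordering is shown to every user
|         return sorted(posts, key=lambda p: decayed_score(*p),
|                       reverse=True)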
| mikewarot wrote:
| >There is no way to run a targeted ad social media company with
| 40% margins if you have to make sure children aren't harmed by
| your product.
|
| So, we actually have to watch out for kids, and maybe only have a
| 25% profit margin? Oh, so terrible! /s
|
| I'm 100% against the political use of censorship, but 100% for
| the reasonable use of government to promote the general welfare,
| secure the blessings of liberty for ourselves, and our posterity.
| FireBeyond wrote:
| Right? I missed the part where a business is "entitled" to
| that. There was a really good quote I've never been able to
| find again, along the lines of "just because a business has
| always done things a certain way, doesn't mean they are exempt
| from changes".
| turol wrote:
| "There has grown up in the minds of certain groups in this
| country the notion that because a man or corporation has made
| a profit out of the public for a number of years, the
| government and the courts are charged with the duty of
| guaranteeing such profit in the future, even in the face of
| changing circumstances and contrary to the public interest.
| This strange doctrine is not supported by statute or common
| law. Neither individuals nor corporations have any right to
| come into court and ask that the clock of history be stopped,
| or turned back."
|
| Robert Heinlein in "Life-Line"
| FireBeyond wrote:
| Wow. Thank you. I saw this years ago, and despite my best
| efforts, I could never find it again! Thank you.
| mjevans wrote:
| """The Court held that a platform's algorithm that reflects
| "editorial judgments" about "compiling the third-party speech it
| wants in the way it wants" is the platform's own "expressive
| product" and is therefore protected by the First Amendment.
|
| Given the Supreme Court's observations that platforms engage in
| protected first-party speech under the First Amendment when they
| curate compilations of others' content via their expressive
| algorithms, it follows that doing so amounts to first-party
| speech under Section 230, too."""
|
| I've agreed for years. It's a choice in selection rather than a
| 'natural consequence' such as a chronological, threaded, or even
| '__end user__ upvoted/moderated' (outside the site's control)
| weighted sort.
| bentley wrote:
| If I as a forum administrator delete posts by obvious spambots,
| am I making an editorial judgment that makes me legally liable
| for every single post I don't delete?
|
| If my forum has a narrow scope (say, 4x4 offroading), and I
| delete a post that's obviously by a human but is seriously off-
| topic (say, U.S. politics), does _that_ make me legally liable
| for every single post I don't delete?
|
| What are the limits here, for those of us who unlike silicon
| valley corporations, don't have massive legal teams?
| WCSTombs wrote:
| > If my forum has a narrow scope (say, 4x4 offroading), and I
| delete a post that's obviously by a human but is seriously
| off-topic (say, U.S. politics), does that make me legally
| liable for every single post I don't delete?
|
| According to the article, probably not:
|
| > A platform is not liable for "any action voluntarily taken
| in good faith to restrict access to or availability of
| material that the provider or user considers to be obscene,
| lewd, lascivious, filthy, excessively violent, harassing, or
| otherwise objectionable."
|
| "Otherwise objectionable" looks like a catch-all phrase to
| allow content moderation generally, but I could be misreading
| it here.
| doe_eyes wrote:
| I think you're looking for the kind of precision that just
| doesn't exist in the legal system. It will almost certainly
| hinge on intent and the extent to which your actions actually
| stifle legitimate speech.
|
| I imagine that getting rid of spam wouldn't meet the bar, and
| neither would enforcing that conversations are on-topic. But
| if you're removing and demoting posts because they express
| views you disagree with, you're implicitly endorsing the
| opinions expressed in the posts you allow to stay up, and
| therefore are exercising editorial control.
|
| I think the lesson here is: either keep your communities
| small so that you can comfortably reason about the content
| that's up there, or don't play the thought police. The only
| weird aspect of this is that you have courts saying one
| thing, but then the government breathing down your neck and
| demanding that you go after misinformation.
| Sakos wrote:
| A lot of people seem to be missing the part where if it ends
| up in court, you have to argue that what you removed was
| objectionable on the same level as the other named types of
| content and there will be a judge you'll need to convince
| that you didn't re-interpret the law to your benefit. This
| isn't like arguing on HN or social media, you being
| "clever" doesn't necessarily protect you from liability or
| consequences.
| _DeadFred_ wrote:
| Wouldn't it be more that you are responsible for pinned posts at
| the top of thread lists? If you pin a thread promoting an
| unsafe onroad product, say telling people they should be
| replacing their steering with heim joints that aren't street
| legal, you could be liable. Whereas if you just left the
| thread among all the others you aren't. (Especially if the
| heim joints are sold by a forum sponsor or the forum has a
| special 'discount' code for the vendor).
| mathgradthrow wrote:
| You are simply not shielded from liability; I cannot imagine
| a scenario in which this moderation policy would result in
| significant liability. I'm sure somebody would be willing to
| sell you some insurance to that effect. I certainly would.
| Phrodo_00 wrote:
| I'm guessing you're not a lawyer, and I'm not either, so
| there might be some details that are not obvious about it,
| but the regulation draws the line at allowing you to do[1]:
|
| > any action voluntarily taken in good faith to restrict
| access to or availability of material that the provider or
| user considers to be obscene, lewd, lascivious, filthy,
| excessively violent, harassing, or otherwise objectionable,
| whether or not such material is constitutionally protected
|
| I think that allows your use case without liability.
|
| [1] https://www.law.cornell.edu/uscode/text/47/230
| kelnos wrote:
| Wow, "or otherwise objectionable" would seemingly give
| providers a loophole wide enough to drive a truck through.
| throwup238 wrote:
| It's not a loophole. That's the intended meaning,
| otherwise it would be a violation of freedom of
| association.
|
| That doesn't mean anyone is free to _promote_ content
| without liability, just that moderating by deleting
| content doesn't make it an "expressive product."
| zerocrates wrote:
| That subsection of 230 is about protecting you from being
| sued _for_ moderating, like being sued by the people who
| posted the content you took down.
|
| The "my moderation makes me liable for everything I don't
| moderate" problem, that's what's addressed by the preceding
| section, the core of the law and the part that's most often
| at issue, which says that you can't be treated as
| publisher/speaker of anyone else's content.
| lesuorac wrote:
| > If my forum has a narrow scope (say, 4x4 offroading), and I
| delete a post that's obviously by a human but is seriously
| off-topic (say, U.S. politics), does that make me legally
| liable for every single post I don't delete?
|
| No.
|
| From the court of appeals [1], "We reach this conclusion
| specifically because TikTok's promotion of a Blackout
| Challenge video on Nylah's FYP was not contingent upon any
| specific user input. Had Nylah viewed a Blackout Challenge
| video through TikTok's search function, rather than through
| her FYP, then TikTok may be viewed more like a repository of
| third-party content than an affirmative promoter of such
| content."
|
| So, given (an assumption) that users on your forum choose
| some kind of "4x4 Topic" they're intending to navigate a
| repository of third-party content. If you curate that
| repository it's still a collection of third-party content and
| not your own speech.
|
| Now, if you were to have a landing page that showed "featured
| content" then that seems like you could get into trouble.
| Although one wonders what the difference is between
| navigating to a "4x4 Topic" or "Featured Content" since it's
| both a user-action.
|
| [1]: https://fingfx.thomsonreuters.com/gfx/legaldocs/mopaqabz
| ypa/...
| shagie wrote:
| > Now, if you were to have a landing page that showed
| "featured content" then that seems like you could get into
| trouble. Although one wonders what the difference is
| between navigating to a "4x4 Topic" or "Featured Content"
| since it's both a user-action.
|
| Consider HackerNews's functionality of flamewar
| suppression. https://news.ycombinator.com/item?id=39231821
|
| And this is the difference between
| https://news.ycombinator.com/news and
| https://news.ycombinator.com/newest (with showdead
| enabled).
| ApolloFortyNine wrote:
| >then TikTok may be viewed more like a repository of third-
| party content than an affirmative promoter of such
| content."
|
| "may"
|
| Basically until the next court case when someone learns
| that search is an algorithm too, and asks why the first
| result wasn't a warning.
|
| The real truth is, if this is allowed to stand, it will be
| selectively enforced at best. If it's low enough volume,
| it'll just become a price of doing business: sometimes a
| judge has it out for you and you have to pay a fine, and you
| just have to work it into the budget. Fine for big
| companies, game ender for small ones.
| supriyo-biswas wrote:
| What the other replies are not quite getting is that there
| can be other kinds of moderator actions: ones taken against
| posts that aren't off-topic or offensive but that do not meet
| the bar for the forum in question -- are they considered out
| of scope with this ruling?
|
| As an example, suppose on a HN thread about the Coq theorem
| prover, someone starts a discussion about the name, and it's
| highly upvoted but the moderators downrank that post manually
| to stimulate more productive discussions. Is this considered
| curation, and can this be no longer done given this ruling?
|
| It seems to me that this is indeed the case, but in case I'm
| mistaken I'd love to know.
| jay_kyburz wrote:
| Let me ask you a question in return.
|
| If you discovered a thread on the forum where a bunch of
| users were excitedly talking about doing something incredibly
| dangerous in their 4x4s, like getting high and trying some
| dangerous maneuver, would you let sit on your forum?
|
| How would you feel if somebody read about it on your forum
| and died trying to do it?
|
| Update: The point I'm trying to make is that _I_ wouldn't let
| this sit on my forum, so I don't think it's unethical to ask
| others to remove it from their forums as well.
| nness wrote:
| > Because TikTok's "algorithm curates and recommends a tailored
| compilation of videos for a user's FYP based on a variety of
| factors, including the user's age and other demographics, online
| interactions, and other metadata," it becomes TikTok's own
| speech.
|
| This is fascinating and raises some interesting questions about
| where the liability starts and stops i.e. is "trending/top right
| now/posts from following" the same as a tailored algorithm per
| user? Does Amazon become culpable for products on their
| marketplace? etc.
|
| For good or for bad, this century's Silicon Valley was built on
| Section 230 and I don't foresee it disappearing any time soon. If
| anything, I suspect it will be supported or refined by future
| legislation instead of removed. No one wants to be the person who
| legislates away all online services...
| renewiltord wrote:
| If I spam-filter comments, am I subject to this? That is, are the
| remaining comments effectively treated as if I were saying them?
| amanaplanacanal wrote:
| No. Section 230 protects you if you remove objectionable
| content. This is about deciding which content to show to each
| individual user. If all your users get the same content, you
| should be fine.
| renewiltord wrote:
| I see. Thanks!
|
| If they can customize the feed, does that make it their
| speech or my speech? Like if I give them a "subscribe to x
| communities" thing with "hide already visited". It'll be a
| different feed, and algorithmic (I suppose) but user
| controlled.
|
| I imagine if you explicitly ask the user "what topics" and
| then use a program to determine which topic a post is about,
| then it's a problem.
|
| I've got a WIP Mastodon client that uses Llama 3 to follow
| topics. I suppose that's not releasable.
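|
| (To make the question concrete, here's a hypothetical Python
| sketch of the feed I'm describing -- invented field names; the
| service only filters and sorts by the user's own explicit
| choices, with no inferred profile:)
|
|     def user_controlled_feed(posts, subscribed, visited_ids):
|         # keep only posts from communities the user explicitly
|         # subscribed to, drop ones already seen, newest first;
|         # no inferred model of the user is involved
|         chosen = [p for p in posts
|                   if p["community"] in subscribed
|                   and p["id"] not in visited_ids]
|         return sorted(chosen, key=lambda p: p["created_at"],
|                       reverse=True)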
| nsagent wrote:
| The current comments seem to say this rings the death knell of
| social media and that this just leads to government censorship.
| I'm not so sure.
|
| I think the ultimate problem is that social media is not unbiased
| -- it curates what people are shown. In that role they are no
| longer an impartial party merely hosting content. It seems this
| ruling is saying that the curation being algorithmic does not
| absolve the companies from liability.
|
| In a very general sense, this ruling could be seen as a form of
| net neutrality. Currently social media platforms favor certain
| content, while down weighting others. Sure, it might be at a
| different level than peer agreements between ISPs and websites,
| but it amounts to a similar phenomenon when most people interact
| on social media through the feed.
|
| Honestly, I think I'd love to see what changes this ruling brings
| about. HN is quite literally the only social media site (loosely
| interpreted) I even have an account on anymore, mainly because of
| how truly awful all the sites have become. Maybe this will make
| social media more palatable again? Maybe not, but I'm inclined to
| see what shakes out.
| WCSTombs wrote:
| Yeah, pretty much. What's not clear to me though is how _non-
| targeted_ content curation, like simply "trending videos" or
| "related videos" on YouTube, is impacted. IMO that's not nearly
| as problematic and can be useful.
| whatshisface wrote:
| The diverse biases of newspapers or social media sites are
| preferable to the monolithic bias a legal solution will
| impose.
| nick238 wrote:
| So the solution is "more speech?" I don't know how that will
| unhook minors from the feedback loop of recommendation
| algorithms and their plastic brains. It's like saying 'we
| don't need to put laws in place to combat heroin use, those
| people could go enjoy a good book instead!'.
| nostrademons wrote:
| Yes, the solution is more speech. Teach your kids critical
| thinking or they will be fodder for somebody else who has
| it. That happens regardless of who's in charge, government
| or private companies. If you can't think for yourself and
| synthesize lots of disparate information, somebody else
| will do the thinking for you.
| kelnos wrote:
| Solutions that require everyone to do a thing, and do it
| well, are doomed to fail.
|
| Yes, it would be great if parents would, universally,
| parent better, but getting all of them (or a large enough
| portion of them for it to make a difference) to do so is
| essentially impossible.
| nostrademons wrote:
| Government controls aren't a solution either though. The
| people with critical thinking skills, who can effectively
| tell others what to think, simply capture the government.
| Meet the new boss, same as the old boss.
| jrockway wrote:
| I agree with this. Kids are already subject to an agenda;
| for example, never once in my K-12 education did I learn
| anything about sex. This was because it was politically
| controversial at the time (and maybe it still is now), so
| my school district just avoided the issue entirely.
|
| I remember my mom being so mad about the curriculum in
| general that she ran for the school board and won. (I
| believe it was more of a math and science type thing. She
| was upset with how many coloring assignments I had.
| Frankly, I completely agreed with her then and I do now.)
| nostrademons wrote:
| I was lucky enough to go to a charter school where my
| teachers encouraged me to read books like "People's
| History of the U.S" and "Lies My Teacher Told Me". They
| have an agenda too, but understanding that there's a
| whole world of disagreement out there and that I should
| seek out multiple information sources and triangulate
| between them has been a huge superpower since. It's
| pretty shocking to understand the history of public
| education and realize that it wasn't created to benefit
| the student, but to benefit the future employers of those
| students.
| wvenable wrote:
| > Yes, the solution is more speech.
|
| I think we've reached the point now that there is more
| speech than any person can consume by a factor of a
| million. It now comes down to picking what speech you
| want to hear. This is exactly what content algorithms are
| doing -> out of the millions of hours of speech produced
| in a day, it's giving you your 24 hours of it.
|
| Saying "teach your kids critical thinking" is _a_
| solution but it's not _the_ solution. At some point, you
| have to _discover_ content out of those millions of hours
| a day. It's impossible to do yourself -- it's always
| going to be curated.
|
| EDIT: To whoever downvoted this comment, you made my
| point. You should have replied instead.
| forgetfreeman wrote:
| K so several of the most well-funded tech companies on
| the planet sink literally billions of dollars into psyops
| research to reinforce addictive behavior and average
| parents are expected to successfully compete against it
| with...a lecture.
| chongli wrote:
| You're mistaken as to what this ruling is about.
| Ultimately, when it comes right down to it, the Third
| Circuit is saying this (directed at social media
| companies):
|
| "The speech is either wholly your speech or wholly
| someone else's. You can't have it both ways."
|
| Either they get to act as a common carrier (telephone
| companies are not liable for what you say on a phone call
| because it is wholly your own speech and they are merely
| carrying it) or they act as a publisher (liable for
| everything said on their platforms because they are
| exercising editorial control via algorithm). If this
| ruling is upheld by the Supreme Court, then they will
| have to choose:
|
| * Either claim the safe harbour protections afforded to
| common carriers and lose the ability to curate
| algorithmically
|
| or
|
| * Claim the free speech protections of the First
| Amendment but be liable for all content as it is their
| own speech.
| whatshisface wrote:
| Algorithmic libel detectors don't exist. The second
| option isn't possible. The result will be the separation
| of search and recommendation engines from social media
| platforms. Since there's effectively one search company
| in each national protectionist bloc, the result will be
| the creation of several new monopolies that hold the
| power to decide what news is front-page, and what is
| buried or practically unavailable. In the English-
| speaking world that right would go to Alphabet.
| chongli wrote:
| The second option isn't really meant for social media
| anyway. It's meant for traditional publishers such as
| newspapers.
|
| If this goes through I don't think it will be such a big
| boost for Google search as you suggest. For one thing, it
| has no effect on OpenAI and other LLM providers. That's a
| real problem for Google, as I see a long term trend away
| from traditional search and towards LLMs for getting
| questions answered, especially among young people. Also
| note that YouTube is social media and features a curation
| algorithm to deliver personalized content feeds.
|
| As for social media, I think we're better off without it!
| There's countless stories in the news about all the
| damage it's causing to society. I don't think we'll be
| able to roll all that back but I hope we'll be able to
| make things better.
| whatshisface wrote:
| If the ruling was upheld, Google wouldn't gain any new
| liability for putting a TikTok-like frontend on video
| search results; the only reason they're not doing it now
| is that all existing platforms (including YouTube) funnel
| all the recommendation clicks back into themselves. If
| YouTube had to stop offering recommendations, Google
| could take over their user experience and spin them off
| into a hosting company that derived its revenue from
| AdSense and its traffic from "Google Shorts."
|
| This ruling is not a ban on algorithms, it's a ban on the
| vertical integration between search or recommendation and
| hosting that today makes it possible for search engines
| other than Google to see traffic.
| Terr_ wrote:
| > Algorithmic libel detectors don't exist
|
| Automatic libel generators, on the other hand, are much
| closer at hand. :p
|
| https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4546063
| bsder wrote:
| We have seen that adults can't seem to unhook from these
| dopamine delivery systems and you're expecting that
| children can do so?
|
| Sorry. That's simply disingenuous.
|
| Yes, children and especially teenagers do lots of things
| even though their parents try to prevent them from doing
| so. But even though children and teenagers still manage to
| get tobacco and alcohol, we don't throw up our hands and
| sell those to them anyway.
| aeternum wrote:
| Open-source the algorithm and have users choose. A
| marketplace is the best solution to most problems.
|
| It is pretty clear that China already forces a very
| different TikTok ranking algo for kids within the country
| vs outside the country. Forcing a single algo is pretty
| un-American though and can easily be abused, let's instead
| open it up.
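|
| To sketch what "open it up and let users choose" could look
| like mechanically (purely hypothetical code, not any real
| platform's API): expose the ranking step as a pluggable
| function and let the user's own setting pick which one
| orders their feed.
|
|   # Hypothetical sketch: user-selectable feed ranking.
|   from dataclasses import dataclass
|   from datetime import datetime
|
|   @dataclass
|   class Post:
|       author: str
|       created_at: datetime
|       likes: int
|
|   def chronological(posts, user):
|       # Newest first; ignores everything about the user.
|       return sorted(posts, key=lambda p: p.created_at, reverse=True)
|
|   def most_liked(posts, user):
|       # Pure popularity; still not personalized.
|       return sorted(posts, key=lambda p: p.likes, reverse=True)
|
|   def followed_only(posts, user):
|       # Only accounts the user explicitly follows, newest first.
|       return chronological([p for p in posts if p.author in user["follows"]], user)
|
|   RANKERS = {"chronological": chronological,
|              "most_liked": most_liked,
|              "followed_only": followed_only}
|
|   def build_feed(posts, user):
|       # The user's chosen setting, not the platform, decides the algorithm.
|       return RANKERS[user.get("ranker", "chronological")](posts, user)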
| kelnos wrote:
| 80% of users will leave things at the default setting, or
| "choose" whatever the first thing in the list is. They
| won't understand the options; they'll just want to see
| their news feed.
| aeternum wrote:
| I'm not so sure, the feed is quite important and users
| understand that. Look at how many people switched between
| X and Threads given their political view. People switched
| off Reddit or cancelled their FB account at times in the
| past also.
| kfajdsl wrote:
| I'm pretty sure going from X to Threads had very little
| to do with the feed algorithm for most people. It had
| everything to do with one platform being run by Musk and
| the other one not.
| mindslight wrote:
| "Open-source the algorithm" would be at best openwashing.
| The way to create the type of choice you're thinking of is
| to force the unbundling of client software from hosting
| services.
| mathgradthrow wrote:
| Seems like the bias will be against manipulative algorithms.
| How does TikTok escape liability here? They give users
| control of what is promoted to them.
| danaris wrote:
| Unfortunately, the biases of newspapers and social media
| sites are only diverse if they are not all under the strong
| influence of the wealthy.
|
| Even if they may have different skews on some issues, under a
| system where _all_ such entities are operated entirely for-
| profit, they will tend to converge on other issues, largely
| related to maintaining the rights of capital over labor and
| over government.
| kstrauser wrote:
| "Social media" is a broad brush though. I operate a Mastodon
| instance with a few thousand users. Our content timeline
| algorithm is "newest on top". Our moderation is heavily
| tailored to the users on my instance, and if a user says
| something grossly out of line with our general vibe, we'll
| remove them. That user is free to create an account on any
| other server who'll have them. We're not limiting their access
| to Mastodon. We're saying that we don't want their stuff on our
| own server.
|
| What are the legal ramifications for the many thousands of
| similar operators which are much closer in feel to a message
| board than to Facebook or Twitter? Does a server run by
| Republicans have to accept Communist Party USA members and
| their posts? Does a vegan instance have to allow beef farmers?
| Does a PlayStation fan server have to host pro-PC content?
| dudus wrote:
| You are directly responsible for everything they say and
| legally liable for any damages it may cause. Or not. IANAL.
| tboyd47 wrote:
| It all comes down to the assertion made by the author:
|
| > There is no way to run a targeted ad social media company
| with 40% margins if you have to make sure children aren't
| harmed by your product.
| philippejara wrote:
| I find it hard to see a way to run a targeted ad social media
| company at all if you have to make sure children aren't
| harmed by your product.
| stevenicr wrote:
| Don't let children use it? In TN that will be illegal Jan 1
| - unless social media creates a method for parents to
| provide ID and opt out of them being blocked, I think?
|
| Wouldn't that put the responsibility back on the parents?
|
| The state told you XYZ was bad for your kids and it's
| illegal for them to use, but then you bypassed that
| restriction and put the sugar back into their hands with an
| access-blocker-blocker..
|
| Random wondering
| ghaff wrote:
| Age limitations for things are pretty widespread. Of
| course, they can be bypassed to various degrees but,
| depending upon how draconian you want to be, you can
| presumably be seen as doing the best you reasonably can
| in a virtual world.
| aftbit wrote:
| What about 0% margins? Is there actually enough money in
| social media to pay for moderation even with no profit?
| Ajedi32 wrote:
| At the scale social media companies operate at, absolutely
| perfect moderation with zero false negatives is unavailable
| at any price. Even if they had a highly trained human
| expert manually review every single post (which is
| obviously way too expensive to be viable) some bad stuff
| would still get through due to mistakes or laziness.
| Without at least some form of Section 230, the internet as
| we know it cannot exist.
| hyeonwho4 wrote:
| I'm not sure about video, but we are no longer in an era when
| manual moderation is necessary. Certainly for text,
| moderation for child safety could be as easy as taking the
| written instructions currently given to human moderators and
| having an LLM interpreter (only needs to output a few bits of
| information) do the same job.
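|
| A minimal sketch of that approach (the prompt, model name, and
| client usage below are illustrative assumptions, not any
| platform's actual pipeline):
|
|   # Hypothetical sketch: reuse written moderator guidelines as an LLM prompt.
|   from openai import OpenAI
|
|   GUIDELINES = (
|       "You are applying a child-safety moderation policy. Flag content "
|       "that encourages self-harm, dangerous 'challenges', or other "
|       "activity likely to injure minors. Reply with exactly one word: "
|       "ALLOW or FLAG."
|   )
|
|   client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
|
|   def needs_review(post_text: str) -> bool:
|       """Return True if the post should be held for human review."""
|       resp = client.chat.completions.create(
|           model="gpt-4o-mini",
|           temperature=0,
|           messages=[
|               {"role": "system", "content": GUIDELINES},
|               {"role": "user", "content": post_text},
|           ],
|       )
|       return resp.choices[0].message.content.strip().upper().startswith("FLAG")
|
| False negatives and cost at feed scale are the obvious caveats,
| which is why this only works as one layer, not a guarantee.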
| tboyd47 wrote:
| That's great, but can your LLM remove everything harmful?
| If not, you're still liable for that one piece of content
| that it missed under this interpretation.
| ein0p wrote:
| These are some interesting mental gymnastics. Zuckerberg
| literally publicly admitted the other day that he was forced by
| the government to censor things without a legal basis. Musk
| disclosed a whole trove of emails about the same at Twitter.
| And you're still "not so sure"? What would it take for you to
| gain more certainty in such an outcome?
| vundercind wrote:
| Haven't looked into the Zuckerberg thing yet but everything
| I've seen of the "Twitter Files" has done more to convince me
| that nothing inappropriate or bad was happening, than that it
| was. And if those selective-releases were supposed to be the
| worst of it? Doubly so. Where's the bad bit (that doesn't
| immediately stop looking bad if you read the surrounding
| context whoever's saying it's bad left out)?
| ein0p wrote:
| Means you haven't really looked into the Twitter files.
| They were literally holding meetings with the government
| officials and were told what to censor and who to ban.
| That's plainly unconstitutional and heads should roll for
| this.
| kstrauser wrote:
| How did the government force Facebook to comply with
| their demands, as opposed to going along with them
| voluntarily?
| ein0p wrote:
| This is obviously not a real question, so instead of
| answering I propose we conduct a thought experiment. The
| year is 2028, and Zuck had a change of heart and fully
| switched sides. Facebook, Threads, and Instagram now
| block the news of Barron Trump's drug use, of his
| lavishly compensated seat on the board of Russia's
| Gazprom, and ban the dominant electoral candidate off
| social media. In addition they allow the spread of a made
| up dossier (funded by the RNC) about Kamala Harris'
| embarrassing behavior with male escorts in China.
|
| What you should ask yourself is this: irrespective of
| whether compliance is voluntary or not, is political
| censorship on social media OK? And what kind of a logical
| knot one must contort one's mind into to suggest that
| this is the second coming of net neutrality? Personally I
| think the mere fact that the government is able to lean
| on a private company like that is damning AF.
| kstrauser wrote:
| You're grouping lots of unrelated things.
|
| All large sites have terms of service. If you violate
| them, you might be removed, even if you're "_the_
| dominant electoral candidate". Remember, no one is above
| the law, or in this case, the rules that a site wishes to
| enforce.
|
| I'm not a fan of political censorship (unless that means
| enforcing the same ToS that everyone else is held to, in
| which case, go for it). Neither am I for the radical
| notion of legislation telling a private organization that
| they _must_ host content that they don't wish to.
|
| This has zero to do with net neutrality. Nothing. Nada.
|
| Is there evidence that the government _leaned_ on a
| private company instead of meeting with them and asking
| them to do a thing? Did Facebook feel coerced into taking
| actions they wouldn't have willingly done otherwise?
| oceanplexian wrote:
| > How did the government force Facebook to comply
|
| By asking.
|
| The government asking you to do something is like a
| dangerous schoolyard bully asking for your lunch money.
| Except the gov has the ability to kill, imprison, and
| destroy. Doesn't matter if you're an average Joe or a
| Zuckerberg.
| Terr_ wrote:
| So it's categorically impossible for the government to
| make _any_ non-coercive request for anything because it's
| the government?
|
| I don't think that's settled law.
| nox101 wrote:
| I'm probably misunderstanding the implications but, IIUC, as
| it is, HN is moderated by dang (and others?) but still falls
| under 230 meaning HN is not responsible for what other users
| post here.
|
| With this ruling, HN is suddenly responsible for all posts here
| specifically because of the moderation. So they have 2 options.
|
| (1) Stop the moderation so they can be safe under 230. Result,
| HN turns to 4chan.
|
| (2) enforce the moderation to a much higher degree by say,
| requiring non-anon accounts and TOS that make each poster
| responsible for their own content and/or manually approve every
| comment.
|
| I'm not even sure how you'd run a website with user content if
| you wanted to moderate that content and still avoid being
| liable for illegal content.
| jtriangle wrote:
| (1) 4chin is too dumb to use HN, and there's no image posting,
| so I doubt they'd even be interested in raiding us. (2) I've
| never seen anything illegal here; I'm sure it happens, and it
| gets dealt with quickly enough that it's not really ever
| going to be a problem if things continue as they have been.
|
| They may lose 230 protection, sure, but probably not really a
| problem here. For Facebook et al, it's going to be an issue,
| no doubt. I suppose they could drop their algos and bring
| back the chronological feeds, but, my guess is that wouldn't
| be profitable given that ad-tech and content feeds are one and
| the same at this point.
|
| I'd also assume that "curation" is the sticking point here,
| if a platform can claim that they do not curate content, they
| probably keep 230 protection.
| wredue wrote:
| Certain boards most definitely raid various HN threads.
|
| Specifically, every political or science thread that makes
| it, is raided by 4chan. 4chan also regularly pushes
| anti-science and anti-education agenda threads to the top
| here, along with posts from various alt-right figures on
| occasion.
| jtriangle wrote:
| search: site:4chan.org news.ycombinator.com
|
| Seems pretty sparse to me, and from a casual perusal, I
| haven't seen any actual calls to raiding anything here,
| it's more of a reference where articles/posts have
| happened, and people talking about them.
|
| Remember, not everyone who you disagree with comes from
| 4chan, some of them probably work with you, you might
| even be friends with them, and they're perfectly
| serviceable people with lives, hopes, dreams, same as
| yours, they simply think differently than you.
| wredue wrote:
| lol dude. Nobody said that 4chan links are posted to HN,
| just that 4chan definitely raids HN.
|
| 4chan is very well known for brigading. It is also well
| known that using 4chan as well as a number of other
| locations, such as discord, to post links for brigades
| are an extremely common thing that the alt-right does to
| try to raise the "validity" of their statements.
|
| I also did not claim that only these opinions come from
| 4chan. Nice strawman bro.
|
| Also, my friends do not believe these things. I do not
| make a habit of being friends with people that believe in
| genociding others purely because of sexual orientation or
| identity.
| jtriangle wrote:
| Go ahead and type that search query into google and see
| what happens.
|
| Also the alt-right is a giant threat, if you categorize
| everyone right of you as alt-right, which seems to be the
| standard definition.
|
| That's not how I've chosen to live, and I find that it's
| peaceful to choose something more reasonable. The body
| politic is cancer on the individual, and on the list of
| things that are important in life, it's not truly
| important. With enough introspection you'll find that the
| tendency to latch onto politics, or anything politics-
| adjacent, comes from an overall lack of agency over the
| other aspects of life you truly care about. It's a
| vicious cycle. You have a finite amount of mental energy,
| and the more you spend on worthless things, the less you
| have to spend on things that matter, which leads to you
| latching further on to the worthless things, and having
| even less to spend on things that matter.
|
| It's a race to the bottom that has only losers. If you're
| looking for genocide, that's the genocide of the modern
| mind, and you're one foot in the grave already. You can
| choose to step out now and probably be ok, but it's going
| to be uncomfortable to do so.
|
| That's all not to say there aren't horrid, problem-
| causing individuals out in the world, there certainly
| are, it's just that the less you fixate on them, the more
| you realize that they're such an extreme minority that
| you feel silly fixating on them in the first place. That
| goes for anyone that anyone deems 'horrid and problem-
| causing' mind you, not just whatever idea you have of
| that class of person.
| Dr_Incelheimer wrote:
| >4chin is too dumb to use HN
|
| I don't frequent 4cuck, I use soyjak.party which I guess
| from your perspective is even worse, but there are
| plenty of smart people on the 'cuck thoughbeit, like the
| gemmy /lit/ schizo. I think you would feel right at home in
| /sci/.
| supriyo-biswas wrote:
| Not sure about the downvotes on this comment; but what parent
| says has precedent in Cubby Inc. vs Compuserve Inc.[1] and
| this is one of the reasons Section 230 came to be in
| the first place.
|
| HN is also heavily moderated with moderators actively trying
| to promote thoughtful comments over other, less thoughtful or
| incendiary contributions by downranking them (which is
| entirely separate from flagging or voting; and unlike what
| people like to believe, this place relies more on moderator
| actions as opposed to voting patterns to maintain its vibe.)
| I couldn't possibly see this working with the removal of
| Section 230.
|
| [1]
| https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.
| singleshot_ wrote:
| If I upvote something illegal, my liability was the same
| before, during, and after 230 exists, right?
| hn_acker wrote:
| Theoretically, your liability is the same because the
| First Amendment is what absolves you of liability for
| someone else's speech. Section 230 provides an avenue for
| early dismissal in such a case if you get sued; without
| Section 230, you'll risk having to fight the lawsuit on
| the merits, which will require spending more time (more
| fees).
| lcnPylGDnU4H9OF wrote:
| > With this ruling, HN is suddenly responsibly for all posts
| here specifically because of the moderation.
|
| I think this is a mistaken understanding of the ruling. In
| this case, TikTok decided, with no other context, to make a
| personalized recommendation to a user who visited their
| recommendation page. On HN, your front page is not different
| from my front page. (Indeed, there is no personalized
| recommendation page on HN, as far as I'm aware.)
| crummy wrote:
| > The Court held that a platform's algorithm that reflects
| "editorial judgments" about "compiling the third-party
| speech it wants in the way it wants" is the platform's own
| "expressive product" and is therefore protected by the
| First Amendment.
|
| I don't see how this is about personalization. HN has an
| algorithm that shows what it wants in the way it wants.
| lcnPylGDnU4H9OF wrote:
| From the article:
|
| > TikTok, Inc., via its algorithm, recommended and
| promoted videos posted by third parties to ten-year-old
| Nylah Anderson on her uniquely curated "For You Page."
| unyttigfjelltol wrote:
| That's the difference between the case and a monolithic
| electronic bulletin board like HN. HN follows an old-
| school BB model very close to the models that existed
| when Section 230 was written.
|
| Winding up in the same place as the defendant would
| require making a unique, dynamic, individualized BB for
| each user tailored to them based on pervasive online
| surveillance and the platform's own editorial "secret
| sauce."
| empressplay wrote:
| HN is _not_ a monolithic bulletin board -- the messages
| on a BBS were never (AFAIK) sorted by 'popularity' and
| users didn't generally have the power to demote or flag
| posts.
|
| Although HN's algorithm depends (mostly) on user input
| for how it presents the posts, it still favours some over
| others and still runs afoul here. You would need a
| literal 'most recent' chronological view and HN doesn't
| have that for comments. It probably should anyway!
|
| @dang We need the option to view comments
| chronologically, please
| philipkglass wrote:
| Writing @dang is a no-op. He'll respond if he sees the
| mention, but there's no alert sent to him. Email
| hn@ycombinator.com if you want to get his attention.
|
| That said, the feature you requested is _already_
| implemented but you have to know it is there. Dang
| mentioned it in a recent comment that I bookmarked:
| https://news.ycombinator.com/item?id=41230703
|
| To see comments on this story sorted newest-first, change
| the link to
|
| https://news.ycombinator.com/latest?id=41391868
|
| instead of
|
| https://news.ycombinator.com/item?id=41391868
| tsimionescu wrote:
| The HN team explicitly and manually manages the front
| page of HN, so I think it's completely unarguable that
| they would be held liable under this ruling if at least
| the front page contained links to articles that caused
| harm. They manually promote certain posts that they find
| particularly good, even if they didn't get a lot of
| votes, so this is even more direct than what TikTok did
| in this case.
| philistine wrote:
| The decision specifically mentions algorithmic
| recommendation as being speech, ergo the recommendation
| itself is the responsibility of the platform.
|
| Where is the algorithmic recommendation that differs per
| user on HN?
| skeptrune wrote:
| Key words are "editorial" and "secret sauce". Platforms
| should not be liable for dangerous content which slips
| through the cracks, but certainly should be when their
| user-personalized algorithms mess up. Can't have your
| cake and eat it too.
| wk_end wrote:
| It'd be interesting to know what constitutes an
| "algorithm". Does a message board sorting by "most
| recent" count as one?
| saratogacx wrote:
| > algorithm that reflects "editorial judgments"
|
| I don't think timestamps are, in any way, construed as
| editorial judgement. They are a content-agnostic
| attribute.
| srj wrote:
| What about filtering spam? Or showing the local weather /
| news headlines?
| bitshiftfaced wrote:
| Or ordering posts by up votes/down votes, or some
| combination of that with the age of the post.
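|
| Roughly the kind of thing HN itself is commonly described as
| doing, for what it's worth (the formula and constants below are
| the oft-cited community approximation, not anything official):
|
|   # Hypothetical sketch of a votes-plus-age "gravity" ranking.
|   def rank_score(points, age_hours, gravity=1.8):
|       # Votes push a post up; age steadily pulls it back down.
|       return (points - 1) / ((age_hours + 2) ** gravity)
|
|   def front_page(posts):
|       return sorted(posts, reverse=True,
|                     key=lambda p: rank_score(p["points"], p["age_hours"]))
|
| Notably there is nothing per-user in it, which is exactly the
| distinction people in this thread are drawing.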
| klik99 wrote:
| Specifically NetChoice argued that personalized feeds
| based on user data were protected as first-party
| speech. This went to the Supreme Court and the Supreme
| Court agreed. Now precedent is set by the highest court that
| those feeds are "expressive product". It doesn't make
| sense, but that's how the law works - by trying to define
| as best as possible the things in gray areas.
|
| And they probably didn't think through how this
| particular argument could affect other areas of their
| business.
| zerocrates wrote:
| So, yes, the TikTok FYP is different from a forum with
| moderation.
|
| But the basis of this ruling is basically "well the
| _Moody_ case says that curation
| /moderation/suggestion/whatever is First Amendment
| protected speech, therefore that's _your_ speech and not
| somebody else's and so 230 doesn't apply and you can be
| liable for it." That rationale extends to basically any
| form of moderation or selection, personalized or not, and
| would blow a big hole in 230's protections.
|
| Given generalized anti-Big-Tech sentiment on both ends of
| the political spectrum, I could see something that
| claimed to carve out just algorithmic
| personalization/suggestion from protection meeting with
| success, either out of the courts or Congress, but it
| really doesn't match the current law.
| pessimizer wrote:
| Doesn't seem to have anything to do with personalization
| to me, either. It's about "editorial judgement," and an
| algorithm isn't necessarily a get out of jail free card
| unless the algorithm is completely transparent and user-
| adjustable.
|
| I even think it would count if the only moderation you
| did on your Lionel model train site was to make sure that
| most of the conversation was about Lionel model trains,
| and that they be treated in a positive (or at least
| neutral) manner. That degree of moderation, for that
| purpose, would make you liable if you left illegal or
| tortious content up i.e. if you moderate, you're a
| moderator, and your first duty is legal.
|
| If you're just a dumb pipe, however, you're a dumb pipe
| and get section 230.
|
| I wonder how this works with recommendation algorithms,
| though, seeing as they're also trade secrets. Even when
| they're not dark and predatory (advertising related.) If
| one has a recommendation algo that makes better e.g. song
| recommendations, you don't want to have to share it.
| Would it be something you'd have to privately reveal to a
| government agency (like having to reveal the composition
| of your fracking fluid to the EPA, as an example), and
| they would judge whether or not it was "editorial" or
| not?
|
| [edit: that being said, it would probably be very hard to
| break the law with a song recommendation algorithm. But
| I'm sure you could run afoul of some financial law still
| on the books about payola, etc.]
| lesuorac wrote:
| Per the court of appeals, TikTok is not in trouble for
| showing a blackout challenge video. TikTok is in trouble
| for not censoring them after knowing they were causing
| harm.
|
| > "What does all this mean for Anderson's claims? Well, §
| 230(c)(1)'s preemption of traditional publisher liability
| precludes Anderson from holding TikTok liable for the
| Blackout Challenge videos' mere presence on TikTok's
| platform. A conclusion Anderson's counsel all but concedes.
| But § 230(c)(1) does not preempt distributor liability, so
| Anderson's claims seeking to hold TikTok liable for
| continuing to host the Blackout Challenge videos knowing
| they were causing the death of children can proceed."
|
| As in, dang would be liable if, say, somebody started a
| blackout challenge post on HN and he didn't start censoring
| all of them once news reports of programmers dying broke
| out.
|
| https://fingfx.thomsonreuters.com/gfx/legaldocs/mopaqabzypa
| /...
| wahnfrieden wrote:
| What constitutes "censoring all of them"?
| altairprime wrote:
| Trying to define "all" is an impossibility; but, by
| virtue of having taken no action whatsoever, answering
| that question is irrelevant in the context of this
| particular judgment: Tiktok took no action, so the
| definition of "all" is irrelevant. See also for example:
| https://news.ycombinator.com/item?id=41393921
|
| In general, judges will be ultimately responsible for
| evaluating whether "any", "sufficient", "appropriate",
| etc. actions were taken in each future case judgement
| they make. As with all things legalese, it's impossible
| to define with certainty a specific _degree_ of action
| that is the uniform boundary of acceptable; but, as
| evident here, "none" is no longer permissible in that
| set.
|
| (I am not your lawyer, this is not legal advice.)
| mattigames wrote:
| Any good-faith attempt at censoring would have been a
| reasonable defense even if technically they don't censor
| 100% of them, such as blocking videos with the word
| "blackout" in their title or manually approving such
| videos, but they did nothing instead.
| sangnoir wrote:
| > TikTok is in trouble for not censoring them after
| knowing they were causing harm.
|
| This has interesting higher-order effects on free speech.
| Let's apply the same ruling to vaccine misinformation, or
| the ability to organize protests on social media (which
| opponents will probably call riots if there are any
| injuries)
| lesuorac wrote:
| Uh yeah, the court of appeals has reached an interesting
| decision.
|
| But I mean what do you expect from a group of judges that
| themselves have written they're moving away from
| precedent?
| sangnoir wrote:
| I don't doubt the same court relishes the thought of
| deciding what "harm" is on a case-by-case basis. The
| continued politicization of the courts will not end well
| for a society that nominally believes in the rule of law.
| Some quarters have been agitating for removing § 230 safe
| harbor protections (or repealing it entirely), and the
| courts have delivered.
| mattigames wrote:
| The naivety of kids, who readily believe and are easily
| influenced by what they see online, had a big role in this
| ruling; disregarding that is a huge disservice to a
| productive discussion.
| whartung wrote:
| Does TikTok have to know that "as a category blackout
| videos are bad" or that "this specific video is bad"?
|
| Does TikTok have to preempt this category of videos in the
| future or simply respond promptly when notified such a
| video is posted to their system?
| jay_kyburz wrote:
| Are you asking about the law, or are you asking our
| opinion?
|
| Do you think it's reasonable for social media to send
| videos to people without considering how harmful they
| are?
|
| Do you even think it's reasonable for a search engine to
| respond to a specific request for this information?
| oceanplexian wrote:
| Did some hands come out of the screen, pull a rope out
| then choke someone? Platforms shouldn't be held
| responsible when 1 out of a million users wins a Darwin
| award.
| autoexec wrote:
| I think it's a very different conversation when you're
| talking about social media sites pushing content they
| know is harmful onto people who they know are literal
| children.
| autoexec wrote:
| Personally, I wouldn't want search engines censoring
| results for things explicitly searched for, but I'd still
| expect that social media should be responsible for
| harmful content they push onto users who never asked for
| it in the first place. Push vs Pull is an important
| distinction that should be considered.
| Manuel_D wrote:
| But something like Reddit would be held liable for showing
| posts, then. Because you get shown different results
| depending on the subreddits you subscribe to, your browsing
| patterns, what you've upvoted in the past, and more. Pretty
| much any recommendation engine is a no-go if this ruling
| becomes precedent.
| TheGlav wrote:
| From my reading, if the site only shows you based on your
| selections, then it wouldn't be liable. For example, if
| someone else with the exact same selections gets the same
| results, then that's not their platform deciding what to
| show.
|
| If it does any customization based on what it knows about
| you, or what it tries to sell you because you are you,
| then it would be liable.
|
| Yep, recommendation engines would have to be very
| carefully tuned, or you risk becoming liable.
| Recommending only curated content would be a way to
| protect yourself, but that costs money that companies
| don't have to pay today. It would be doable.
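|
| As a toy illustration of that line (hypothetical code, not any
| platform's real logic): the first feed is a pure function of the
| user's explicit selections, so two users with identical
| selections get identical results; the second mixes in per-user
| behavioral signals, which is where the platform starts choosing
| for you.
|
|   # Hypothetical sketch: selection-driven vs. personalized feeds.
|   def feed_from_selections(posts, subscribed_topics):
|       # Deterministic in the user's explicit choices only.
|       chosen = [p for p in posts if p["topic"] in subscribed_topics]
|       return sorted(chosen, key=lambda p: p["posted_at"], reverse=True)
|
|   def personalized_feed(posts, subscribed_topics, watch_history):
|       # Adds inferred, per-user signals: the platform now decides
|       # what this particular person should see.
|       def score(p):
|           affinity = sum(1 for w in watch_history if w["topic"] == p["topic"])
|           return affinity * 10 + p["likes"]
|       candidates = [p for p in posts if p["topic"] in subscribed_topics]
|       return sorted(candidates, key=score, reverse=True)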
| djhn wrote:
| It could be difficult to draw the line. I assume TikTok's
| suggestions are deterministic enough that an identical
| user would see the same things - it's just incredibly
| unlikely to be identical at the level of granularity that
| TikTok is able to measure due to the type of content and
| types of interactions the platform has.
| Manuel_D wrote:
| > For example, if someone else with the exact same
| selections gets the same results, then that's not their
| platform deciding what to show.
|
| This could very well be true for TikTok. Of course
| "selection" would include liked videos, how long you
| spend watching each video, and how many videos you have
| posted
|
| And on the flip side a button that brings you to a random
| video would supply different content to users regardless
| of "selections".
| lesuorac wrote:
| TBH, Reddit really shouldn't have 230 protection anyways.
|
| You can't be licensing user content to AI as it's not
| yours. You also can't be undeleting posts people make
| (otherwise it's really reddit's posts and not theirs).
|
| When you start treating user data as your own; it should
| become your own and that erodes 230.
| autoexec wrote:
| > You also can't be undeleting posts people make
|
| undeleting is bad enough, but they've edited the content
| of user's comments too.
| raydev wrote:
| It belongs to reddit, the user handed over the content
| willingly.
| Manuel_D wrote:
| > You can't be licensing user content to AI as it's not
| yours.
|
| It is theirs. Users agreed to grant Reddit a license to
| use the content when they accepted the terms of service.
| juliangmp wrote:
| >Pretty much any recommendation engine is a no-go if this
| ruling becomes precedent.
|
| That kind of sounds... great? The only instance where I
| genuinely like to have a recommendation engine around is
| music steaming. Like yeah sometimes it does recommend
| great stuff. But anywhere else? No thank you
| spamizbad wrote:
| I feel like the end result of path #1 is that your site just
| becomes overrun with spams and scams. See also: mail,
| telephones.
| aftbit wrote:
| Yeah, no moderation leads to spams, scams, rampant hate,
| and CSAM. I spent all of an hour on Voat when it was in its
| heyday and it mostly literal Nazis calling for the
| extermination of undesirables. The normies just stayed on
| moderated Reddit.
| redeeman wrote:
| voat wasn't exactly a single place, any more than reddit
| is
| snapcaster wrote:
| Were there non-KKK/nazi/qanon subvoats (or whatever they
| call them)? The one time I visited the site, every single
| post on the frontpage was alt-right nonsense.
| tzs wrote:
| Yes. There were a ton of them for various categories of
| sex drawings, mostly in the style common in Japanese
| comics and cartoons.
| autoexec wrote:
| It was the people who were chased out of other websites
| that drove much of their traffic so it's no surprise that
| their content got the front page. It's a shame that they
| scared so many other people away and downvoted other
| perspectives because it made diversity difficult.
| wredue wrote:
| Nah. HN is not the same as these others.
|
| TikTok. Facebook. Twitter. YouTube.
|
| All of these have their algorithms specifically curated to
| try to keep you angry. YouTube outright ignores your blocks
| every couple months, and no matter how many people dropping
| n-bombs you report and block, it never endingly pushes more
| and more.
|
| These companies know that their algorithms are harmful and they
| push them anyway. They absolutely should have liability for
| what their algorithm pushes.
| akira2501 wrote:
| There's moderation to manage disruption to a service. There's
| editorial control to manage the actual content on a service.
|
| HN engages in the former but not the latter. The big three
| engage in the latter.
| closeparen wrote:
| HN engages in the latter. For example, user votes are
| weighted based on their alignment with the moderation
| team's view of good content.
| akira2501 wrote:
| I don't understand your explanation. Do you mean just
| voting itself? That's not controlled or managed by HN.
| That's just more "user generated content." That posts get
| hidden or flagged due to thresholding is non-
| discriminatory and not _individually_ controlled by the
| staff here.
|
| Or.. are you suggesting there's more to how this works?
| Is dang watching votes and then making decisions based on
| those votes?
|
| "Editorial control" is more of a term of art and has a
| narrower definition than you're allowing for.
| empressplay wrote:
| There's things like 'second chance' where the editorial
| team can re-up posts they feel didn't get a fair shake
| the first time around, sometimes if a post gets too 'hot'
| they will cool it down -- all of this is understandable
| but unfortunately does mean they are actively moderating
| content and thus are responsible for all of it.
| krapp wrote:
| Dang has been open about voting being only one part of
| the way HN works, and that manual moderator intervention
| does occur. They will downweigh the votes of "problem"
| accounts, manually adjust the order of the frontpage, and
| do whatever they feel necessary to maintain a high signal
| to noise ratio.
| tsimionescu wrote:
| The HN moderation team makes a lot of editorial choices,
| which is what gives HN its specific character. For
| example, highly politically charged posts are manually
| moderated and kept off the main page regardless of votes,
| with limited exceptions entirely up to the judgement of
| the editors. For example, content about the wars in
| Ukraine and Israel is not allowed on the mainpage except
| on rare occasions. dang has talked a lot about the
| reasoning behind this.
|
| The same applies to comments on HN. Comments are not
| moderated based purely on legal or certain general "good
| manners" grounds, they are moderated to keep a certain
| kind of discourse level. For example, shallow jokes or
| meme comments are not generally allowed on HN. Comments
| that start discussing controversial topics, even if
| civil, are also discouraged when they are not on-topic.
|
| Overall, HN is very much curated in the direction of a
| newspaper "letter to the editor" section, rather than
| algorithmic and hands-off like the Facebook wall or
| TikTok feed. So there is no doubt whatsoever, I believe,
| that HN would be considered responsible for user content
| (and is, in fact, already pretty good at policing that in
| my experience, at least on the front page).
| zahlman wrote:
| > The HN moderation team makes a lot of editorial
| choices, which is what gives HN its specific character.
| For example, highly politically charged posts are
| manually moderated and kept off the main page regardless
| of votes, with limited exceptions entirely up to the
| judgement of the editors. For example, content about the
| wars in Ukraine and Israel is not allowed on the mainpage
| except on rare occasions. dang has talked a lot about the
| reasoning behind this.
|
| This is meaningfully different in kind from only
| excluding posts that reflect _certain perspectives_ on
| such a conflict. Maintaining topicality is not imposing a
| bias.
| tboyd47 wrote:
| Under Judge Matey's interpretation of Section 230, I don't
| even think option 1 would remain on the table. He includes
| every act except mere "hosting" as part of publisher
| liability.
| itishappy wrote:
| 4chan is actually moderated too.
| pointnatu wrote:
| Freedom of speech, not reach of their personal curation
| preferences, narrative shaping due to confirmation bias and
| survivorship bias. Tech is in the business of putting them on
| scales to increase their signal and decrease others' based upon
| some hokey story of academic and free-market genius.
|
| The pro-science crowd (which includes me fwiw) seems
| incapable of providing a proof any given scientist is _that_
| important. The same old social politics norms inflate some and
| deflate others, and we take our survival as confirmation we're
| special. One's education is vacuous prestige given physics applies
| equally; oh you did the math! Yeah I just tell the computer
| to do it. Oh you memorized the circumlocutions and dialectic
| of some long dead physicist. Outstanding.
|
| There's a lot of ego driven banal classist nonsense in tech
| and science. At the end of the day just meat suits with the
| same general human condition.
| coryrc wrote:
| 2) Require confirmation you are a real person (check ID) and
| attach accounts per person. The commercial Internet has to
| follow the laws they're currently ignoring and the non-
| commercial Internet can do what they choose (because of being
| untraceable).
| ryandrake wrote:
| I look at forums and social media as analogous to writing a
| "Letter to the Editor" to a newspaper:
|
| In the newspaper case, you write your post, send it to the
| newspaper, and some editor at the newspaper decides whether or
| not to publish it.
|
| In Social Media, the same thing happens, but it's just super
| fast and algorithmic: You write your post, send it to the
| Social Media site (or forum), an algorithm (or moderator) at
| the Social Media site decides whether or not to publish it.
|
| I feel like it's reasonable to interpret this kind of editorial
| selection as "promotion" and "recommendation" of that comment,
| particularly if the social media company's algorithm
| deliberately places that content into someone's feed.
| jay_kyburz wrote:
| I agree.
|
| I think if social media companies relayed communication
| between its users with no moderation at all, then they
| should be entitled to carrier protections.
|
| As soon as they start making any moderation decisions, they
| are implicitly endorsing all other content, and should
| therefore be held responsible for it.
|
| There are two things social media can do. Firstly, they
| should accurately identify their users before allowing them to
| post, so they can counter-sue that person if a post harms them,
| and secondly, they can moderate every post.
|
| Everybody says this will kill social media as we know it, but
| I say the world will be a better place as a result.
| immibis wrote:
| Refusal to moderate, though, is also a bias. It produces a bias
| where the actors who post the most have their posts seen the
| most. Usually these posts are Nigerian princes, Viagra vendors,
| and the like. Nowadays they'll also include massive quantities
| of LLM-generated cryptofascist propaganda (but not
| cryptomarxist propaganda because cryptomarxists are incompetent
| at propaganda). If you moderate the spam, you're biasing the
| site away from these groups.
| itsdrewmiller wrote:
| You can't just pick anything and call it a "bias" -
| absolutely unmoderated content may not (will not) represent
| the median viewpoint, but it's not the hosting provider
| "bias" doing so. Moderating spam is also not "bias" as long
| as you're applying content-neutral rules for how you do that.
| AnthonyMouse wrote:
| > I think the ultimate problem is that social media is not
| unbiased -- it curates what people are shown.
|
| This is literally the purpose of Section 230. It's Section 230
| of the _Communications Decency Act_. The purpose was to change
| the law so platforms could moderate content without incurring
| liability, because the law was previously that doing any
| moderation made you liable for whatever users posted, and you
| don't want a world where removing/downranking spam or
| pornography or trolling causes you to get sued for unrelated
| things you didn't remove.
| samrus wrote:
| Yeah but they're not just removing spam and porn. They're
| picking out things that makes them money even if it harms
| people. That was never in the spirit of the law
| zahlman wrote:
| > The purpose was to change the law so platforms could
| moderate content
|
| What part of deliberately showing political content to people
| algorithmically expected to agree with it, constitutes
| "moderation"?
|
| What part of deliberately showing political content to people
| algorithmically expected to _disagree_ with it, constitutes
| "moderation"?
|
| What part of deliberately suppressing or promoting political
| content based on the opinions of those in charge of the
| platform, constitutes "moderation"?
|
| What part of suppressing "misinformation" on the basis of
| what's said in "reliable sources" (rather than any
| independent investigation - but really the point would still
| stand), constitutes "moderation"?
|
| What part of favouring content from already popular content
| creators because it brings in more ad revenue, constitutes
| "moderation"?
|
| What part of algorithmically associating content with ads for
| specific products or services, constitutes "moderation"?
| tomrod wrote:
| Prosaically, all of your examples are moderation. And as a
| private space that a user must choose to access, I'd argue
| that's great.
| crooked-v wrote:
| > What part of deliberately showing political content to
| people algorithmically expected to agree with it,
| constitutes "moderation"?
|
| Well, maybe it's just me, but only showing political
| content that doesn't include "kill all the (insert minority
| here)", and expecting users to not object to that standard,
| is a pretty typical aspect of moderation for discussion
| sites.
|
| > What part of deliberately suppressing or promoting
| political content based on the opinions of those in charge
| of the platform, constitutes "moderation"?
|
| Again, deliberately suppressing support for literal and
| obvious fascism, based on the opinions of those in charge of
| the platform, is a kind of moderation so typical that it's
| noteworthy when it doesn't happen (e.g. Stormfront).
|
| > What part of suppressing "misinformation" on the basis of
| what's said in "reliable sources" (rather than any
| independent investigation - but really the point would
| still stand), constitutes "moderation"?
|
| Literally all of Wikipedia, where the whole point of the
| reliable sources policy is that the people running it don't
| have to be experts to have a decently objective standard
| for what can be published.
| bsder wrote:
| > I think the ultimate problem is that social media is not
| unbiased -- it curates what people are shown.
|
| It is not only _biased_ but also _biased for maximum
| engagement_.
|
| People come to these services for various reasons but then have
| this _specifically biased_ stuff jammed down their throats in a
| way to induce _specific behavior_.
|
| I personally don't understand why we don't hammer these social
| media sites for conducting psychological experiments without
| consent.
| shadowgovt wrote:
| HN also has an algorithm.
|
| I'll have to read the third circuit's ruling in detail to
| figure out whether they are trying to draw a line in the sand
| on whether an algorithm satisfies the requirements for section
| 230 protection or falls outside of it. If that's what they're
| doing, I wouldn't assume a priori that a site like Hacker News
| won't also fall afoul of the law.
| EasyMark wrote:
| I think HN sees this as just more activist judges trying to
| overrule the will of the people (via Congress). This judge is
| attempting to interject his opinion on the way things should be
| vs what a law passed by the highest legislative body in the
| nation as if that doesn't count. He is also doing it on very
| shaky ground, but I wouldn't expect anything less of the 3rd
| circuit (much like the 5th)
| smrtinsert wrote:
| This is a much needed regulation. If anything it will probably
| spur innovation to solve safety in algorithms.
|
| I think of this more along the lines of preventing a factory
| from polluting a water supply or requiring a bank to have
| minimum reserves.
| deafpolygon wrote:
| Section 230 is alive and well, and this ruling won't impact it.
| What will change is that US social media firms will move away
| from certain types of algorithmic recommendations. Tiktok is
| owned by Bytedance which is a Chinese firm, so in the long run -
| no real impact.
| seydor wrote:
| The ruling itself says that this is not about 230, it's about
| TikTok's curation and collation of the specific videos. TikTok is
| not held liable for the user content but for the part where they
| do their 'for you' section. I guess it makes sense; manipulating
| people is not OK whether it's for political purposes as Facebook
| and Twitter do, or whatever. So 230 is not over.
|
| It would be nice to see those 'For you' and YouTube's
| recommendations gone. Chronological timelines are the best, and
| will bring back some sanity. Don't like it? Don't follow it.
|
| > Accordingly, TikTok's algorithm, which recommended the Blackout
| Challenge to Nylah on her FYP, was TikTok's own "expressive
| activity," id., and thus its first-party speech.
|
| >
|
| > Section 230 immunizes only information "provided by another[,]"
| 47 U.S.C. § 230(c)(1), and here, because the information that
| forms the basis of Anderson's lawsuit--i.e., TikTok's
| recommendations via its FYP algorithm--is TikTok's own expressive
| activity, § 230 does not bar Anderson's claims.
| falcolas wrote:
| > Don't like it? don't follow it
|
| How did you find _it_ in the first place? A search? Without any
| kind of filtering (that's an algorithm that could be used to
| manipulate people), all you'll see is pages and pages of SEO.
|
| Opening up liability like this is a quagmire that's not going
| to do good things for the internet.
| pixl97 wrote:
| >not going to do good things for the internet.
|
| Not sure if you've noticed, but the internet seemingly ran
| out of good things quite some time back.
| falcolas wrote:
| Irrelevant and untrue.
|
| For example, just today there was a highly entertaining and
| interesting article about how to replace a tablet-based
| thermostat. And it was posted on the internet, and surfaced
| via an algorithm on Hacker News.
| rtkwe wrote:
| The question though is how do you do a useful search
| without having some kind of algorithmic answer to what you
| think the user will like. Explicit user selections or exact-
| match strings are simple, but if I search "cats" looking for cat
| videos how does that list get presented without being a curated
| list made by the company?
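|
| For what it's worth, a content-neutral version of that search is
| at least conceivable (a toy sketch with made-up field names):
| score only on how well the text matches the query plus recency,
| with nothing about the individual user in the function. Whether
| a court would still call that "curation" is the open question.
|
|   # Hypothetical sketch: query-only ranking, no per-user signals.
|   from datetime import datetime, timezone
|
|   def search(videos, query):
|       terms = query.lower().split()
|       now = datetime.now(timezone.utc)
|
|       def score(v):
|           text = (v["title"] + " " + v["description"]).lower()
|           matches = sum(text.count(t) for t in terms)
|           age_days = (now - v["uploaded_at"]).days + 1
|           return matches / age_days  # identical output for every user
|
|       return sorted(videos, key=score, reverse=True)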
| seydor wrote:
| retweets/sharing. that's how it used to be
| falcolas wrote:
| How did they find it? How did you see the tweet? How did
| the shared link show up in any of your pages?
|
| Also, lists of content (or blind links to random pages -
| web rings) have been a thing since well before Twitter or
| Digg.
| skydhash wrote:
| How do rumors and news propagate? And we still consider
| that the person sharing it with us is partially
| responsible (especially if it's fake).
| jimbob45 wrote:
| _Without any kind of filtering (that's an algorithm that
| could be used to manipulate people)_
|
| Do you genuinely believe a judge is going to rule that a
| Boyer-Moore implementation is fundamentally biased? It seems
| likely that sticking with standard string matching will
| remain safe.
| aiauthoritydev wrote:
| > Chronological timelines are the best , and will bring back
| some sanity. Don't like it? don't follow it
|
| You realize that there is immense arrogance in this statement
| where you have decided that something is good for me? I am
| totally fine with youtube's recommendations or even Tiktok's
| algorithms that according to you "manipulate" me.
| seydor wrote:
| You can have them, but they have legal consequences for the
| owner.
| cvalka wrote:
| How can they have them if they are prohibited?
| skydhash wrote:
| They're not prohibited. They're just liable for it, just
| like manufacturers are liable for defective products that
| endanger people.
| WalterBright wrote:
| > manipulating people is not OK whether it's for political
| purposes as facebook and twitter do
|
| Not to mention CNN, MSNBC, the New York Times, NPR, etc.
| seydor wrote:
| Those are subject to legal liability for the content they
| produce.
| dvngnt_ wrote:
| how does that work for something like TikTok? Chronological
| doesn't have much value if you're trying to discover
| interesting content relevant to your interests.
| Xcelerate wrote:
| I'm not at all opposed to implementing _new_ laws that society
| believes will reduce harm to online users (particularly
| children).
|
| However, if Section 230 is on its way out, won't this just
| benefit the largest tech companies that already have massive
| legal resources and the ability to afford ML-based or manual
| content moderation? The barriers to entry into the market for
| startups will become insurmountable. Perhaps I'm missing
| something here, but it sounds like the existing companies
| essentially got a free pass with regard to liability of user-
| provided content and had plenty of time to grow, and now the
| government is pulling the ladder up after them.
| tboyd47 wrote:
| The assertion made by the author is that the way these
| companies grew is only sustainable in the current legal
| environment. So the advantage they have right now by being
| bigger is nullified.
| xboxnolifes wrote:
| Yes, the way they _grew_ is only sustainable in the
| current environment. What about not growing, but maintaining?
| lelandbatey wrote:
| The parent said "grew", but I think a closer reading of the
| article indicates a more robust idea that tboyd47 merely
| misrepresented. A better sentence is potentially:
|
| _are able to profit to the tune of a 40% margin on
| advertising revenue_
|
| With that, they're saying that they're only going to be
| able to profit this much in this current regulatory
| environment. If that goes away, so too does much of their
| margin, potentially all of it. That's a big blow no matter
| the size, though Facebook may weather it better than
| smaller competitors.
| 2OEH8eoCRo0 wrote:
| > won't this just benefit the largest tech companies
|
| I'd wager the bigger you are the harder it gets. How would they
| fend off tens of thousands of simultaneous lawsuits?
| oldgregg wrote:
| Insane reframing. Big tech and politicians are pushing this,
| pulling the ladder up behind them-- X and new decentralized
| networks are a threat to their hegemony and this is who they are
| going after. Startups will not be able to afford whatever
| bullshit regulatory framework they force feed us. How about they
| mandate any social network over 10M MAU has to publish their
| content algorithms.. ha!
| jrockway wrote:
| I'm not sure that Big Tech is over. Media companies have had a
| viable business forever. What happens here is that instead of
| going to social media and hearing about how to fight insurance
| companies, you'll just get NFL Wednesday Night Football Presented
| By TikTok.
| tomcam wrote:
| Have to assume dang is moderating his exhausted butt off, because
| the discussion on this page is vibrant and courteous. Thanks all!
| itsdrewmiller wrote:
| I agree, and for that reason I will be suing Hacker News in
| Pennsylvania, New Jersey, Delaware, or the Virgin Islands.
| delichon wrote:
| TikTok, Inc., via its algorithm, recommended and promoted videos
| posted by third parties to ten-year-old Nylah Anderson on her
| uniquely curated "For You Page." One video depicted the "Blackout
| Challenge," which encourages viewers to record themselves
| engaging in acts of self-asphyxiation. After watching the video,
| Nylah attempted the conduct depicted in the challenge and
| unintentionally hanged herself. --
| https://cases.justia.com/federal/appellate-courts/ca3/22-3061/22-3061-2024-08-27.pdf?ts=1724792413
|
| An algorithm accidentally enticed a child to hang herself. I've
| got code running on dozens of websites that recommends articles
| to read based on user demographics. There's nothing in that code
| that would or could prevent an article about self-asphyxiation
| being recommended to a child. It just depends on the clients that
| use the software not posting that kind of content, people with
| similar demographics to the child not reading it, and a child who
| gets the recommendation not reading it and acting it out. If
| those assumptions fail should I or my employer be liable?
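|
| For concreteness, the kind of logic I mean is roughly this (a
| simplified, hypothetical sketch, not the actual code): nothing
| in it knows what an article is about, only who else with a
| similar profile read it.
|
|   # Hypothetical sketch: demographic-similarity recommendations.
|   # Note there is no content-safety check anywhere in this path.
|   def similarity(a, b):
|       # Crude demographic overlap: count of shared attribute values.
|       return sum(1 for k in a if k in b and a[k] == b[k])
|
|   def recommend(reader, readers_by_article, k=5):
|       scores = {
|           article_id: sum(similarity(reader, r) for r in readers)
|           for article_id, readers in readers_by_article.items()
|       }
|       ranked = sorted(scores, key=scores.get, reverse=True)
|       return ranked[:k]  # whatever similar people read, harmful or not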
| depingus wrote:
| Isn't it usually the case that when someone builds a shitty
| thing and people get hurt, the builder is liable?
| ineedaj0b wrote:
| Yeah, but buying a hammer and hitting yourself with it is
| different.
|
| The dangers of social media are unknown to most still.
| depingus wrote:
| Yes. Buying a hammer and hitting yourself with it IS
| different.
| spacemadness wrote:
| Yes, because a mechanical tool made of solid metal is the
| same thing as software that can change its behavior at any
| time and is controlled live by some company with its own
| motives.
| ThunderSizzle wrote:
| It'd be more akin to buying a hammer and then the hammer
| starts morphing into a screwdriver without you noticing.
|
| Then when you accidentally hit your hand with the hammer,
| you actually stabbed yourself. And that's when you realized
| your hammer is now a screwdriver.
| x0x0 wrote:
| You would think so, wouldn't you?
|
| Except right now YouTube has a self-advertisement in the
| middle of the page warning people _not to trust the content
| on YouTube_. A company warning people not to trust the
| product they built and the videos they choose to show you...
| we need to rethink 230. We've gone seriously awry.
| tines wrote:
| It's more nuanced than that. If I sent a hateful letter
| through the mail and someone gets hurt by it (even
| physically), who is responsible, me or the post office?
|
| I know youtube is different in important ways than the
| post, but it's also different in important ways from e.g.
| somebody who builds a building that falls down.
| mihaaly wrote:
| Yes.
|
| Or you do things that give you rewards - and do not care what
| they result in otherwise - but you want to be saved from any
| responsibility (automatically!) for what they cause just because
| it is an algorithm?
|
| Enjoying the benefits but running away from responsibility
| is a cowardly and childish act. Childish acts need supervision
| from adults.
| EasyMark wrote:
| What happened to that child is on the parents not some
| programmer who coded an optimization algorithm. It's really
| as simple as that. No 10 year old should be on TikTok; I'm
| not sure anyone under 18 should be, given the garbage,
| dangerous misinformation, intentional disinformation, and
| lack of any ability to control what your child sees.
| itishappy wrote:
| Do you feel the same way about the sale of alcohol? I do
| see the argument for parental responsibility, but I'm not
| sure how parents will enforce that if the law allows people
| to sell kids alcohol free from liability.
| tines wrote:
| This is a good argument I didn't think of before. What's
| the response to it?
| Flozzin wrote:
| We regulate the sale of all sorts of things that can do
| damage but also have other uses. You can't buy large
| amounts of certain cold medicines, and you need to be an
| adult to do so. You can't buy fireworks if you are a
| minor in most places. In some countries they won't even
| sell you a set of steak knives if you are underage.
|
| Someone else's response was that a 10 year old should not
| be on TikTok. Well then how did they get past the age
| restrictions? (I'm guessing it's a check box at best.) So
| it's inadequately gated. But really, I don't think it's the
| sort of thing that needs an age gate.
|
| They are responsible for a product that is actively
| targeting harmful behavior at children and adults. It's
| not ok in either situation. You cannot allow your
| platform to be hijacked for content like this. Full stop.
|
| These 'services' need better ways to moderate content,
| whether that is more controls that allow them to delete
| certain posts and videos or some other method to contain
| videos like this. You cannot just allow users to upload and
| share whatever they want, and then, on top of that, have
| your own systems promote these videos.
|
| Everyone who makes a product (especially for mass
| consumption) has a responsibility to make sure their
| product is safe. If your product is so complicated that
| you can't control it, then you need to step back and
| re-evaluate how it's functioning, not just plow ahead,
| making money, letting it harm people.
| EasyMark wrote:
| Alcohol (the consumption form) serves only one purpose: to
| get you buzzed. Unlike algorithms and hammers, which are
| generic and serve many purposes, some of which are
| positive, especially when used correctly. You can't sue
| the people who make hammers if someone kills another
| person with one.
| itishappy wrote:
| It sounds like your algorithm targets children with unmoderated
| content. That feels like a dangerous position with potential
| for strong arguments in either direction. I think the only
| reasonable advice here is to keep close tabs on this case.
| drpossum wrote:
| You sure are if you knew about it (as TikTok did).
| troyvit wrote:
| Right?
|
| Like if I'm a cement company, and I build a sidewalk that's
| really good and stable, stable enough for a person to plant a
| milk crate on it, and stand on that milk crate, and hold up a
| big sign that gives clear instructions on self-asphyxiation,
| and a child reads that sign, tries it out and dies, am I going
| to get sued? All I did was build the foundation for a platform.
| averageRoyalty wrote:
| That's not a fair analogy, though. To be fairer, you'd have to
| monitor said footpath 24/7 and have a robot and/or a number
| of people removing milk crate signs that you deemed
| inappropriate for your footpath. They'd also move various
| milk crate signs in front of people as they walked and hide
| others.
|
| If you were indeed monitoring the footpath for milk crate
| signs and moving them, then yes, you may be liable for
| showing one to someone it wasn't appropriate for, or for
| failing to remove it.
| troyvit wrote:
| That's a good point, and actually the heart of the issue,
| and what I missed.
|
| In my analogy the stable sidewalk that can hold the milk
| crate is both the platform and the optimization algorithm.
| But to your point there's actually a lot more going on with
| the optimization than just building a place where any rando
| can market self-asphyxiation. It's about how they willfully
| targeted people with that content.
| awongh wrote:
| I think it depends on some technical specifics, like which
| metadata was associated with that content, and the degree to
| which that content was surfaced to users who fit the
| demographic profile of a ten-year-old child.
|
| If your algorithm decides that things in the 90th percentile of
| shock value will boost engagement for a user profile that can
| also include users who are ten years old, then you may have
| built a negligent algorithm. Maybe that's not the case in this
| particular instance, but it could be.
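|
| As a purely illustrative sketch (hypothetical names and
| weights, not TikTok's actual system), the failure mode being
| alleged is a ranker that optimizes engagement signals such as
| shock value without any age-aware filtering of what it
| surfaces:
|
|     # Hypothetical engagement-only ranker; every name and
|     # weight here is made up for illustration.
|     def rank_for_user(user, videos):
|         def score(v):
|             # Shock value boosts reach by boosting engagement.
|             return 0.6 * v["watch_time"] + 0.4 * v["shock_value"]
|         ranked = sorted(videos, key=score, reverse=True)
|         # The alleged negligence: nothing checks user["age"],
|         # so a ten-year-old's feed is ranked exactly like an
|         # adult's. A safer sketch would filter first, e.g.:
|         #   if user["age"] < 13:
|         #       ranked = [v for v in ranked
|         #                 if v["age_appropriate"]]
|         return ranked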
| thinkingtoilet wrote:
| Of course you should be. Just because an algorithm gave you an
| output doesn't absolve you of responsibility for using it. It's
| not some magical, mystical thing. It's something you created,
| and you are 100% responsible for what you do with its output.
| thih9 wrote:
| Yes, if a product actively contributes to child fatalities then
| the manufacturer should be liable.
|
| Then again, I guess your platform is about article
| recommendation and not about recording yourself doing popular
| trends. And perhaps children are not your target audience, or
| an audience at all. In many ways the situation was different
| for TikTok.
| dwallin wrote:
| The link to the actual decision:
| https://cases.justia.com/federal/appellate-courts/ca3/22-306...
| nitwit005 wrote:
| I am puzzled why there are no arrests in this sort of case.
| Surely, convincing kids to kill themselves is a form of homicide?
| Animats wrote:
| This turns on what TikTok "knew":
|
| _" But by the time Nylah viewed these videos, TikTok knew that:
| 1) "the deadly Blackout Challenge was spreading through its app,"
| 2) "its algorithm was specifically feeding the Blackout Challenge
| to children," and 3) several children had died while attempting
| the Blackout Challenge after viewing videos of the Challenge on
| their For You Pages. App. 31-32. Yet TikTok "took no and/or
| completely inadequate action to extinguish and prevent the spread
| of the Blackout Challenge and specifically to prevent the
| Blackout Challenge from being shown to children on their [For You
| Pages]." App. 32-33. Instead, TikTok continued to recommend these
| videos to children like Nylah._"
|
| We need to see another document, "App 31-32", to see what TikTok
| "knew". Could someone find that, please? A Pacer account may be
| required. Did they ignore an abuse report?
|
| See also Gonzalez v. Google (2023), where a similar issue
| reached the U.S. Supreme Court.[1] That was about whether
| recommending videos which encouraged the viewer to support the
| Islamic State's jihad led someone to go fight in it, where they
| were killed. The Court rejected the terrorism claim and declined
| to address the Section 230 claim.
|
| [1] https://en.wikipedia.org/wiki/Gonzalez_v._Google_LLC
| Scaevolus wrote:
| IIRC, TikTok has (had?) a relatively high-touch content
| moderation pipeline, where any video receiving more than a few
| thousand views is checked by a human reviewer.
|
| Their review process was developed to hit the much more
| stringent speech standards of the Chinese market, but it opens
| them up to even more liability here.
|
| Unfortunately, I can't find the source articles for this
| anymore; they're buried under "how to make your video go viral"
| flowcharts that elide the "when things get banned" decisions.
| itsdrewmiller wrote:
| I don't think any of that actually matters for the CDA
| liability question, but it is definitely material to whether
| they are found at fault, assuming they can be held liable at
| all.
| drpossum wrote:
| I hope this makes certain streaming platforms liable for the
| things certain podcast hosts say while they shovel money at and
| promote them above other content.
| blueflow wrote:
| Might be a cultural difference (I'm not from the US), but
| leaving a 10-year-old unsupervised with content from
| (potentially malicious) strangers really throws me off.
|
| Wouldn't this be the perfect precedent case for why minors
| should not be allowed on social media?
| hyeonwho4 wrote:
| I am also a little confused by this. I thought websites were
| not allowed to collect data from minors under 13 years of age,
| and that TikTok doesn't allow minors under 13 to create
| accounts. Why is TikTok not liable for personalizing content to
| minors? Apparently (from the court filings) TikTok even knew
| these videos were going viral among children... which should
| increase their liability under the Children's Online Privacy
| Protection Act.
| ratorx wrote:
| Assuming TikTok collects age, the minimum possible age is
| 13 (per the ToS), and a parent lets their child access the app
| despite that, I don't see how TikTok is liable.
|
| Also, I'm not sure how TikTok would know that the videos are
| viral among the protected demographic if the protected
| demographic cannot even put in the information to classify
| them as such?
|
| I don't think requiring moderation is the answer in all
| cases. As an adult, I should be allowed to consume
| unmoderated content. Should people younger than 18 be allowed
| to? Maybe.
|
| I agree that below age X, all content should be moderated. If
| you choose not to do this for your platform, then age-
| restrict the content. However, historically age-restriction
| on the internet is an unsolved problem. I think what would be
| useful is tighter legislation on how this is enforced etc.
|
| This case is not a moderation question. It is a liability
| question, because a minor has been granted access to age-
| restricted content. I think the key question is whether
| TikTok should be liable for the child/their parents having
| bypassed the age restriction (too easily)? Maybe. I'm leaning
| towards the opinion that a large amount of this
| responsibility is on the parents. If this is onerous, then
| the law should legislate stricter guidelines on content
| targeting the protected demographic as well as the gates
| blocking them.
| Yeul wrote:
| Look, your kids are going to discover all kinds of nasty things
| online or offline, so either you prepare them for it or it's
| going to be like that scene in Stephen King's Carrie.
| EasyMark wrote:
| You are correct. US parents often use social media as a
| babysitter and don't pay attention to what their kids are
| watching. No 10-year-old should be on social media or even the
| internet in an unsupervised manner; they are simply too
| impressionable and trusting. It's just negligence. My kids
| never got SM accounts before 15, after I'd had time to
| introduce them to some common sense and much-needed skepticism
| of people and information on the internet.
| carapace wrote:
| Moderation doesn't scale; it's NP-complete or worse. Massive
| social networks _sans_ moderation cannot work and cannot be made
| to work. Social networks require that the moderation system be a
| superset of the communication system, and that's not cost
| effective (except where the two are co-extensive, e.g. Wikipedia,
| Hacker News, the Fediverse). We tried it because of ignorance (in
| the first place) and greed (subsequently). This ruling is just
| recognizing reality.
| LargeWu wrote:
| This isn't a question of moderation. It's about recommendation.
| 2OEH8eoCRo0 wrote:
| Fantastic! If I had three wishes, one of them might be to repeal
| Section 230.
| BurningFrog wrote:
| Surely this will bubble up to the Supreme Court?
|
| Once they've weighed in, we'll know if the "free ride" really is
| over, and if so what ride replaces it.
| barryrandall wrote:
| I think there are a few very interesting ways this could play
| out.
|
| Option 1: ByteDance appeals, loses, and the ruling stands
|
| Option 2: ByteDance appeals, wins, and the ruling is overturned
|
| Option 3: ByteDance doesn't appeal, the ruling stands, and
| nobody has standing to appeal the ruling without bringing a new
| case.
| 2OEH8eoCRo0 wrote:
| I love this.
|
| Court: Social Media algos are protected speech
|
| Social Media: Yes! Protect us
|
| Court: Since you're speech, you must be liable for harmful
| speech as anyone else would be
|
| Social Media: No!!
| srackey wrote:
| Ah yes "social media bad". Lemme guess, "Orange man bad" too?
|
| You're cheering on expansion of government power and the end of
| the free internet as we know it.
| stainablesteel wrote:
| TikTok in general is great at targeting young women.
|
| The Chinese and Iranians are taking advantage of this, and
| that's not something I would want to entrust to them.
| 6gvONxR4sf7o wrote:
| So under this new reading of the law, is it saying that AWS is
| still not liable for what someone says on reddit, but now reddit
| might be responsible for it?
| jmyeet wrote:
| What I want to sink in for people is that whenever people talk
| about an "algorithm", they're regurgitating propaganda
| specifically designed to absolve the purveyor of responsibility
| for anything that algorithm does.
|
| An algorithm in this context is nothing more than a reflection
| of what all the humans who created it designed it to do. In
| some cases, it's to deny Medicaid claims to make money. For
| RealPage, it's to drive up rents for profit. Health insurance
| companies are using "AI" to deny claims and prior
| authorizations, forcing claimants to go through more hoops to
| get their coverage. Why? Because the extra hoops will
| discourage a certain percentage.
|
| All of these systems come down to a waterfall of steps you need
| to go through. Good design will remove steps to increase the pass
| rate. Intentional bad design will add steps and/or lower the pass
| rate.
|
| Example: in the early days of e-commerce, you had to create an
| account before you could shop. Someone (probably Amazon)
| realized they lost customers this way. The result? You could
| fill a shopping cart all you wanted and didn't have to create
| an account until you checked out. By that point you're already
| invested. The overall conversion rate is higher. Even later,
| registration itself became optional.
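|
| A back-of-the-envelope sketch of that funnel logic (the step
| names and rates below are made up, purely to illustrate why
| removing a step raises the overall conversion rate):
|
|     # Overall conversion is the product of each step's pass
|     # rate, so dropping a step raises the total.
|     def conversion(pass_rates):
|         total = 1.0
|         for rate in pass_rates:
|             total *= rate
|         return total
|
|     # browse -> sign up -> pay   vs.   browse -> pay
|     with_signup = conversion([0.9, 0.5, 0.8])    # 0.36
|     without_signup = conversion([0.9, 0.8])      # 0.72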
|
| Additionally, these big consulting companies are nothing more
| than leeches designed to drain the public purse.
| 2OEH8eoCRo0 wrote:
| I like it. What would be a better word than algorithm then?
| Design? Product?
|
| TikTok's design presented harmful information to a minor
| resulting in her death.
|
| TikTok's product presented harmful information to a minor
| resulting in her death.
| hnburnsy wrote:
| To me this decision doesn't feel like it is demolishing 230,
| but rather reducing its scope, a scope that was expanded by
| other court decisions. Per the article, 230 said not liable for
| user content and not liable for restricting content. This case
| is about liability for reinforcing content.
|
| Would love to have a timeline-only, non-reinforcing content
| feed.
| kevwil wrote:
| Whatever this means, I hope it means less censorship. That's all
| my feeble brain can focus on here: free speech good, censorship
| bad. :)
| EasyMark wrote:
| This judge supports censorship and not free speech; it's a
| tendency of the current generation of judges populating the
| court. They prefer government control over personal
| responsibility in most cases, especially the more conservative
| they get.
| endtime wrote:
| Not that it matters, but I was curious and so I looked it up: the
| three-judge panel comprised one Obama-appointed judge and two
| Trump-appointed judges.
| drbojingle wrote:
| There's no reason, as far as I'm concerned, that we shouldn't
| have a choice of algorithms on social media platforms. I want
| to be able to pick an open source algorithm whose pros and cons
| I can understand. Hell, let me pick 5. Why not?
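|
| A minimal sketch of what a pluggable, user-chosen ranker could
| look like (a hypothetical interface, not any platform's real
| API):
|
|     from typing import Callable, Dict, List
|
|     Post = Dict[str, float]
|     Ranker = Callable[[List[Post]], List[Post]]
|
|     def chronological(posts: List[Post]) -> List[Post]:
|         # No reinforcement: newest first, nothing else counts.
|         return sorted(posts, key=lambda p: p["ts"],
|                       reverse=True)
|
|     def engagement(posts: List[Post]) -> List[Post]:
|         return sorted(posts, key=lambda p: p["likes"],
|                       reverse=True)
|
|     # The user, not the platform, picks which open source
|     # ranker builds the feed.
|     rankers: Dict[str, Ranker] = {
|         "chronological": chronological,
|         "engagement": engagement,
|     }
|     posts = [{"ts": 2.0, "likes": 5.0},
|              {"ts": 1.0, "likes": 9.0}]
|     feed = rankers["chronological"](posts)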
| phendrenad2 wrote:
| I have no problem with this. Section 230 is almost 30 years
| old, written long before anyone could have imagined an ML
| algorithm curating user content.
|
| Section 230 absolutely should come with an asterisk that if you
| train an algorithm to do your dirty work you don't get to claim
| it wasn't your fault.
| game_the0ry wrote:
| Pavel gets arrested, Brazil threatens Elon, now this.
|
| I am not happy with how governments think they can dictate what
| internet users can and cannot see.
|
| With respect to TikTok, parents need to have some discipline
| and not give smartphones to their ten-year-olds. You might as
| well give them a crack pipe.
| CuriouslyC wrote:
| _shrug_ maybe our communication protocols should be distributed
| and not owned by billionaires. That would solve this problem
| neatly.
| ratorx wrote:
| I think a bigger issue in this case is the age. A 10-year-old
| should not have unsupervised access to TikTok, especially when
| the ToS states a minimum age of 13, regardless of the law's
| opinion on moderation.
|
| I think especially content for children should be much more
| severely restricted, as it is with other media.
|
| It's pretty well-known that age is easy to fake on the internet.
| I think that's something that needs tightening as well. I'm not
| sure what the best way to approach it is though. There's a
| parental education aspect, but I don't see how general content on
| the internet can be restricted without putting everything behind
| an ID-verified login screen or mandating parental filters, which
| seems quite unrealistic.
| skeptrune wrote:
| My interpretation is that this will push social media companies
| to take a less active role in what they recommend to their
| users. It should not be possible to intentionally curate
| content while simultaneously avoiding the burden of removing
| content which would cause direct harm justifying a lawsuit.
| Could not be more excited to see this.
| Smithalicious wrote:
| Hurting kids, hurting kids, hurting kids -- but, of course, there
| is zero chance any of this makes it to the top 30 causes of child
| mortality. Much to complain about with big tech, but children
| hanging themselves is just an outlier.
| trinsic2 wrote:
| When I see CEOs and CFOs going to prison for the actions of
| their corporations, then I'll believe laws actually make things
| better. Otherwise, any court decision that says some action is
| now illegal is just posturing.
___________________________________________________________________
(page generated 2024-08-29 23:00 UTC)