[HN Gopher] Abuse prevention is tradecraft
___________________________________________________________________
Abuse prevention is tradecraft
Author : ColinHayhurst
Score : 100 points
Date : 2022-10-18 08:47 UTC (14 hours ago)
(HTM) web link (alecmuffett.com)
(TXT) w3m dump (alecmuffett.com)
| jrm4 wrote:
| None of this tech nerdery means a whole lot without "skin in the
| game." First, ensure that we have real liability and/or
| regulation in place, similar to the FDA and such, and THEN begin
| to work on solutions. I'm certain answers will reveal themselves
| much quicker.
| wmf wrote:
| There's a lot of debate about liability right now in the
| context of section 230 and it's not obvious to me that more
| liability will create better outcomes. It could just as easily
| lead to either an unmoderated digital hellscape or all social
| media being shut down.
| BrainVirus wrote:
| That's all nice and well, but I no longer can tell the difference
| between Big Tech's "Abuse Prevention" and abuse. They need
| transparency not because it's going to make their job easier.
| They need transparency because literally millions of people hate
| their companies and don't have one iota of trust in their
| internal decision-making. Big Tech workers might think all those
| people are morons and can be ignored indefinitely. In reality, it
| simply doesn't work this way.
| nindalf wrote:
| You think you want transparency and that it'll make you trust
| them, but it won't. Even if you found out how those decisions
| are made, it won't make a difference.
|
| Here's something I wrote a couple days ago
| (https://news.ycombinator.com/item?id=33224347). It'll tell you
| how one component of Meta's content moderation works. Read it
| and tell me if it made you suddenly increase the level of trust
| you have in them.
|
| What will actually happen is that you'll cherry-pick the parts
| that confirm your biases. Happy to be proven wrong here.
| margalabargala wrote:
| Reading this article does, in fact, increase my trust that my
| Facebook account won't be randomly, irrevocably banned one
| day a la google.
|
| The trouble is, that's not the thing I primarily distrust
| about facebook; I don't trust that the power they have to
| shape people's opinions by deciding what to show them won't
| be abused to make people think things that are good for
| facebook but bad for society at large.
|
| So while that article does increase my trust in facebook in
| general, the magnitude of that increase is minuscule, because
| what it addresses is not the reason for my lack of trust.
|
| But you're right that transparency wouldn't solve that.
| Because it's only the first step. If facebook were to
| transparently say "we are promoting far right conspiracy
| theories because it makes us more money", and provide a
| database of exactly which things they were boosting, while
| perhaps I would "trust" them, I certainly wouldn't "like"
| them.
| diebeforei485 wrote:
| I think one of the main benefits of transparency is
| disincentivizing shady behavior in the first place. Sunlight
| makes the cockroaches go away, etc.
| tb_technical wrote:
| If it won't make a difference what's the harm in being
| transparent, then?
| Arainach wrote:
| User trust doesn't increase. The ability of bad actors to
| craft malicious content that circumvents detection
| skyrockets.
| beauHD wrote:
| > You think you want transparency
|
| It would be nice to know why the meme I posted got flagged
| for not meeting Facebook's vague 'Community Standards'. These
| platforms are enormous black boxes where their decision is
| final and there is no way to appeal, short of literally going
| into their building and asking to talk to the manager, which
| is beyond many people's reach and not worth the effort. Most
| people would rather let content get censored than go out of
| their way to appeal.
| im3w1l wrote:
| Transparency is the first step. The second step is forcing
| them to change their processes. After that finishes, that's
| when they will be trusted.
| giantg2 wrote:
| I think they could both be right. Sure, you don't want to give
| away the technical tells (TLS client version, etc.). But if
| something is being moderated for its actual content, then I
| think it could be beneficial to say why. While you don't want
| nefarious groups corrupting the public perception through
| misinformation, you also don't want platforms doing this by
| suppressing legitimate speech.
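|
| One way to square that: disclose the content-level rationale
| while keeping the detection signals private. A rough sketch of
| what that separation might look like (purely illustrative, not
| any platform's actual API):
|
|     # Toy moderation record: the content reason is shown to
|     # the user, the technical tells stay internal.
|     def user_facing_notice(decision: dict) -> str:
|         # only the content-level reason is ever surfaced
|         return "Your post was removed: " + decision["reason"]
|
|     decision = {
|         "action": "remove_post",
|         "reason": "targeted harassment of another user",
|         # kept private:
|         "signals": {"tls_client": "x", "burst_rate": 42},
|     }
|     print(user_facing_notice(decision))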
| mellosouls wrote:
| This is the essay he is responding to; for some reason he links
| to the tweet plugging it instead:
|
| https://doctorow.medium.com/como-is-infosec-307f87004563
| Animats wrote:
| Right. Read that first. Also the Santa Clara Principles that
| Doctorow mentions.[1]
|
| Now, a key point there is freedom from arbitrary action. The
| Santa Clara Principles have a "due process clause". They call
| for an appeal mechanism, although not external oversight. Plus
| statistics and transparency, so the level of moderation
| activity is publicly known, to keep the moderation system
| honest.
|
| That's really the important part. The moderation process is
| usually rather low-quality, because it's done either by dumb
| automated systems or people in outsourced call centers. So a
| correction mechanism is essential.
|
| It's the failure to correct such errors that gets companies
| mentioned on HN, in those "Google cancelled my account - for
| what?" threads.
|
| The "Abuse prevention is tradecraft" author has hold of the
| wrong end of the problem.
|
| [1] https://santaclaraprinciples.org/
| wmf wrote:
| Note that Facebook has the Oversight Board to handle appeals,
| and I assume such appeals must necessarily reveal the
| underlying decision-making process.
| https://www.oversightboard.com/
|
| Google is much worse since they have no appeals.
| Zak wrote:
| > _I assume such appeals must necessarily reveal the
| underlying decision making process._
|
| Probably not the parts they keep secret. The Oversight
| Board can make a decision about content based on the
| content itself and publicly-available context.
|
| The tells used by the automated system that initially flagged
| it don't need to be revealed, and the feedback from the
| Oversight Board probably isn't "make these detailed changes
| to the abuse detector algorithm" but a more generalized
| "don't remove this kind of stuff".
| jgmrequel wrote:
| I believe he linked to the tweet because the article is Medium
| members only right now instead of public.
| neonate wrote:
| https://archive.ph/VDwlk
| authpor wrote:
| uff, this is a complicated topic.
|
| > _I'd like to see better in the public debate._
|
| I'm having a complicated thought... the same points he makes
| about information asymmetry in relation to the preservation of
| value are at play in political (i.e. public) games.
|
| I didn't even know there were Santa Clara Principles; in a
| rough sense, this is maintaining some sort of value held by
| the people who have read them over those who don't even know
| such principles exist.
|
| I seem to be thinking that information asymmetry is statecraft,
| a "super-set" of the notion of abuse prevention (IA and security
| through obscurity) as tradecraft (because the state contains
| the market/trade).
|
| ...
| salawat wrote:
| YES.
|
| You are now stumbling into the dirty secret of how a large part
| of the world works, and the #1 priority for remediation if you
| have even a modicum of intention to make inroads at all into
| substantially changing things.
|
| Info Asymmetry is the basis of power/entrenchment.
| gxt wrote:
| Content hosts (YouTube, Facebook, Twitter, etc.) need to
| delegate moderation to third parties and allow users to choose
| which third party they want moderation from. They should only
| take action for everyone when they are legally required to.
| Zak wrote:
| If you want that, you can get most of the way there with
| ActivityPub (Mastodon/Pixelfed/Friendica/etc...) and your
| choice of service provider. The problem, of course, is that the
| big social platforms so dominate content discovery that things
| not shared there are unlikely to find a large audience.
| snarkerson wrote:
| I thought content moderation was censorship.
|
| Now post that XKCD comic.
| amadeuspagel wrote:
| This distinction between informational asymmetry and security
| through obscurity seems artificial. Doesn't security through
| obscurity rely on informational asymmetry by definition? What is
| the distinction here? It would be more honest to say that
| security through obscurity sometimes works, and sometimes doesn't
| have an alternative. And bot prevention is such a case. I'm not
| aware of any open source bot prevention system that works against
| determined attackers.
|
| Any real world security system relies to some extent on security
| through obscurity. No museum is going to publish their security
| system.
|
| It's only in the digital world that certain things, such as
| encryption, can be secure even under conditions where an
| adversary understands the entire system, so that security through
| obscurity in that context is frowned upon because it shouldn't be
| necessary.
|
| But this is a special case. Security is mostly a Red Queen's
| race, and "obscurity" or "informational asymmetry" is an
| advantage the defenders have.
| marcosdumay wrote:
| > This distinction between informational asymmetry and security
| through obscurity seems artificial.
|
| It is. What is meaningful is how much entropy you are hiding
| from your wanna-be attackers and what costs you pay for it.
|
| Experts call it "security through obscurity" when the
| information content is low and the price is high.
| 3pt14159 wrote:
| > I'm not aware of any open source bot prevention system that
| works against determined attackers.
|
| It works just fine if you're willing to move to an invite-only
| system and ban not just the bot, but the person who invited
| it. Possibly even another level up.
|
| The problem with this system is that it leads to _much_ less
| inflated numbers about active users, etc. So very few companies
| do it.
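|
| A toy sketch of the ban-one-level-up idea (names and structure
| here are mine, not any real platform's implementation):
|
|     # Each account records who invited it; banning a bot walks
|     # back up the invite chain.
|     invited_by = {}
|
|     def register(new_account, inviter):
|         invited_by[new_account] = inviter
|
|     def ban(account, levels_up=1):
|         banned = {account}
|         current = account
|         for _ in range(levels_up):
|             inviter = invited_by.get(current)
|             if inviter is None:
|                 break
|             banned.add(inviter)
|             current = inviter
|         return banned
|
|     register("spammer_a", inviter="careless_user")
|     register("bot123", inviter="spammer_a")
|     print(ban("bot123", levels_up=2))
|     # -> {'bot123', 'spammer_a', 'careless_user'}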
| carbotaniuman wrote:
| Such a system is still vulnerable (I'd daresay even more so)
| to account takeovers. And it might even have cascading
| effects, depending on how far your ban goes up the chain. To a
| first approximation, even if one user can only invite 2
| users, exponential growth will mean that bots may still pose
| a problem.
| 3pt14159 wrote:
| > vulnerable (I'd daresay even more so) to account
| takeovers.
|
| Not more so. Vulnerability is a function of defensive
| capacity, and there is no reduced defensive capacity here. If
| anything, knowing who invited whom enables web-of-trust based
| checks on suspicious logins, allowing for more stringent
| guards.
|
| > For a first approximation, even if one user can only
| invite 2 users, exponential growth will mean that bots may
| still pose a problem.
|
| In these types of systems users earn invites over time and
| as a function of positive engagement with other trusted
| members. Exponential growth is neutered in such systems
| because the lag imposed on bad actors, and the natural
| pruning of the tree for bots and other abusive accounts,
| lead to a large majority of high-quality trusted accounts.
| This means that content flagging is much more reliable.
|
| So, yes, bots are still a (minor) problem, but the system
| as a whole is much more robust, and unless there is a severe
| economic incentive, most bot operators understand that the
| lower-hanging fruit is elsewhere.
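|
| The growth-dampening can be made concrete with a toy invite
| budget (the thresholds and caps are invented for
| illustration):
|
|     # Invites accrue slowly with account age and endorsements
|     # from already-trusted members, with a hard cap.
|     def invites_available(age_days, trusted_endorsements, used):
|         earned = (age_days // 90) + (trusted_endorsements // 5)
|         return max(0, min(earned, 4) - used)
|
|     print(invites_available(200, 12, 1))  # -> 3
|     print(invites_available(10, 0, 0))    # -> 0, new accounts wait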
| pixl97 wrote:
| You misunderstand some of the vulnerabilities then. Bad
| actors on the systems are not the only weaknesses of the
| system.
|
| Other systems are potential weaknesses of your system....
| But what do I mean by that?
|
| If other systems have better ease of use while blocking
| 'enough' bad actors, it is likely your exceptionally
| defensive system will fail.
|
| "I got blocked from SYSTEM1 for no reason, hey everyone,
| let's go to SYSTEM2" is a real risk if one of the blocked
| people is high-visibility, and these kinds of accounts tend
| to lead the operator to make special hidden rules that fall
| under security by obscurity of the rules.
| nonrandomstring wrote:
| The concept that both Alec and Cory are dancing around but do
| not name directly is basically Kerckhoffs's principle [1].
|
| They're both right: Alec in saying that open detailed knowledge
| of moderation algorithms would harm the mission, and Cory for
| saying that a protocol/value level description of moderation
| gives insufficient assurance.
|
| That's because abuse detection isn't cryptography in which the
| mechanism can be neatly separated from a key.
|
| [1] https://en.wikipedia.org/wiki/Kerckhoffs%27s_principle
| jasode wrote:
| _> This distinction between informational asymmetry and
| security through obscurity seems artificial. Doesn't security
| through obscurity rely on informational asymmetry by
| definition?_
|
| It depends on your definition:
|
| - (1) "security through obscurity" is an unbiased term with no
| negative connotations which simply describes a situation
| without judgement. Parsing that phrase in this purely logical
| way means "information asymmetry" is a distinction without a
| difference. This neutral meaning is what your comment is
| highlighting.
|
| ... or ...
|
| - (2) "security through obscurity" is a _negative cultural
| meme_ and the recipients of that phrase are _people who are
| incompetent_ in understanding security concepts. E.g. they don
| 't realize that it's a flaw to hide the password key in a
| config file in a undocumented folder and hope the hackers don't
| find it. It's this _STO-the-negative-meme_ that the blog post
| is trying to distance itself from by emphasizing a alternative
| phrase _" informational asymmetry"_. Keeping the exact
| moderation rules a "secret" is IA-we-know-what-we're-doing --
| instead of -- STO-we're-idiots.
|
| The blog author is differentiating from (2) because that's
| the meaning Cory Doctorow used in sentences such as _"In
| information security practice, 'security through obscurity'
| is considered a fool's errand."_ :
| https://www.eff.org/deeplinks/2022/05/tracking-exposed-deman...
| Gordonjcp wrote:
| I and I suspect many like me realise that the truth lies
| somewhere in the middle.
|
| Do I use security-by-obscurity? Of course! I know that my
| server is going to get hammered with attempts to steal info,
| and I can see /path/to/webroot/.git/config getting requested
| several times an hour, so I don't put important stuff in
| places where it might be accessed. Even giving it a silly
| name won't help; it has to simply not be there at all. That
| kind of security-by-obscurity is asking for trouble.
|
| Sure as hell though, if I move ssh off of port 22 then the
| number of folk trying to guess passwords drops to *zero*,
| instantly.
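|
| For reference, that change is a couple of lines in
| sshd_config (the port number is arbitrary; key-only auth is
| what actually closes the guessing hole):
|
|     # /etc/ssh/sshd_config
|     Port 2222
|     PasswordAuthentication no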
| creeble wrote:
| > Sure as hell though, if I move ssh off of port 22 then the
| number of folk trying to guess passwords drops to _zero_,
| instantly.
|
| But not for very long. It doesn't really matter either way,
| though, except for log annoyances.
| brudgers wrote:
| _Informational Asymmetry (IA) is not the same as STO, and it's a
| fundamental of Information Security_
|
| That made reading the article worthwhile for me.
|
| I mean, what else is a secret but informational asymmetry?
| aidenn0 wrote:
| Except the examples given of IA are so broad as to eliminate
| the distinction between IA and STO. Knowing a value that is in
| a space larger than 2^64 possibilities is qualitatively
| different from knowing something in a space of only millions
| of possibilities. The real difference with como is that it's a
| cat-and-mouse game (or Red Queen's race, as another commenter
| said).
|
| It's more like being able to teleport all the keys in all the
| houses from under the doormat to under a rock in the garden
| once you notice thieves are checking the doormat. This would,
| in fact, appreciably increase house security on average, while
| still being STO.
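|
| The size gap is worth putting numbers on (the guess rate here
| is an assumption, picked only for scale):
|
|     # Exhausting "millions" vs 2**64 at 10^9 guesses/second
|     rate = 1e9
|     print(10_000_000 / rate)                 # ~0.01 seconds
|     print(2**64 / rate / (3600 * 24 * 365))  # ~585 years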
| aidenn0 wrote:
| Upon further reflection, the question is "how hard is it to
| find the needle in the haystack"
|
| If you use a 128 bit key, but use a non-time-constant compare
| somewhere, then it's pretty darn easy to find the needle.
|
| This is why the JPEG fingerprinting example from TFA doesn't
| qualify to be in the same category as a properly secured
| cryptographic key. They can notice that non-picture posts are
| not blocked, but picture posts are, which already greatly
| narrows it down. They could post a picture generated from the
| actual client, see it go through, and narrow it down even
| more. That's not even that hard for an attacker to figure
| out. It's much closer to "key under doormat" than "random
| key".
| faeriechangling wrote:
| It is obvious to anyone who has moderated that a bit of
| opaqueness goes a long way; the reasons that posts get
| filtered as spam are never publicly disclosed, for instance.
|
| However, I don't really know if secret courts, where posts are
| removed and people are banned based on secret laws, are really
| the way to go, regardless of their effectiveness or of
| Facebook's claims of benevolence.
| tptacek wrote:
| People get super confused about the differences between abuse
| prevention, information security, and cryptography.
|
| For instance, downthread, someone cited Kerckhoffs's principle,
| which is the general rule that cryptosystems should be secure if
| all information about them is available to attackers short of the
| key. That's a principle of cryptography design. It's not a rule
| of information security, or even a rule of cryptographic
| information security: there are cryptographically secure systems
| that gain security through the "obscurity" of their design.
|
| If you're designing a general-purpose cipher or cryptographic
| primitive, you are of course going to be bound by Kerckhoffs's
| principle (so much so that nobody who works in cryptography is
| ever going to use the term; it goes without saying, just like
| people don't talk about "Shannon entropy"). The principle
| produces stronger designs, all things being equal. But if you're
| designing a purpose-built bespoke cryptosystem (don't do this),
| _and_ all other things are equal (ie, the people doing the design
| and the verification work are of the same level of expertise as
| the people whose designs win eSTREAM or CAESAR or whatever), you
| might indeed bake in some obscurity to up the costs for
| attackers.
|
| The reason that happens is that unlike cryptography as, like, a
| scientific discipline, practical information security is about
| costs: it's about asymmetrically raising costs for attackers to
| some safety margin above the value of an attack. We forget about
| this because in most common information security settings,
| infosec has gotten sophisticated enough that we can trivially
| raise the costs of attacks beyond any reasonable margin. But
| that's not always the case! If you can't arbitrarily raise
| attacker costs at low/no expense to yourself, or if your
| attackers are incredibly well-resourced, then it starts to make
| sense to bake some of the costs of information security into your
| security model. It costs an attacker money to work out your
| countermeasures (or, in cryptography, your cryptosystem design).
| Your goal is to shift costs, and that's one of the levers you get
| to pull.
|
| Everybody --- I think maybe literally everybody --- that has done
| serious anti-abuse work after spending time doing other
| information security things has been smacked in the face by the
| way anti-abuse is entirely about costs and attacker/defender
| asymmetry. It is simply very different from practical Unix
| security. Anti-abuse teams have constraints that systems and
| software security people don't have, so it's more complicated to
| raise attacker costs arbitrarily, the way you could with, say, a
| PKI or a memory-safe runtime. Anti-abuse systems all tend to rely
| heavily on information asymmetry, coupled with the defender's
| ability to (1) monitor anomalies and (2) preemptively change
| things up to re-raise attacker costs after they've cut their way
| through whatever obscure signals you're using to detect them.
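|
| A toy rendering of that monitor-and-rotate loop (the signals
| and thresholds are invented purely for illustration):
|
|     # Score accounts against a private, swappable set of
|     # signals; when a signal stops firing because attackers
|     # adapted, retire it and promote a new one.
|     signals = {
|         "no_js": lambda a: not a.get("ran_js", False),
|         "burst_posting": lambda a: a.get("posts_per_min", 0) > 30,
|     }
|     retired = {}
|
|     def score(account) -> int:
|         checks = signals.values()
|         return sum(1 for check in checks if check(account))
|
|     def rotate(dead_signal, new_name, new_check):
|         # the defender's lever: change the tells, re-raise costs
|         retired[dead_signal] = signals.pop(dead_signal)
|         signals[new_name] = new_check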
|
| Somewhere, there's a really good Modern Cryptography mailing list
| post from... Mike Hamburg? I think? I could be wrong there ---
| about the JavaScript VM Google built for YouTube to detect and
| kill bot accounts. I'll try to track it down. It's probably a
| good example --- at a low level, in nitty-gritty technical
| systems engineering terms, the kind we tend to take seriously on
| HN --- of the dynamic here.
|
| I don't have any position on whether Meta should be more
| transparent or not about their anti-abuse work. I don't follow it
| that closely. But if Cory Doctorow is directly comparing anti-
| abuse to systems security and invoking canards about "security
| through obscurity", then the subtext of Alec Muffett's blog post
| is pretty obvious: he's saying Doctorow doesn't know what the
| hell he's talking about.
| heisenbit wrote:
| I associate infosec with code. I associate content moderation
| with humans. Where things get challenging is when code is doing
| content moderation. The executive privilege I extend to human
| content moderators, to deliberate in private and not explain
| their decisions, becomes a totally different thing when
| extended to code.
| beauHD wrote:
| AKA algorithmic black boxes that have the last word without
| human intervention. Welcome to Skynet
| packetslave wrote:
| Except that's not how content moderation works. Welcome to
| strawman.
| photochemsyn wrote:
| I think the problem is that if Facebook, Twitter and similar
| platforms were to publicly present an unambiguous definition of
| what 'abusive content' is, then it would become fairly clear that
| they're engaging in selective enforcement of that standard based
| on characteristics of the perpetrator such as: market power,
| governmental influence, number of followers, etc.
|
| For example, if the US State Department press releases start
| getting banned as misinformation, much as Russian Foreign
| ministry press releases might be, then I think this would result
| in a blowback detrimental to Facebook's financial interests due
| to increased governmental scrutiny. Same for other 'trusted
| sources' like the NYTimes, Washington Post, etc., who have the
| ability to retaliate.
|
| Now, one solution is just to lower the standard for what's
| considered 'abusive' and stop promoting one government's
| propaganda above another's, and focus on the most obvious and
| blatant examples of undesirable content (it's not that big of a
| list), but then, this could upset advertisers who don't want to
| be affiliated with such a broad spectrum of content, again
| hurting Facebook's bottom line.
|
| Once again, an opportunity arises to roll out my favorite quote
| from Conrad's Heart of Darkness:
|
| " _There had been a lot of such rot let loose in print and talk
| just about that time, and the excellent woman, living right in
| the rush of all that humbug, got carried off her feet. She talked
| about 'weaning those ignorant millions from their horrid ways,'
| till, upon my word, she made me quite uncomfortable. I ventured
| to hint that the Company was run for profit._ "
___________________________________________________________________
(page generated 2022-10-18 23:01 UTC)