Posts by det@hachyderm.io
 (DIR) Post #ATJgkpd0JPOIGy8xs0 by det@hachyderm.io
       2023-03-05T20:31:57Z
       
       0 likes, 0 repeats
       
       @alex @mimsical Oh FFS — yes, clearly the most egregious thing here is the entirely accurate headline.
       
 (DIR) Post #ATJhpWxXFXl5m4YFSC by det@hachyderm.io
       2023-03-05T20:44:00Z
       
       0 likes, 0 repeats
       
       @alex @mimsical If there is any fault with the article, it’s that it doesn’t focus on solutions — its description of the problem is entirely accurate, and it’s a problem that the public needs to be aware of. It also acknowledges that Meta has few legal options. Of all of the possible concerns here, “it makes tech companies look bad” is just not one that matters.
       
 (DIR) Post #ATJj4oZS9Vokh48gDI by det@hachyderm.io
       2023-03-05T20:57:57Z
       
       0 likes, 0 repeats
       
       @alex @mimsical I mean, you literally just detailed several ways they could turn off the spigot, which was spurred by the article bringing attention to the issue. I’m not seeing the downside here.
       
 (DIR) Post #AY5NavS7yQqfg6kx0a by det@hachyderm.io
       2023-07-24T14:12:58Z
       
       0 likes, 1 repeats
       
        In what is hopefully my last child safety report for a while: a report on how our previous reports on CSAM issues intersect with the Fediverse. https://cyber.fsi.stanford.edu/io/news/addressing-child-exploitation-federated-social-media
       
 (DIR) Post #AY5NayZKO7OHLJ9F9E by det@hachyderm.io
       2023-07-24T14:13:22Z
       
       0 likes, 0 repeats
       
        Similar to how we analyzed Twitter in our self-generated CSAM report, we did a brief analysis of the public timelines of prominent servers, processing media with PhotoDNA and SafeSearch. The results were legitimately jaw-dropping: our first PhotoDNA alerts started rolling in within minutes. The true scale of the problem is much larger, which we inferred by cross-referencing CSAM-related hashtags with SafeSearch level 5 nudity matches.
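
        For concreteness, a minimal sketch of this kind of scanning pass. The Mastodon public-timeline endpoint and the Cloud Vision SafeSearch call are real public APIs; the PhotoDNA URL, header, and auth scheme are assumptions about the PhotoDNA Cloud Service, and none of this is the study's actual tooling.

          import base64
          import requests

          INSTANCE = "https://example.social"      # hypothetical server to sample
          PDNA_KEY = "photodna-subscription-key"   # PhotoDNA Cloud Service key (assumed auth scheme)
          VISION_KEY = "google-cloud-api-key"      # Google Cloud Vision API key

          def public_timeline_media(limit=40):
              """Yield image attachment URLs from a server's public timeline."""
              r = requests.get(f"{INSTANCE}/api/v1/timelines/public",
                               params={"limit": limit, "only_media": "true"})
              r.raise_for_status()
              for status in r.json():
                  for att in status.get("media_attachments", []):
                      if att.get("type") == "image":
                          yield att["url"]

          def photodna_match(image_bytes):
              """Submit raw image bytes to a PhotoDNA match endpoint (URL and header assumed)."""
              r = requests.post(
                  "https://api.microsoftmoderator.com/photodna/v1.0/Match",  # assumed endpoint
                  headers={"Ocp-Apim-Subscription-Key": PDNA_KEY,
                           "Content-Type": "application/octet-stream"},
                  data=image_bytes)
              return r.json()

          def safesearch(image_bytes):
              """Run Google Cloud Vision SafeSearch detection on raw image bytes."""
              body = {"requests": [{
                  "image": {"content": base64.b64encode(image_bytes).decode()},
                  "features": [{"type": "SAFE_SEARCH_DETECTION"}]}]}
              r = requests.post("https://vision.googleapis.com/v1/images:annotate",
                                params={"key": VISION_KEY}, json=body)
              return r.json()["responses"][0]["safeSearchAnnotation"]

          for url in public_timeline_media():
              img = requests.get(url).content
              print(url, photodna_match(img), safesearch(img))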
       
 (DIR) Post #AY5Nb1U7Xw0YO1jcwq by det@hachyderm.io
       2023-07-24T14:13:34Z
       
       0 likes, 0 repeats
       
        Hits were primarily on a not-to-be-named Japanese instance, but a secondary test to see how far they propagated showed them being federated to other servers. A number of matches were also detected in posts originating from the big mainstream servers. Some of the posts that triggered matches were eventually removed, but the origin servers did not seem to consistently send "delete" events when that happened, which I hope doesn't mean the other servers just kept storing the material.
       
 (DIR) Post #AY5Nb4HT9Qfd3eq492 by det@hachyderm.io
       2023-07-24T14:13:42Z
       
       0 likes, 0 repeats
       
        The Japanese server problem is often thought to mean "lolicon" or CG-CSAM, but it appears that servers that allow computer-generated imagery of kids also attract users posting and trading "IRL" materials (their words, clear from post and match metadata), as well as grooming and the swapping of CSAM chat group identifiers. This is not altogether surprising, but it is another knock against the excuses of lolicon apologists.
       
 (DIR) Post #AY5Nb7FS7Nq8GGuzuy by det@hachyderm.io
       2023-07-24T14:13:54Z
       
       0 likes, 0 repeats
       
        Traditionally the solution here has been to defederate from freezepeach servers and...well, all of Japan. This is commonly framed as a feature and not a bug, but it's a blunt instrument, and it allows the damage to continue. With the right tooling, it might be possible to get the large Japanese servers to at least crack down on material that's illegal there (which photographic, non-generated CSAM is).
       
 (DIR) Post #AY5NbA5dYKlr4hLhXE by det@hachyderm.io
       2023-07-24T14:14:04Z
       
       0 likes, 0 repeats
       
        I have argued for a while that the Fediverse is way behind in this area; part of this is the lack of tooling and the reliance on user reports, but part is architectural. CSAM-scanning systems work in one of two ways: hosted services like PhotoDNA, or privately distributed hash databases. The former is a problem because every server hitting PhotoDNA at once for the same images doesn't scale. The latter is a problem because widely distributed hash databases allow for crafting evasions or collisions.
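
        As an illustration of the two deployment shapes (all names here are invented, not existing interfaces):

          from typing import Protocol

          class HostedScanner(Protocol):
              """Hosted model (e.g. PhotoDNA as a service): every receiving server
              re-submits the same image bytes to a central API, so one widely
              federated post triggers a remote call per server."""
              def match(self, image_bytes: bytes) -> bool: ...

          class LocalHashDatabase(Protocol):
              """Distributed model: each server holds a copy of the hash set and
              checks locally; wide distribution of the hashes makes it easier for
              adversaries to probe for evasions or collisions."""
              def match(self, perceptual_hash: str) -> bool: ...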
       
 (DIR) Post #AY5NbCq9KNAHbX7sJM by det@hachyderm.io
       2023-07-24T14:14:12Z
       
       0 likes, 0 repeats
       
        I think for this particular issue to be resolved, a couple of things need to happen. First, an ActivityPub extension for content-scanning attestation should be developed, allowing origin servers to perform scanning via a remote service and other servers to verify that it happened. Second, for the hash databases that are privately distributed (e.g. Take It Down, NCMEC's NCII database), someone should probably take on making them available as a hosted service.
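
        Purely as a sketch of what such an attestation could look like on an ActivityPub object: no such extension exists today, and the "scan" vocabulary and every property under it are invented for illustration.

          # Hypothetical only: the "scan" namespace and all of its properties are made up.
          note_with_attestation = {
              "@context": [
                  "https://www.w3.org/ns/activitystreams",
                  {"scan": "https://example.org/ns/content-scanning#"},   # made-up vocabulary
              ],
              "type": "Note",
              "id": "https://origin.example/users/alice/statuses/1",
              "attachment": [
                  {"type": "Document", "mediaType": "image/png",
                   "url": "https://origin.example/media/1.png"},
              ],
              "scan:attestation": {
                  "scan:service": "https://scanner.example/photodna",  # remote scanning service used
                  "scan:mediaDigest": "sha256:0f3a...",                # which bytes were scanned
                  "scan:scannedAt": "2023-07-24T14:00:00Z",
                  "scan:signature": "base64-signature-here",           # verifiable by receiving servers
              },
          }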
       
 (DIR) Post #AY5NbFlISs48fLsXfE by det@hachyderm.io
       2023-07-24T14:14:22Z
       
       0 likes, 0 repeats
       
       There are some other things that would be helpful in controlling proliferation: for example, easy UI for admins to do hashtag and keyword blocks, instead of relying on users to track a changing threat landscape. These could be distributed or subscription-based across servers, though how public those lists should be is up for debate. That subscription model could also be used for general "fediblock" lists.
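
        A minimal sketch of the subscription idea, assuming a hypothetical shared feed URL and a plain newline-delimited term format:

          import re
          import requests

          BLOCKLIST_URL = "https://lists.example/keyword-blocklist.txt"   # hypothetical shared feed

          def fetch_blocklist():
              """Fetch a newline-delimited term list and compile it into one pattern."""
              lines = requests.get(BLOCKLIST_URL).text.splitlines()
              terms = [l.strip() for l in lines if l.strip() and not l.startswith("#")]
              if not terms:
                  return None
              return re.compile("|".join(re.escape(t) for t in terms), re.IGNORECASE)

          def should_hold_for_review(status_text, pattern):
              """Flag a status for moderator review if it matches any subscribed term."""
              return bool(pattern and pattern.search(status_text))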
       
 (DIR) Post #AY5NbG5VFkEFg1Kg9w by det@hachyderm.io
       2023-07-24T14:14:37Z
       
       0 likes, 0 repeats
       
       A pluggable system for content scanning and classifiers would also be useful. Right now Mastodon has webhooks, which aren't really a great match IMO. Something closer to Pleroma's MRF could be a starting point. Lastly, there's room for better tools for moderation: more specific child safety flows, escalation capabilities, and trauma prevention tools (e.g. default blurring of images in all user reports of CSAM or gore).
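
        Pleroma's MRF policies are Elixir modules; the sketch below is only a Python analogue of that shape (a chain of filters that can pass, rewrite, or reject an incoming activity), with invented helper names.

          class RejectActivity(Exception):
              """Raised by a policy to drop an incoming activity."""

          def attachment_matches_hash_db(attachment) -> bool:
              """Stub: a real deployment would consult PhotoDNA or a local hash database."""
              return False

          def media_scan_policy(activity):
              """Reject activities whose attachments match a known-bad hash."""
              for att in activity.get("object", {}).get("attachment", []):
                  if attachment_matches_hash_db(att):
                      raise RejectActivity("hash match")
              return activity

          def keyword_policy(activity):
              """Example second stage: hold or rewrite based on keyword/hashtag rules."""
              return activity

          PIPELINE = [media_scan_policy, keyword_policy]

          def filter_incoming(activity):
              """Run an incoming activity through each policy; any policy may reject or rewrite it."""
              for policy in PIPELINE:
                  activity = policy(activity)
              return activity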
       
 (DIR) Post #AY5NbG5VFkEFg1Kg9x by det@hachyderm.io
       2023-07-24T14:14:30Z
       
       0 likes, 0 repeats
       
       Integrated reporting to NCMEC's CyberTipline would make life easier for admins and increase the likelihood that those reports get filed at all. Even without attestation, the big instances should all be using PhotoDNA; it's unclear if anyone on the Fediverse is even doing this, given that they'd have to manually hack it in. UI needs to be added to mainline Mastodon to allow for that—it's a very simple pair of REST calls that just need a couple auth tokens.
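
        A heavily hedged sketch of a CyberTipline submission helper: NCMEC does run a reporting web service for registered ESPs, but the base URL, endpoint names, and response fields below are assumptions, not its documented API.

          import xml.etree.ElementTree as ET
          import requests

          NCMEC_BASE = "https://report.cybertip.org/ispws"   # assumed base URL
          NCMEC_AUTH = ("esp-username", "esp-password")      # assumed HTTP Basic credentials

          def open_report(report_xml: bytes) -> str:
              """Open a CyberTipline report and return the ID NCMEC assigns (field name assumed)."""
              r = requests.post(f"{NCMEC_BASE}/submit", data=report_xml,
                                headers={"Content-Type": "text/xml"}, auth=NCMEC_AUTH)
              r.raise_for_status()
              return ET.fromstring(r.text).findtext("reportId")

          def finish_report(report_id: str) -> None:
              """Finalize the report after any file uploads (endpoint name assumed)."""
              r = requests.post(f"{NCMEC_BASE}/finish", data={"id": report_id}, auth=NCMEC_AUTH)
              r.raise_for_status()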
       
 (DIR) Post #AY5NbGBWtL37yi9UYK by det@hachyderm.io
       2023-07-24T14:14:44Z
       
       0 likes, 0 repeats
       
       By the way, now that we have big players like Meta entering the Fediverse, it would be great if they could sponsor some development on child safety tooling for Mastodon and other large ActivityPub implementations, as well as work with an outside organization to make a hosted hash database clearinghouse for the Fediverse. It would be quite cheap for them, and would make the ecosystem as a whole a lot nicer.  /thread
       
 (DIR) Post #AitThAIgA0WNtKdeEK by det@hachyderm.io
       2024-06-13T15:58:44Z
       
       0 likes, 1 repeats
       
        So apparently last September Meta sent C&Ds to every open-source project that implemented a Threads API client, basically precluding development of tools that provide any kind of research access. First time I've seen a platform take this tack, and it's pretty obnoxious, particularly given the lack of any dedicated research API. https://github.com/junhoyeo/threads-api