Post ArDXyE3TJFoukEW0G0 by 3bf0c63fcb93463407af97a5e5ee64fa883d107ef9e558472c4eb9aaaefa459d@mostr.pub
 (DIR) Post #ArDXy8JYeX7ewNzlUu by 97f848adcc4c6276685fe48426de5614887c8a51ada0468cec71fba938272911@mostr.pub
       2025-02-10T21:08:15.000Z
       
       0 likes, 0 repeats
       
       #asknostr Among the problems that Nostr faces, the child porn problem is a very, very, very bad problem. A VERY bad problem.
       
       What is the current thinking among developers about how to deal with this? Nobody likes censorship, but the only solution I can think of (SO FAR) is running an image identification service that labels dangerous stuff like this, and then broadcasts a list of (images, notes, users?) that score high on the "oh shit this is child porn" metric. Typically these systems just output a float between zero and 1, which is the score.
       
       Is anyone working on this currently? I have a good deal of experience running ML services like image identification at scale, so this could be something interesting to work on for the community. (I also have a lot of GPU power, and anyway, if you do it right, this actually doesn't take a ton of GPUs even for millions of images per day.)
       
       It would seem straightforward to subscribe to all the Nostr image uploaders, generate a score (say, 1.0 being "definite child porn" and 0.0 being "not child porn"), and then broadcast events of some kind to relays with this "opinion" about the image/media.
       
       Maybe someone from the major clients like @20986fb8 or #coracle or @532d830d or @3efdaebb has a suggestion on how this should be done. One way or another, this has to be done. 99.99% of normies, the first time they see child porn on #nostr -- if they see it once, they'll never come back.
       
       Is there an appropriate NIP to look at? @3bf0c63f ? @fa984bd7 ? @d61f3bc5 ?
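       
       A minimal sketch of the scoring pipeline described above, in Python: pull image URLs out of incoming notes, download each one, and hand it to a classifier that returns a score in [0, 1]. The score_image() stub and the naive URL extraction are placeholders for whatever model and parsing you would actually run; nothing here is an existing Nostr library API.
       
           import urllib.request
       
           def score_image(image_bytes: bytes) -> float:
               """Placeholder for the actual classifier; returns a score in [0, 1],
               where 1.0 means 'very likely CSAM'. Swap in your real model here."""
               raise NotImplementedError("plug in your image-classification model")
       
           def extract_image_urls(note: dict) -> list[str]:
               """Pull image URLs out of a note's content (naive: any whitespace-separated
               token ending in a common image extension)."""
               exts = (".jpg", ".jpeg", ".png", ".gif", ".webp")
               return [w for w in note.get("content", "").split() if w.lower().endswith(exts)]
       
           def score_note(note: dict) -> list[tuple[str, float]]:
               """Download each image referenced by the note and score it."""
               results = []
               for url in extract_image_urls(note):
                   with urllib.request.urlopen(url, timeout=10) as resp:
                       results.append((url, score_image(resp.read())))
               return results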
       
 (DIR) Post #ArDXy9lxERQnSkHxOi by 3bf0c63fcb93463407af97a5e5ee64fa883d107ef9e558472c4eb9aaaefa459d@mostr.pub
       2025-02-10T22:27:13.000Z
       
       0 likes, 0 repeats
       
       Relays have to become more whitelisted and less open, and clients have to implement the outbox model and stop relying on 2 or 3 big relays; then we can just stop worrying about this.
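       
       For context, the outbox model means clients read a user's notes from the relays that user declares as their write relays (NIP-65, kind 10002 relay-list events) rather than from a couple of big shared relays. A rough sketch of that relay-selection step, assuming the relay-list event has already been fetched:
       
           def outbox_relays(relay_list_event: dict) -> list[str]:
               """Given a user's NIP-65 relay list (kind 10002), return the relays
               they write to -- the places a client should read their notes from."""
               relays = []
               for tag in relay_list_event.get("tags", []):
                   if tag and tag[0] == "r":
                       marker = tag[2] if len(tag) > 2 else None
                       # An "r" tag with no marker counts as both read and write.
                       if marker in (None, "write"):
                           relays.append(tag[1])
               return relays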
       
 (DIR) Post #ArDXyB4QPFnfUJwDpI by 97f848adcc4c6276685fe48426de5614887c8a51ada0468cec71fba938272911@mostr.pub
       2025-02-10T22:41:34.000Z
       
       0 likes, 0 repeats
       
       Not sure if you are serious or just trolling the idea. But -- so each individual relay implements its own scoring system? Seems like a ton of duplicated effort.
       
 (DIR) Post #ArDXyC0Yv6IwOd8eRc by 3bf0c63fcb93463407af97a5e5ee64fa883d107ef9e558472c4eb9aaaefa459d@mostr.pub
       2025-02-10T22:58:30.000Z
       
       0 likes, 0 repeats
       
       I am not trolling. I do think it would be good to have a system for identifying harmful stuff. It would be a nice workaround that would work today, and I would definitely adopt it at https://njump.me/ because we keep getting reports from Cloudflare. I tried some things but they didn't work very well, so if you know how to do it I'm interested.
       
       However, the long-term solution is paid relays, community relays, relays that only give access to friends of friends of friends, that kind of stuff.
       
 (DIR) Post #ArDXyCiWHfSyaxhkxc by 97f848adcc4c6276685fe48426de5614887c8a51ada0468cec71fba938272911@mostr.pub
       2025-02-10T23:12:23.000Z
       
       0 likes, 0 repeats
       
       OK, so thinking about it more, in light of what @0461fcbe says...
       
       1) Obviously the spec to use would be the LABEL spec, NIP-32 -- not sure why I didn't figure that out to begin with: https://github.com/nostr-protocol/nips/blob/master/32.md
       
       2) My original idea of "publicly publish a score for each image" is completely impossible and a terrible idea, because of course the bad guys could just use the service in the reverse of the way it's intended!
       
       Anyway, one half of the problem -- running a service that produces scores -- is completely something I could do: basically process millions of images and spit out scores for them. But the other half -- how to let clients or relays use these scores WITHOUT also handing them a "map to all the bad stuff" at the same time -- I'm not smart enough currently to come up with a solution. It might involve something fancy with cryptography or "zero-knowledge proofs" or things that are generally out of my intellectual league.
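       
       For reference, a NIP-32 label is a kind 1985 event: the "L" and "l" tags carry a namespace and a label value, and "e"/"p"/"r" tags point at the thing being labeled. A sketch of what a per-image "opinion" event could look like; the org.example.moderation namespace, the csam-suspected label value, and putting the raw score in the content are illustrative choices, not something the NIP prescribes:
       
           import json
           import time
       
           def build_label_event(image_url: str, score: float, pubkey: str) -> dict:
               """Unsigned NIP-32 (kind 1985) label event flagging an image URL.
               The namespace and the score-in-content convention are hypothetical."""
               namespace = "org.example.moderation"
               return {
                   "kind": 1985,
                   "pubkey": pubkey,
                   "created_at": int(time.time()),
                   "tags": [
                       ["L", namespace],
                       ["l", "csam-suspected", namespace],
                       ["r", image_url],  # the labeled target
                   ],
                   "content": json.dumps({"score": score}),
               }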
       
 (DIR) Post #ArDXyDNznSdwfb6sbo by 0461fcbecc4c3374439932d6b8f11269ccdb7cc973ad7a50ae362db135a474dd@mostr.pub
       2025-02-10T23:18:06.000Z
       
       0 likes, 0 repeats
       
       You just don't. If somebody uploads illegal content to your server, you delete it and report it to the NCMEC. You don't try to do anything else.
       
       I think the NCMEC has an API. The best thing to do would be to develop an integration for it so you can delete and report it at the same time.
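       
       A rough sketch of "delete and report at the same time" for a media server. NCMEC's CyberTipline reporting API is real but requires registration, and its endpoints and payload format are not reproduced here, so submit_cybertipline_report is only a placeholder for that integration; the ordering (report first, then delete) is the point:
       
           import hashlib
           from pathlib import Path
       
           def submit_cybertipline_report(sha256_hex: str, uploader_pubkey: str) -> None:
               """Placeholder: call NCMEC's CyberTipline reporting API here with
               whatever evidence your registration requires. Endpoint and schema
               come from NCMEC's documentation, not from this sketch."""
               raise NotImplementedError
       
           def delete_and_report(file_path: Path, uploader_pubkey: str) -> None:
               """Hash the file for the report, file the report, then remove the
               file so it is no longer served."""
               sha256_hex = hashlib.sha256(file_path.read_bytes()).hexdigest()
               submit_cybertipline_report(sha256_hex, uploader_pubkey)
               file_path.unlink()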
       
 (DIR) Post #ArDXyE3TJFoukEW0G0 by 3bf0c63fcb93463407af97a5e5ee64fa883d107ef9e558472c4eb9aaaefa459d@mostr.pub
       2025-02-10T23:20:26.000Z
       
       0 likes, 0 repeats
       
       This is a complete non-solution.
       
 (DIR) Post #ArDXyEsAGmMzHSEUGu by 0461fcbecc4c3374439932d6b8f11269ccdb7cc973ad7a50ae362db135a474dd@mostr.pub
       2025-02-10T23:22:30.000Z
       
       1 likes, 0 repeats
       
       It's a solution to the liability problem. The solution to the ethical problem is to block all porn and try to drive those people away.