[HN Gopher] Is macOS Look Up Destined for CSAM?
___________________________________________________________________
Is macOS Look Up Destined for CSAM?
Author : ingve
Score : 102 points
Date : 2022-03-20 16:34 UTC (6 hours ago)
(HTM) web link (eclecticlight.co)
(TXT) w3m dump (eclecticlight.co)
| WinterMount223 wrote:
| What's with the obsession with CP? I agree that it's morally
| wrong and should be penalized but why is it perceived as the
| Ultimate Crime? Why is this tool supposedly for detecting CP only
| and not stolen bikes for sale, bullying through SMS, etc., which
| are also criminal offenses?
| Mikeb85 wrote:
| > I agree that it's morally wrong and should be penalized but
| why is it perceived as the Ultimate Crime?
|
| Because children are trafficked and abused to create it SMH...
| Asooka wrote:
| Child abuse is a real problem and should have considerable
| resources dedicated to combating it, but focussing on banning
| images depicting child abuse does nothing to prevent a child
| from being abused. We're close to a situation where it's
| safer to abuse children than to try and find images depicting
| child abuse. I'm pretty sure that focussing on preventing
| abuse and supporting children to report abuse will do a lot
| more than sweeping the evidence it ever happened under the
| rug. Of course that would also require you to go after some
| pretty high ranking people, so it's not very good for one's
| career.
| Mikeb85 wrote:
| Police arrest child abusers and traffickers all the time...
| But as long as there's demand for CP, people will create it
| (like many illicit activities), hence the attempt to reduce
| demand by attaching consequences to possessing it.
|
| It's funny, comments on this site regularly demonize all
| sorts of non-consensual images (revenge porn, for example),
| and rightly so, but CP is the ultimate non-consensual image
| - a child doesn't even understand sexuality, isn't sexually
| mature, never mind able to consent... And yet there are
| comments here downplaying it, borderline condoning it...
| OSWJimlo wrote:
| notRobot wrote:
| > why is it perceived as the Ultimate Crime?
|
| Because you can apparently justify any move, no matter how
| authoritarian, by saying "think of the kids"!
|
| It's politicians and governments exploiting psychology to get
| away with problematic crap.
|
| It's not the ultimate crime, it's the ultimate justification.
| userbinator wrote:
| Just like how "security" is often used in the same manner,
| but I agree that CP is a much more persuasive and emotional
| argument.
| smeeth wrote:
| Two thoughts.
|
| 1) Good, simple politics. Protecting kids from predators is
| about as cut and dry an issue as you will ever find. Harry
| Potter vs Voldemort might be a more complicated moral issue.
|
| 2) I suspect that a few very well connected activists in the
| Bay Area have made it their life's work to get CSAM tools on
| sites.
|
| Ashton Kutcher and his organization Thorn [0] are probably the
| best example of this. Thorn is an interesting example because
| it has been VERY good at making its case in the non-tech media
| e.g. [1], [2], [3] and in front of congress [4]. It should be
| said, Thorn makes technology that helps track down child
| exploitation and has had some great results, which deserve
| plaudits.
|
| [0] https://en.wikipedia.org/wiki/Thorn_(organization)
|
| [1]
| https://www.npr.org/sections/goatsandsoda/2019/04/15/7126530...
|
| [2] https://www.nytimes.com/2021/09/02/opinion/sway-kara-
| swisher...
|
| [3] https://www.washingtonpost.com/news/reliable-
| source/wp/2017/...
|
| [4] https://www.youtube.com/watch?v=HsgAq72bAoU
| novok wrote:
| Because it's a good political tool that leverages parental and
| other human instincts to protect children. It puts most people
| into such a thought-terminating blind panic that they shut
| down; you can then use it as cover for your true intentions,
| give token enforcement funding to the stated cause, and direct
| the majority of enforcement funding toward your real goal of
| politically controlling your enemies. It's as old as politics
| itself.
|
| It's been known for a while that this is a political technique.
| It is one of the Four Horsemen of the Infocalypse [0], after
| all, a term dating back to 1988. See also the "How would you
| like this wrapped?" comic by John Jonik from 2000 [1]. This is
| the next round of the crypto wars.
|
| [0]
| https://en.wikipedia.org/wiki/Four_Horsemen_of_the_Infocalyp...
|
| [1]
| https://www.reddit.com/r/PropagandaPosters/comments/5re9s1/h...
| oh_sigh wrote:
| CP is fairly easy to recognize if you see it, I'd imagine. I'm
| sure there are some instances where an adult just looks very
| young, but there is probably a lot of CP out there with no
| potential for that.
|
| How exactly does one recognize a bike as stolen from a
| photograph?
| sircastor wrote:
| This is a misunderstanding. The goal here is not to identify
| new child pornography with ML-trained classifiers; it is to
| identify known child pornography by matching a hashed value.
| The hash itself is generated by an ML model.
| rzzzt wrote:
| You recognize bikes, and compare them to a database of known
| stolen bikes.
| tokumei wrote:
| CSAM detection only works by checking a hash. Photos of a
| stolen bike, especially ones being sold online, would
| probably be unique images taken by the thief.
| unsupp0rted wrote:
| Well it is a special category of human depravity. In prison the
| other prisoners don't go out of their way to beat and shank the
| bike thieves and cyber bullies, or even the run-of-the-mill
| murderers.
| [deleted]
| KennyBlanken wrote:
| > What's with the obsession with CP?
|
| It's been the go-to outrage generator for federal law
| enforcement and spy agencies to attack strong device and
| end-to-end encryption by means of legislation that requires
| backdoors or outlaws encryption that is too strong.
|
| To see why, scroll down to see the guy advocating for the death
| penalty for people involved in child porn production.
|
| If only law enforcement showed equal vigor for addressing child
| abuse in religion, whether it's raping altar boys or using the
| mouth to clean blood off a baby that has been circumcised
| (often causing syphilis outbreaks in the process.)
|
| It's almost like it's not actually about fighting child abuse,
| but about being able to snoop in your devices and
| communications.
| [deleted]
| KarlKemp wrote:
| No, this vaguely related technology has nothing to do with
| whatever you are associating with it. If Apple wants to
| surreptitiously spy on your porn collection, they will do so, and
| won't need cover.
| mdoms wrote:
| This article could help by defining what Visual Look Up is... I
| have never heard of it.
| Angostura wrote:
| Indeed - here is a nice little article that describes one
| aspect of it, posted the other day:
| https://eclecticlight.co/2022/03/16/how-good-is-montereys-vi...
| symlinkk wrote:
| Rambling post, hard to follow or understand
| nyanpasu64 wrote:
| Does macOS 12.3 and beyond phone home with details of images and
| documents you open in Preview?
| dev_tty01 wrote:
| No. Why would you think that? It goes against everything they
| have stated and the design of the software. They are heavily
| focused on keeping all of that on-device.
| nkozyra wrote:
| Well other than the proposal specifically outlined in this
| post, that is. (Pardon if I missed the sarcasm)
| nojito wrote:
| Everything is done on device. Which is different than
| others who choose to do scanning on their servers.
| tylersmith wrote:
| And if the filter thinks the image is positive for CSAM
| it sends it to Apple, correct? Otherwise there's
| literally no point.
| JimDabell wrote:
| > And if the filter thinks the image is positive for CSAM
| it sends it to Apple, correct?
|
| No. The logic was never "if there's a match the image is
| uploaded". The device never knows if there's a match.
| Under the process Apple described, an extra packet of
| data is attached to every iCloud upload. If there are
| enough matches, Apple can decode those packets to get a
| low-res thumbnail, which they then check against a second
| perceptual hash. The process doesn't work on arbitrary
| images on your device; it's specifically designed for
| iCloud uploads.
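|
| For intuition only, the "nothing is readable until there are
| enough matches" property can be modelled with plain Shamir
| secret sharing. This is a toy sketch in Python, not Apple's
| actual protocol (which combined private set intersection with
| threshold secret sharing and never exposes a match bit to the
| device); the share counts below are made up, and the threshold
| of 30 is only the figure Apple reportedly cited:
|
|     import secrets
|
|     P = 2**127 - 1   # prime modulus for the toy field
|     THRESHOLD = 30   # reportedly in the ballpark of Apple's figure
|
|     def make_shares(secret, n, threshold):
|         # Any `threshold` of the n shares reconstruct the secret;
|         # fewer reveal nothing about it.
|         coeffs = [secret] + [secrets.randbelow(P)
|                              for _ in range(threshold - 1)]
|         def poly(x):
|             acc = 0
|             for c in reversed(coeffs):
|                 acc = (acc * x + c) % P
|             return acc
|         return [(x, poly(x)) for x in range(1, n + 1)]
|
|     def reconstruct(shares):
|         # Lagrange interpolation at x = 0 recovers the secret.
|         secret = 0
|         for xi, yi in shares:
|             num = den = 1
|             for xj, _ in shares:
|                 if xj != xi:
|                     num = num * (-xj) % P
|                     den = den * (xi - xj) % P
|             secret = (secret + yi * num * pow(den, -1, P)) % P
|         return secret
|
|     # The per-account decryption key is split across uploads that
|     # matched known hashes; with fewer than THRESHOLD matching
|     # vouchers, the server recovers only garbage.
|     key = secrets.randbelow(P)
|     shares = make_shares(key, n=100, threshold=THRESHOLD)
|     assert reconstruct(shares[:THRESHOLD]) == key
|     assert reconstruct(shares[:THRESHOLD - 1]) != key  # w.h.p.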
| agildehaus wrote:
| Why can't they just scan iCloud uploads on-server then?
| Why does anything need to be done on-device?
| xanaxagoras wrote:
| The theory is they were going to use this tech to finally
| enable E2EE iCloud Photos and reactionary privacy
| absolutist psychopaths who didn't understand how it works
| -- such as myself -- made a big ruckus and spoiled
| everything.
|
| Also at the point that the image hash matches, Apple
| thinks it is CSAM (and it probably is). Doing it locally
| lets them avoid storing it, which they definitely do not
| want to do.
| JimDabell wrote:
| This scheme is obviously designed to work when Apple has
| no direct access to the photos. Because of this, there is
| a lot of speculation that Apple plans on making iCloud
| photos encrypted. This scheme would continue to work in
| that situation, whereas the on-server approach would
| fail. However, that's just speculation; Apple haven't
| announced anything.
| modeless wrote:
| That speculation is pretty out there considering that
| there have been no barriers to them doing true end-to-end
| encryption of iMessage backups, but they have chosen not
| to for many years despite marketing iMessage as "end-to-
| end" encrypted. Reportedly at the direct request of the
| FBI. https://www.reuters.com/article/us-apple-fbi-icloud-
| exclusiv...
| Spooky23 wrote:
| They almost certainly do. Every major provider does, and
| even some enterprises.
|
| The whole point of the CSAM stuff was that it would allow
| for end-to-end encryption while not turning Apple's
| ecosystem into the preferred tool of child pornographers.
|
| Apple communicated the feature poorly, then the EFF put
| out a deliberately misleading hit piece that conflated
| parental controls with CSAM detection and started an
| online freak-out. The "privacy activists" won, and your
| data is sitting on Apple servers today, with Apple-managed
| encryption keys outside of your control.
| sebzim4500 wrote:
| I think that's a pretty bad summary of the concerns that
| were raised. Sure, they are scanning your files on iCloud,
| but there is a 100% reliable way to prevent that: just
| don't upload them.
|
| In their proposal they would scan your files on-device,
| which is fundamentally different. Initially they would
| not run the scanning when iCloud upload was disabled, but
| how long would that last?
| xanaxagoras wrote:
| So in reply to your parent, the answer is yes. It sends a
| low resolution copy of the image to Apple with extra
| steps.
| JimDabell wrote:
| No. Look at the context of this discussion. Somebody
| starts off by asking:
|
| > Does macOS 12.3 and beyond phone home with details of
| images and documents you open in Preview?
|
| There is no similar context with Apple's previous CSAM
| scheme. The device is unable to check for a match and
| then upload the photo if there's a match. The scheme only
| works because it operates on iCloud uploads.
| xanaxagoras wrote:
| That's a fair point. In my mind I was immediately
| transported to the CSAM/NeuralHash debate from last year.
| I will slow down.
| nyanpasu64 wrote:
| I tested OCR with Wi-Fi disabled and it still functions. Is
| Visual Look Up (and Live Text) purely offline, phoning home
| with CSAM reports, or an offline preview of future
| technology which phones home with CSAM reports?
| devwastaken wrote:
| By definition, the information must phone home somehow. How
| else is Apple's spy department going to know if there's a
| "visual match"? Local scanning doesn't mean anything; it's an
| excuse to implement a feature that will be changed later
| anyway. That's how the slippery slope of precedent works.
| xanaxagoras wrote:
| I don't think it's far-fetched at all that they'd do that
| without mentioning it. It certainly phones home _when_ you
| open Preview [1]; what's another little ping when you're
| looking at CSAM or whatever else they've been instructed to
| look for? The recent debacle made it clear enough to me that
| their privacy reputation is little more than carefully
| curated marketing, and they're likely under tremendous
| pressure from lawless alphabet agencies to ramp up
| surveillance. I wouldn't put anything past a $3 trillion
| company; that's quite an empire to protect.
|
| [1] https://mspoweruser.com/macos-big-sur-has-its-own-
| telemetry-...
| aunty_helen wrote:
| There is some reprieve in using a good firewall, or even an
| off-device firewall.
|
| However, with many of these services, if you try to kill
| them, they come back. If you delete them, sometimes it will
| literally break your OS.
|
| For example, if you remove the OCSP daemon, you can't start
| any program on your computer.
| hedgehog wrote:
| As far as I know it's purely local search. I'm guessing this is
| part of the development arc towards AR applications but in the
| near term solves the problem of being able to search Photos for
| "birthday party" and hopefully get something sensible out.
| noasaservice wrote:
| I'm fine with adult rapists of (prepubescent) children being
| sentenced to death. If you were an accomplice to that, as a
| videographer or similar, I'm also OK with death.
|
| (There's a really weird social area from 13-18, with the
| weirdness and illegality going away at 18. Stuff in this
| realm, especially between two people of similar ages, gets
| very stupid. This is where you can get two 16-year-olds
| sexting each other and being charged with producing CSAM of
| their own bodies. I'm avoiding that in this post.)
|
| But what does this CSAM scanner do? It only catches already-
| produced pictures of CSAM. In other words, it catches evidence
| of said crime. In no other area of criminal law is the
| evidence itself outlawed. And yes, given the statutory nature
| of these images (possession is criminal, even if you didn't
| put them there), I'm not at all comfortable charging people
| for simple possession.
|
| Even if people have urges toward age-inappropriate pornography
| and "CSAM", as long as they're not physically harming anyone,
| I'd much rather they indulge those urges in their own bedroom,
| alone.
|
| Nor do I buy into the gateway theory that CSAM leads to
| production of CSAM by raping children. This smacks to me of the
| DARE drug propaganda and gateway theory (which is complete
| bullshit).
|
| And we already have harder situations that have been deemed
| legal: SCOTUS has held that Japanese manga hentai featuring
| schoolgirls (obviously under 18, sometimes by quite a lot) is
| completely, 100% legal. Again, SCOTUS focused on the First
| Amendment and the fact that no children were harmed in its
| production.
|
| And that leads to what just happened a few days ago. Given the
| (badly done) Zelenskyy deepfake, when can we expect 18-year-old
| women with very petite bodies to be deepfaked into 10-13 year
| olds? In those cases, we could attest that everyone in the
| production is of legal age and provided ongoing consent. Will
| this fall under the same rule as hentai?
|
| Tl;dr: I'm for decriminalizing simple possession of CSAM. I'm
| for the death penalty for child rapists and child sexual
| assault. But this, I can see going very, very badly, by easily
| overscoping CSAM to the "cause of the day".
| KennyBlanken wrote:
| > I'm fine with adult rapists of (prepubescent) children being
| sentenced to death. If you were an accomplice to that, as a
| videographer or similar, I'm also OK with death.
|
| The reliability of the criminal justice system, particularly in
| the US, is abhorrent. There's a long history of false
| convictions, particularly affecting people in minority
| outgroups and the mentally disabled; we've executed adults who
| were so mentally incapacitated they were below a 10-year-old in
| terms of mental capacity. The death penalty is highly immoral.
|
| There are tens of thousands of black men still in jail because
| they were basically the most convenient way for a police
| department to "solve" the murder or rape of a white woman and
| help their case clearance rates. Police, prosecutors, and
| forensic "experts" were complicit. "Hair analysis" is just one
| example of the pseudo-science nonsense.
|
| In Boston, a forensic chemist falsified thousands of test
| results, and somehow this escaped notice despite her having a
| productivity level far above that of virtually any other
| forensic chemist.
|
| Or, if you're not exceedingly gullible: her supervisors
| obviously knew what she was doing and didn't care, because she
| made their lab look great and prosecutors got lots of open-and-
| shut cases.
| selimthegrim wrote:
| Are you a fan of giving said rapists the incentive to murder
| too, making prosecuting them that much harder?
| CharlesW wrote:
| > _But what does this CSAM scanner do? It only catches already-
| produced pictures of CSAM._
|
| You say this as if it's bad to identify people who are
| distributing or collecting known child pornography. Are you
| recommending that companies implement technologies which go
| beyond this by not depending on a corpus of existing materials?
| buildbuildbuild wrote:
| How does someone truly test how this feature is being used
| without possessing illegal content? This is a nearly-impossible
| area to research. Frightening.
|
| (edit: I'm of course referring to possessing anti-Putin memes)
| (sarcasm)
| [deleted]
| chockchocschoir wrote:
| You don't have to test it against anti-Putin memes to see if it
| would work for anti-Putin memes. The algorithm would be
| something like:
|
| 1. Have image
|
| 2. Get hash of image
|
| 3. Get another hash from another similar image
|
| 4. Compare hashes
|
| The images themselves can be of anything, as long as you can
| see that the comparison works as expected; they don't have to
| contain anti-Putin memes. (A minimal sketch of such a
| comparison follows below.)
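|
| Using the third-party Pillow and imagehash packages (the file
| names here are just placeholders):
|
|     from PIL import Image   # pip install pillow imagehash
|     import imagehash
|
|     # Perceptual hashes: visually similar images produce hashes
|     # that differ in only a few bits.
|     h1 = imagehash.phash(Image.open("meme_original.png"))
|     h2 = imagehash.phash(Image.open("meme_recompressed.png"))
|
|     # Subtracting two ImageHash values gives their Hamming distance.
|     distance = h1 - h2
|
|     # An arbitrary cutoff for this sketch; real systems tune this.
|     print("match" if distance <= 5 else "no match")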
| willcipriano wrote:
| Come up with something like:
|
| X51!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-CHILD-ABUSE-CONTENT-
| TEST-FILE!$H+H*
|
| https://en.m.wikipedia.org/wiki/EICAR_test_file
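|
| A scanner could then flag anything whose digest matches a
| published test digest, analogous to how antivirus engines treat
| the EICAR string. A sketch; the string and digest set here are
| of course made up, and real CSAM scanners use perceptual rather
| than cryptographic hashes:
|
|     import hashlib
|
|     TEST_STRING = (rb"X51!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-"
|                    rb"CHILD-ABUSE-CONTENT-TEST-FILE!$H+H*")
|
|     # Digests of agreed-upon "known test material".
|     KNOWN_TEST_DIGESTS = {hashlib.sha256(TEST_STRING).hexdigest()}
|
|     def is_test_file(data: bytes) -> bool:
|         return hashlib.sha256(data).hexdigest() in KNOWN_TEST_DIGESTS
|
|     print(is_test_file(TEST_STRING))          # True
|     print(is_test_file(b"holiday photo"))     # False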
| buildbuildbuild wrote:
| I thought about this, but you're still stuck trusting the
| implementation unless you test with actual illegal data,
| which is often criminal and immoral to obtain.
|
| Example: How does a researcher test whether algorithmically-
| classified illegal imagery stored on user devices is being
| scanned and reported home to Apple's servers, and what those
| bounds of AI-classified criminality are? (presumably with
| respect to what is illegal in the user's jurisdiction)
|
| Testing by using a test phrase, like in a spam context, is
| inadequate here as a scanning system can trivially be
| architected to pass those publicly-known tests, while still
| overreaching into people's personal files and likely
| miscategorizing content and intent.
|
| If a user connects via a VPN to Russia for whatever reasons,
| does their personal content start getting reported to that
| country's law enforcement by their notion of what is illegal?
|
| Parents often have all sorts of sensitive photos around which
| are not held with exploitative intent. "Computer says
| arrest."
| Brian_K_White wrote:
| I always imagine aliens hearing about something like this
| and being stunned.
|
| "How can data be illegal?"
|
| "There are bad things, but how can you decide what is bad
| and show how it's bad without examining and discussing it?"
|
| You can only go so far merely alluding to things. Somewhere
| the rubber has to meet the road and you have to have
| concrete data and examples of anything you need to study or
| make any sort of tools or policy about.
|
| It's like parents not talking to kids about sex. You can
| avoid it most of the time out of decorum, but if you take
| that to its extreme you have just made your child both
| helpless and dangerous through ignorance.
|
| Somewhere along the way, you have to explicitly wallow
| directly in the mess of stuff you seek to avoid most of the
| time. That "seek to avoid" can only ever be "most of the
| time". It's insane and counter-productive to treat that
| "most of the time" as an incomplete job and try to improve
| it to 100%.
|
| I guess in this case there will eventually be some sort of
| approved certified group. A child porn researcher or
| investigator license. Cool. Cops with special powers never
| abuse them, and restricting study to a select few has always
| yielded the best results for any subject, and a dozen
| approved good guys can easily stay ahead of the world of
| bad guys.
| _fat_santa wrote:
| > Example: How does a researcher test whether
| algorithmically-classified illegal imagery stored on user
| devices is being scanned and reported home to Apple's
| servers, and what those bounds of AI-classified criminality
| are? (presumably with respect to what is illegal in the
| user's jurisdiction)
|
| I'm not an expert in AI, so this might be totally off base,
| but I feel like you could use an "intersection" of sorts for
| this type of detection. You detect children and pornography
| separately: the children portion trains the model for age
| recognition, and the porn portion trains it to recognize
| sexual acts. Slap those two together and you've got CSAM
| detection, roughly along the lines sketched below.
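|
| In rough Python terms (the two model calls below are imaginary
| stand-ins, not real APIs):
|
|     def looks_like_csam(image, minor_model, nsfw_model,
|                         minor_cutoff=0.9, nsfw_cutoff=0.9):
|         # Both hypothetical classifiers return a probability in
|         # [0, 1]; flag only when both are confident, i.e. the
|         # "intersection" of the two detectors.
|         p_minor = minor_model(image)  # subject appears underage
|         p_nsfw = nsfw_model(image)    # image is sexually explicit
|         return p_minor >= minor_cutoff and p_nsfw >= nsfw_cutoff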
| jdavis703 wrote:
| Possessing CSAM for abuse-prevention purposes is not
| immoral, regardless of what the law says. Saying otherwise
| is a slippery slope to saying judges, jurors and evidence
| custodians are also immoral. In fact if possession is so
| immoral, CSAM trials shouldn't even have visual evidence.
| We should just trust the prosecutor.
| Hello71 wrote:
| while i vaguely agree with the idea behind this comment,
| this explanation is particularly poor. by that logic, it
| should be allowed to kill people to prevent murder. it
| is, in fact, allowed to kill people to prevent murder,
| but only in specific legally-prescribed circumstances.
| typically, only specific people are allowed to kill, and
| only to prevent an immediate murder. the same applies for
| child porn: it is allowed to have child porn to prevent
| child porn, but only under certain legally-prescribed
| circumstances.
| mike_d wrote:
| Research has to be done in partnership with the NCMEC, which in
| turn partners with the Department of Justice to run the
| database of known CSAM material.
| version_five wrote:
| By test it, do you mean see if the police show up at your door?
| If you know how it works, you just need a list of hashes and a
| way to find a collision, which I believe exists.
|
| Otherwise, you're really just highlighting the problem with all
| closed-source software: you don't really have a way to check
| what it does, so you have to trust the vendor.
| rootusrootus wrote:
| We already know that a hash collision doesn't get far enough
| to involve police showing up at your door, so a full test
| would take something more substantial.
| perihelions wrote:
| Obviously, _definitionally_, it's impossible to verify that
| server-side logic isn't doing something evil. (Local
| homomorphic protocols count, when the secret logic is imported
| from remote servers).
|
| This is one reason FOSS is actually-important and actually-
| relevant. Isn't it valid to know exactly what your personal
| computer is doing, to be able to trust your own possessions?
| Richard Stallman was *never* crazy; his understanding of these
| issues is so cynical as to be shrill and off-putting, _but
| that's well-calibrated to the severity of the issues at stake_.
|
| You joke about anti-Putin memes. Here's a thought for well-
| calibrated cynics: Apple solemnly swears its hashes are
| attested by at least two independent countries. Russia and
| Belarus are two independent countries.
| azinman2 wrote:
| You mean one of the countries that device sales just stopped
| in? And the other already was announced to be a US org? And
| you need the intersection of both?
| perihelions wrote:
| - _" And the other already was announced to be a US org?"_
|
| Then one rogue employee in a US org could be sufficient to
| get selective root to every Apple device everywhere? That's
| easy for a nation-state adversary. Here's demonstrated
| examples: MBS had US-based moles in Twitter corporate
| spying on Khashoggi [0], and Xi had Chinese-based Zoom
| employees spying on dissidents in America [1].
|
| [0] https://www.npr.org/2019/11/06/777098293/2-former-
| twitter-em...
|
| [1] https://www.justice.gov/opa/pr/china-based-executive-
| us-tele...
|
| That second example is topical: the Chinese state used
| their Zoom assets to attempt to frame Americans for CSAM
| possession.
|
| - _" As detailed in the complaint, Jin's co-conspirators
| created fake email accounts and Company-1 accounts in the
| names of others, including PRC political dissidents, to
| fabricate evidence that the hosts of and participants in
| the meetings to commemorate the Tiananmen Square massacre
| were supporting terrorist organizations, inciting violence
| or distributing child pornography. The fabricated evidence
| falsely asserted that the meetings included discussions of
| child abuse or exploitation, terrorism, racism or
| incitements to violence, and sometimes included screenshots
| of the purported participants' user profiles featuring, for
| example, a masked person holding a flag resembling that of
| the Islamic State terrorist group."_
| azinman2 wrote:
| You need the intersection of both, which this
| hypothetical doesn't account for.
|
| In terms of planting, it's already much easier to do that
| across the many, many cloud services that secretly scan
| on the backend. Going down a whole weird route just to get
| images into a hash database, and then the matching images
| onto the device, only for them to then get independent human
| verification, seems totally unnecessary if you're a state
| agent. Why do something so complicated when there are
| easier routes to go?
| bitwize wrote:
| The FBI also has a way of "discovering" CSAM on the
| computers of uncooperative informants/suspects.
|
| https://www.pilotonline.com/nation-
| world/article_b02c37d2-ca...
| User23 wrote:
| What's the right number of lives to destroy over false positives
| from an algorithm? Is it some number other than zero? Why or why
| not?
| amelius wrote:
| You can ask the same question about self-driving cars.
| omginternets wrote:
| Self-driving cars have the potential to reduce serious harm
| by a significant margin. Are you saying the same is true with
| Apple's CSAM-detection measures?
|
| If so, how is curbing CSAM consumption going to prevent
| children from being raped, exactly? And I do mean _exactly_.
| The only arguments I have heard thus far appeal to some vague
| link between producers and consumers, predicated on the idea
| that CSAM producers are doing it for the celebrity
| /notoriety, or for financial profit. Both of these claims are
| highly suspect, and seem to rest on a confusion between
| trading CSAM online and paying traffickers for sex with
| children.
|
| You may be correct, but it's going to take more than a
| superficial comparison to convince anyone.
| passivate wrote:
| >You may be correct, but it's going to take more than a
| superficial comparison to convince anyone.
|
| Well, you just hand-waved the idea that self-driving cars can
| reduce harm by a "significant" margin - based on what,
| exactly?
| omginternets wrote:
| If that isn't true, then yes, we should seriously
| consider abandoning the idea of self-driving cars.
|
| I don't understand, though. Are you saying that this is a
| reason to accept Apple's CSAM-detection?
| passivate wrote:
| Oh, I'm with you on the CSAM. I just wanted clarity about
| the self-driving cars; it's a pet peeve of mine.
| mejutoco wrote:
| > If that isn't true, then yes, we should seriously
| consider abandoning the idea of self-driving cars.
|
| I wanted to share a thought I find controversial:
| self-driving cars are the US response to trains. Taxes for
| railways are not palatable, but private vehicles on special
| roads, with profit to be made, are great for the car
| industry. That, IMO, is what drives (pun intended)
| self-driving cars.
| mdoms wrote:
| 1.35 million deaths per year caused by car accidents.
| That doesn't even account for the people who survive but
| are maimed.
| passivate wrote:
| Self-driving cars are unproven tech. Just pointing to the
| harm that humans cause doesn't mean much.
| Soldiers also accidentally kill innocent civilians - are
| you in favor of AI-drones and AI-soldiers??
| sebzim4500 wrote:
| Not the guy you are talking to, but I would 100% be in
| favour of AI-drones if it was shown they made fewer
| mistakes than human operators.
| passivate wrote:
| Sure, we all want fewer people harmed in this world. With
| regard to drones/soldiers - it's a complicated topic that
| needs a lot of discussion, so I don't mean to be as flippant
| about it as I might have come across. I was merely making
| the point that eliminating humans just to reduce the
| mistakes they cause ignores the benefits that humans
| bring - in this case, refusing immoral orders,
| refusing to harm children or non-combatants, exercising
| judgement during war, etc.
| version_five wrote:
| This is ignorant. There is lots of information about the
| safety of human driven cars, it is not a high bar to improve
| upon, and can be verified. The technology in question is
| introducing a new form of potentially ruinous statistical
| surveillance that didn't exist before.
| kaba0 wrote:
| Do you mean the terribly done statistics that collect data
| from uneventful motorway miles, while people
| automatically switch back to manual driving when a
| problematic situation arises, effectively filtering out any
| interesting data?
| KarlKemp wrote:
| You're ignoring the Trolley Problem of it all: is it moral
| to knowingly let uninvolved person X die if it saves the
| lives of Y and Z?
|
| Fortunately, the policy choice at issue here isn't one
| where there is definitive harm on the track, just risks
| that can be compared.
| mejutoco wrote:
| IMO it is more: is it immoral to let person X die while
| justifying it with an unreasonable fetish for tech and
| its unrealized potential?
|
| I am only half kidding :)
| passivate wrote:
| >There is lots of information about the safety of human
| driven cars, it is not a high bar to improve upon, and can
| be verified.
|
| That doesn't make sense. 'Average' drivers collectively get
| into millions of accidents. I don't want a slightly
| above-average driver driving my family around, and I don't
| want slightly above-average drivers around me - especially
| ones I can't communicate with by honking or by shouting to
| get their attention, and algorithms that have no fear for
| their own lives. All software has bugs, and I don't want my
| safety contingent on developers never making mistakes. A
| self-driving car must be orders of magnitude better than the
| BEST human driver, and there must be punitive damages in
| place as a deterrent, to compensate aggrieved parties, etc.
| At present, self-driving cars are unproven, dangerous
| technology that is rightfully being scrutinized.
| sebzim4500 wrote:
| Currently you are surrounded by average drivers. Why are
| you worried about them being replaced with slightly above
| average drivers?
| passivate wrote:
| No, I am not. I haven't gotten into an accident, nor have
| I witnessed one that resulted in a fatality or any major
| injury.
|
| The distribution of skills amongst drivers isn't
| geographically even, nor is it static.
| Schiendelman wrote:
| What makes you think it's not geographically even (at
| least in a country like the US)?
| bastawhiz wrote:
| What measure are you using for "slightly above average"
| that also equals "unproven dangerous"? If the above
| average technology is dangerous, surely the average
| driver is even more dangerous. Otherwise it's not really
| above average, no? I.e., how can you make fewer errors
| and be a measurably better driver but still be worse than
| the thing that you're measurably better than?
|
| Either you're making this argument in bad faith because
| you just don't like self driving cars, or you don't
| believe in statistics as a concept.
| ggreer wrote:
| Self-driving cars substitute for human-driven cars, which
| currently kill over a million people a year. If the first
| mass-adopted self-driving cars have half the fatality rate
| per mile of human-driven cars, then slowing their rollout by
| a day causes 1,800 deaths. Current prototype self-driving
| vehicles already have a lower fatality rate per mile than
| human-driven vehicles. Obviously this isn't an apples-to-
| apples comparison since current self-driving cars are
| constrained to certain locations and weather conditions, but
| if the goal is to minimize deaths, then we should be more
| gung-ho about this technology than we currently are.
|
| In contrast, CSAM scanning substitutes for... I'm not sure
| what. In addition to the risk of false positives, there's
| also the risk that the scanning technology will be used for
| other purposes in the future. I could easily see governments
| forcing Apple to scan everyone's hard drives for hate speech,
| 3d models of prohibited objects (such as gun parts), or
| communications sympathetic to certain groups. Once that door
| is cracked open, there is no closing it.
| simion314 wrote:
| >Obviously this isn't an apples-to-apples comparison since
| current self-driving cars are constrained to certain
| locations...
|
| It is much worse: current so-called self-driving cars
| have human drivers who intervene most of the time and
| save the idiot AI from crashes, but Elon will not count
| those near-crashes as actual crashes.
|
| >but if the goal is to minimize deaths, then we should be
| more gung-ho about this technology than we currently are.
|
| Maybe we should also try the obvious quick fixes at the
| same time?
|
| It would be much cheaper, instead of forcing AI cars on
| people, to mandate, say, a drunk/tired/talking-on-the-phone
| detector, enforce better driving tests before giving
| licenses to drivers, tax vehicle mass to promote lighter
| cars, and enforce speed limits with tech. Do you think Bob
| would prefer to be forced to buy an expensive self-driving
| car to reduce the crash stats, or to buy a safety device
| (black box) that he must install in his car?
| cuteboy19 wrote:
| If the AI and the human both try to correct each other's
| mistakes, wouldn't that make a significantly better
| system?
| cmckn wrote:
| What's the right number of lives to destroy from CSAM? There's
| a middle ground between doing nothing and totalitarianism.
|
| "Destroyed lives" from false positives are at this point
| hypothetical. Child abuse is not. It's fair to be concerned
| about false positives and ensure the system handles such
| failures appropriately. It's also fair to directly intervene in
| the widespread circulation of CSAM.
| creata wrote:
| > "Destroyed lives" from false positives are at this point
| hypothetical. Child abuse is not.
|
| The idea that all of this effort (and all of the direct
| discomfort inflicted on Apple users) will do anything to stop
| child abuse is just as hypothetical.
| vetinari wrote:
| There's really no middle ground; the right number is 0.
|
| _It is better that ten guilty persons escape than that one
| innocent suffer._ -- Blackstone's ratio
| account-5 wrote:
| Or in this situation more like:
|
| _It is better that ten children get sexually abused than
| one innocent person comes under suspicion_
|
| /s
| omginternets wrote:
| This isn't combatting abuse because producing CSAM !=
| consuming CSAM. It's far better that 10 people beat off
| to children than have one innocent person come under
| suspicion.
| account-5 wrote:
| I disagree. Suspicion is not the same as prosecuted, and
| every time someone "beats off" to an image of child
| sexual abuse, that child is re-victimised; every time.
|
| You'd rather 10 children be victimised than 1 person
| falls under suspicion? Ok...
| umanwizard wrote:
| There's a reason he said ten and not a million. This
| argument is absurd when taken to its maximalist conclusion.
|
| Whether a rational person would accept chance X of
| wrongfully being convicted of a crime to decrease the
| chance of being a victim of crime by Y obviously depends on
| the values of X and Y.
| logicchains wrote:
| >What's the right number of lives to destroy from CSAM?
|
| Is there any evidence such things actually reduce the
| production of CSAM? Or is it like the war on drugs, where drug
| production is as high as it's ever been?
| kelnos wrote:
| This is certainly a matter of opinion, but mine is that I
| would rather let an arbitrary number of criminals go free
| than jail (or ruin the life of) even one innocent person.
| devwastaken wrote:
| The middle ground is 0. The idea of accepting jailed innocents
| in equal number to jailed criminals is a mind-numbing leap of
| logic that goes against the very ethos the United States was
| founded on.
| omginternets wrote:
| I understand the sentiment and even share it to some
| extent, but we should really be arguing against the
| strongest possible version of the claim, which is as follows:
| signal-detection theory tells us that there is a direct
| relationship between the number of correct detections and
| false alarms, and the only way to achieve 0 false alarms is
| not to label _anything_ a hit. In the present case, this
| means not prosecuting anyone, ever. That's probably not a
| solution you're happy with.
|
| Therefore, the argument is one of degree. I agree with you
| that Apple's CSAM-detection is going too far, and this is
| what we should be articulating. Chanting "not even one" is
| not particularly convincing, nor sensible.
| jchw wrote:
| From this mechanism? Zero. And even if it _were_ zero
| anyway, these measures are not justified. There are plenty
| of other ways to catch criminals that don't involve
| dubious phoning home. Devices that treat their owners like
| potential criminals are nothing more than government
| informants.
|
| Also... The goal of law enforcement is not to vengefully
| destroy the lives of people who commit crime, but that's a
| whole different can of tuna. Still worth noting, because it
| hints at a larger problem about how we approach heinous
| crimes.
| avazhi wrote:
| CSAM is, by definition, photography or videography of
| something that has already happened. Therefore, quite
| literally, doing absolutely nothing about CSAM itself would
| result in no harm to any child, as the harm has already
| occurred.
|
| Now you're probably going to cry about incentivising or
| normalising CSAM - but that's a different argument. And if
| you then try to argue that the normalisation of CSAM would
| somehow encourage people to abuse children, well then you're
| really off into the zero-evidence weeds. Go look at porn
| research (the actual Google Scholar/JSTOR/Elsevier kind), and
| you'll see that almost everybody who looks at porn neither
| wants to nor would actually do what they see in porn, if they
| were given the opportunity. Surprise, surprise, most people
| wouldn't actually get gangbanged/bukkake'd/fucked by their
| sibling, mom, dad, grandpa/pooped on by their next door
| bespectacled red-headed neighbour, etc.
|
| Nor is there any evidence that inadvertently coming across
| CSAM turns people into pedophiles (news flash: pedos were
| turned on by kids long before they were ever exposed to CSAM
| on the internet), and porn itself is almost invariably used
| for fantasy or as something wholly unrealistic that people
| get off to precisely because it's unrealistic. Even though it
| might be unsavoury to do so, we could follow this reasoning
| to its extreme but undoubtedly true conclusion and state that
| there are individuals who get off to CSAM notwithstanding
| that they would never themselves abuse children.
|
| So to recap, 0 children would be saved from harm because the
| CSAM itself is ex post facto; it wouldn't de-incentivise CSAM
| because the demand and markets for CSAM and pedophilia
| existed long before the internet was a thing, and pedophiles
| will find avenues around dumbass implementations like Apple's
| scanning (TOR, anyone? not using an iPhone/Mac?); and,
| finally, just because somebody looks at CSAM doesn't mean
| they're an actual pedo or would ever harm children
| themselves. The fact that possession of such material is
| illegal is not to the point - Apple is not the police, and
| the police and other executive agencies need warrants for
| this kind of thing (in common law countries this notion is
| more than 400 years old).
|
| Meanwhile, we know the false positive risks are not
| insignificant - look at the white papers yourself, or just
| look at the numbers that smart people have crunched. The best
| part is that even though Apple says the false positive rate
| is 1 in a trillion accounts, people's photo libraries are
| exactly the sort of thing you can't extrapolate
| statistically. Maybe your Aunt June really likes photos of
| her nephews in the pool, and she's got a library with 300
| photos of her nephews half naked and swimming when they
| visited her last year. Apple has no fucking clue whether they
| would or would not trigger its scanner, because it currently
| does not have access to Aunt June's unique photos to test vs
| its database. Apple quite literally doesn't know what the
| fuck will happen. I and many others find that abhorrent when
| you consider the effect that even the mere accusation of
| pedophilia has on a person's life. And that isn't even to
| start the discussion of what kind of precedent on-device
| scanning would set for other subjects (political and
| religious dissent, for example - if not in America then in
| places like China).
|
| Apple and everybody else can fuck right off with this
| Orwellian shit.
| cmckn wrote:
| I've worked jobs in which I was exposed to this content
| regularly. It's disturbing, sometimes extremely so. Just
| because someone does not abuse a child after viewing this
| content does not mean the content causes no harm to either
| individuals or our society at large. I don't want to live
| in a world where CSAM is tolerated in order to keep
| pedophiles satiated.
| omginternets wrote:
| [Redacted because avazhi said it better.]
| avazhi wrote:
| First you couched your position as being about the
| children and harm to children; now you're talking about -
| as best I can tell - psychological harm to society writ
| large which you're asserting would occur based on your
| own experience. The part about CSAM being tolerated in
| order to keep pedos satiated seems like a non sequitur
| but honestly I don't really understand what you're trying
| to say, so... it doesn't sound like a very good faith
| discussion to me.
|
| I do hope you get whatever support you need for something
| that's apparently affected you. Take care.
| User23 wrote:
| > Therefore, quite literally, doing absolutely nothing
| about CSAM itself would result in no harm to any child, as
| the harm has already occurred.
|
| By this "moral" reasoning you're also fully supportive of
| revenge porn.
| supramouse wrote:
| Is this opinion, or is it backed by something?
| rootusrootus wrote:
| > the harm has already occurred
|
| Circulating images of a minor child engaged in sexual abuse
| do not constitute an ongoing harm to that child? That's a
| fascinating viewpoint.
|
| > Apple and everybody else can fuck right off with this
| Orwellian shit.
|
| Right along with people who think child abuse images should
| be okay to keep as long as you aren't the one who made
| them.
| charcircuit wrote:
| The problem of having porn posted of yourself is not
| something that applies only to minors. It can happen with
| any age.
| avazhi wrote:
| First, no, I don't think continued circulation ex post is
| comparable to the harm that occurs at the time the abuse
| physically occurs. My own view is that whatever feelings
| flow from knowing that the images are 'circulating' aren't
| harm at all. Less personally, lingering negative effects
| from some event in the form of flashbacks or unpleasant
| memories are not new instances of harm as a matter of law
| (for whatever that's worth), and I think it goes beyond
| straining common sense to use the term 'harm' in that
| way.
|
| But let's assume you're right.
|
| You think that pedophiles won't find ways to share
| content even if every tech company in the world
| implemented this? You think pedophiles don't and won't
| have terabytes of child porn backed up on hard disks
| around the world that will be distributed and circulate
| for the next millennium and beyond, even if it has to be
| carried around on USBs or burned to CDs (which aren't
| exactly amenable to CSAM scanning), and then saved to
| offline computers? Put another way, even if the internet
| shut down tomorrow, plenty of pedophiles around the world
| would continue jacking off to those images and sharing
| them with their buddies - you don't need the internet for
| that.
|
| Further, even if you could convince me that it's harmful
| in the sense that I understand the word, I'm not sure I'd
| ever be persuaded that the amount of harm could be
| sufficient to outweigh the harms that would result from
| the scanning itself.
|
| Happy to listen, though.
| bitwize wrote:
| > Circulating images of a minor child engaged in sexual
| abuse do not constitute an ongoing harm to that child?
|
| When you do it, it does. When law enforcement does it,
| apparently not...
| WinterMount223 wrote:
| There are levels of harm. It's not boolean.
| [deleted]
| kayodelycaon wrote:
| Okay... iOS has had on device image recognition since 2016.
| [deleted]
| lilyball wrote:
| Recognizing "this is a cat" and "this is a specific painting of
| a cat" are different challenges though.
___________________________________________________________________
(page generated 2022-03-20 23:00 UTC)