 (DIR) Post #Aco03xQt9rvWznUQyG by simon@fedi.simonwillison.net
       2023-12-14T16:17:00Z
       
       0 likes, 1 repeats
       
        I wrote about the AI trust crisis: when companies like Dropbox and OpenAI say "we won't train models on your private data", it's increasingly clear that a lot of people simply don't believe them. https://simonwillison.net/2023/Dec/14/ai-trust-crisis/
       
 (DIR) Post #Aco0nid2X0LIP1zhHk by joelanman@hachyderm.io
       2023-12-14T16:25:00Z
       
       0 likes, 0 repeats
       
       @simon Have they ever clearly published what sources they have trained/do train on?
       
 (DIR) Post #Aco12F5sciAz96PRBo by blindcoder@toot.berlin
       2023-12-14T16:27:13Z
       
       0 likes, 0 repeats
       
        @simon That crisis started when "Privacy Policies" just outlined how much they do NOT care about their users' data privacy.
       
 (DIR) Post #Aco1GrqfKbvhOZGatk by simon@fedi.simonwillison.net
       2023-12-14T16:27:32Z
       
       0 likes, 0 repeats
       
        @joelanman No, and that's a huge problem - if they say "we don't train on X" but they won't say what they DO train on, what are we meant to think?
       
 (DIR) Post #Aco1UZbTxiq6rchTF2 by hannah@posts.rat.pictures
       2023-12-14T16:30:24Z
       
       0 likes, 0 repeats
       
       @simon fwiw I also thought "phone is listening to you" was just a believable myth but I saw this recently and now I'm not so sure https://www.cmglocalsolutions.com/cmg-active-listening
       
 (DIR) Post #Aco1hOPasshluyFhaK by andreagrandi@mastodon.social
       2023-12-14T16:31:14Z
       
       0 likes, 0 repeats
       
        @simon Dropbox enabled this feature, passing users' data to OpenAI, without asking their users' consent. The first to lose trust is Dropbox, not OpenAI. The trust issue comes way before knowing what they will do with the data.
       
 (DIR) Post #Aco201hFYy6oyCObxY by miki@dragonscave.space
       2023-12-14T16:34:52Z
       
       0 likes, 0 repeats
       
       @simon Personal experience re: the Facebook conspiracy. The only way I found to convince people this is untrue is to show them when the same exact thing happens on linear TV. We discuss something, and 15 minutes later, the ad for that thing comes on. Making them understand that if this was Facebook instead of TV, they'd blame it on their phones listening goes a long way.
       
 (DIR) Post #Aco2HkCPMBRdunPzxw by phillmv@hachyderm.io
       2023-12-14T16:38:52Z
       
       0 likes, 0 repeats
       
       @simon @joelanman given that you can’t guarantee that training material won’t be regurgitated by the model, it’s entirely self inflicted that companies aren’t being extremely explicit about the privacy boundaries around what will and won’t get slurped up 😵
       
 (DIR) Post #Aco2SlCLyyECCPe6Cm by simon@fedi.simonwillison.net
       2023-12-14T16:40:59Z
       
       0 likes, 0 repeats
       
       @hannah I have so many questions about that - it's hard to decode what the thing is really doing from the breathless marketing copy, but it still raises SO many red flags
       
 (DIR) Post #Aco2kKssYYJDneuwBE by simon@fedi.simonwillison.net
       2023-12-14T16:42:04Z
       
       0 likes, 0 repeats
       
        @phillmv @joelanman Right, which is why it's so important that we know exactly what material gets piped into the training flow and what doesn't.
        
        The thing I worry about most is documents pasted into ChatGPT for summarization etc - because unlike the OpenAI APIs, they DO say that they use ChatGPT data to improve their models
       
 (DIR) Post #Aco2vv7w98ydOHU6Km by jeremiah@tldr.nettime.org
       2023-12-14T16:45:13Z
       
       0 likes, 0 repeats
       
        @simon I was one of the people who canceled upon discovering this. I moved all my files yesterday. I think there's more than simply not trusting them. I mean, the lack of trust is there; nearly two decades of watching EULAs be the reason companies do whatever they want is a big reason for that mistrust. There's more though: AI is a climate catastrophe that we're just kind of letting happen, I don't want remix engine features on my files, OpenAI is staffed by a lot of folks with frankly weird opinions about AI, I find the parlor trick of LLMs mediocre compared to the received hype, and the presumption that I would by default want to use these features is offensive.
       
 (DIR) Post #Aco30vzWMQ2HWEwGtU by friendlymike@androiddev.social
       2023-12-14T16:51:56Z
       
       0 likes, 0 repeats
       
        @simon @zachklipp problem is that we're now just 1 more toggle and 1 more product manager wanting a promo away from "click here to opt out of training"
       
 (DIR) Post #Aco37aZwN1yH8V9ArQ by MickG59@mastodonapp.uk
       2023-12-14T16:45:36Z
       
       0 likes, 0 repeats
       
        @simon A bit like Google's 'Don't be Evil', later amended to 'Do the Right Thing'. How has that worked out?
       
 (DIR) Post #Aco3Jete8uBO0E7TQO by lerudd@social.vivaldi.net
       2023-12-14T16:45:42Z
       
       0 likes, 0 repeats
       
        @simon if they simply said, "Your data is never shared with anyone, and remains E2E encrypted on our servers and only you and those you authorize can access it...and this is why there's no opt-out button here for AI processing." then that would gain some user trust. As it is, their statement is full of weasel-words that necessarily need clarification via the Learn more link, which then will dive into the legalese that likely says, "well, yeah for APPROVED purposes, we'll let them have their way with your anonymized data" -- which makes it even worse, since that means they retain full rights to do whatever they want with your data, including read and alter it and send it to someone else to do more with it.
       
 (DIR) Post #Aco3gSnVIDRX8e7XA8 by kimvanwyk@fosstodon.org
       2023-12-14T16:47:12Z
       
       0 likes, 0 repeats
       
       @simon you're spot on about the perception of a trust-breaching action being more important than the reality. I don't fully understand what the auto opted-in toggle implied on Dropbox but I now don't trust Dropbox to do the one thing I pay them for - keep my files secure and accessible only to me. Any auto opt-in option would make me just as nervous about Dropbox, but before Dropbox felt they had to keep up with AI I doubt that opting users in to *anything* would have been considered.
       
 (DIR) Post #Aco3v4NcMJ49kiyebA by js@mastodon.nl
       2023-12-14T16:48:23Z
       
       0 likes, 0 repeats
       
       @simon fully earned distrust.
       
 (DIR) Post #Aco4OFpqqZWhg96FG4 by ted@an.errant.cloud
       2023-12-14T16:59:33Z
       
       0 likes, 0 repeats
       
       @simon @hannah uhhhhhh.  This is very much not correct.
       
 (DIR) Post #Aco4cD4duWKHAVfluC by hannah@posts.rat.pictures
       2023-12-14T16:59:53Z
       
       0 likes, 0 repeats
       
       @simon "Creepy? Sure. Great for marketing? Definitely": https://www.cmglocalsolutions.com/blog/how-voice-data-works-and-how-you-can-use-it-in-your-business
       
 (DIR) Post #Aco4urzKWyXtH55Mky by Wbjwilliams@infosec.exchange
       2023-12-14T17:09:51Z
       
       0 likes, 0 repeats
       
        @simon considering how much of their models are based on the theft of copyrighted material without permission, they’ve already shown themselves to be thieves.
       
 (DIR) Post #Aco57CwcRRvVkChVEu by simon@fedi.simonwillison.net
       2023-12-14T17:10:17Z
       
       0 likes, 0 repeats
       
       @hannah That "Here's how we do it:" list in that article notably does NOT talk about using audio data - it describes the same approach used by every other targeting firm, no voice information includedI think they're taking advantage of the audio tracking conspiracy theories and using it for marketing
       
 (DIR) Post #Aco6aFsCjPYVBD3Ba4 by hannah@posts.rat.pictures
       2023-12-14T17:29:47Z
       
       0 likes, 0 repeats
       
        @simon the list doesn't mention voice specifically, but I don't know how to square what you're saying with statements like "Our technology is on the cutting edge of voice data processing. We can identify buyers based on casual conversations in real time."
       
 (DIR) Post #Aco6tun2pOjjaD0mHI by astatide@hachyderm.io
       2023-12-14T17:33:36Z
       
       0 likes, 0 repeats
       
        @simon I agree with a lot of what you say, except I don't quite agree with the analogy you came up with re: the microphone, as that one IS easy to disprove (although I don't agree that they'd take a mortal blow to their reputation if they were discovered). In this case, the data is absolutely sent to OpenAI, where in the microphone case it's never collected/sent to Meta.
        
        I don't think they'll *train* on it, necessarily, but I do NOT trust OpenAI as a company, and critically every new link...
       
 (DIR) Post #Aco76OtWj3oqQRzsq8 by julian@fietkau.social
       2023-12-14T17:34:57Z
       
       0 likes, 0 repeats
       
        @simon Lotta good and broadly agreeable thoughts! But I feel you understate how often tech companies have gotten away with lying – or more precisely, with violating prior guarantees.
        
        My go-to example is how, when Facebook introduced SMS-based 2FA, privacy experts worried that the possibility of the numbers being used for ad targeting would discourage people from securing their accounts.
        
        (1/2)
       
 (DIR) Post #Aco7qd6OEd2FoI3cn2 by krisnelson@legal.social
       2023-12-14T17:44:05Z
       
       0 likes, 0 repeats
       
        @simon This is a really excellent discussion! I'm with you right up to the very end, but I did find your closing thoughts somewhat disquieting (perhaps in a positive way): "companies need to earn our trust. How can we help them do that?" I'm just not convinced it's up to *us* to "help them ... earn our trust." Why is that my responsibility?
        
        Then again, I do think well-written gov reg could help with privacy & trust, and inasmuch as the gov is *us*, then I think you might well be on to something.
       
 (DIR) Post #Aco8JoWqpU4teDLnge by SnoopJ@hachyderm.io
       2023-12-14T17:46:29Z
       
       0 likes, 0 repeats
       
        @simon a wrinkle not mentioned in your article:
        
        Even if a user does trust them not to use the data maliciously (and they have done very little to earn that trust), there's also the matter of trusting them to treat the data with appropriate security. To wit: OpenAI has already had an incident that leaked user data between sessions.
        
        It's a steep climb and I think users are largely correct in assuming the worst on the part of these orgs.
       
 (DIR) Post #Aco8feFdiopk8viAqG by billyjoebowers@mastodon.online
       2023-12-14T17:53:10Z
       
       0 likes, 0 repeats
       
       @simon Correct, I do not trust them.
       
 (DIR) Post #Aco97ZMDVjcn4b6Muu by luis_in_brief@social.coop
       2023-12-14T17:58:00Z
       
       0 likes, 0 repeats
       
        @simon this brings to mind the high-trust/low-trust culture discussion. Sure feels like in tech we've built a low-trust culture, and now we have to live in it.
        
        (Insert reaping/sowing fuck yeah/this fucking sucks meme here)
       
 (DIR) Post #Aco9SLR2T6mxyXGPFg by ncweaver@thecooltable.wtf
       2023-12-14T18:02:01Z
       
       0 likes, 0 repeats
       
        @simon One addition, it does NOT say your data will not be used to train Dropbox's customized model, just that it won't be used to train OpenAI's model.
        
        The NSF sent out a warning that reviewers MUST not allow proposals to be used as training data for a LLM.
       
 (DIR) Post #Aco9jYMJfg0IysGCWm by midgard@framapiaf.org
       2023-12-14T18:05:15Z
       
       0 likes, 0 repeats
       
       @simon"Facebook say they aren’t doing this. The risk to their reputation if they are caught in a lie is astronomical."You seem to have forgotten about stuff like https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Analytica_data_scandal?useskin=vector ?
       
 (DIR) Post #AcoBU3MEqo0WOexcqe by 990000@mstdn.social
       2023-12-14T18:25:09Z
       
       0 likes, 0 repeats
       
       @simon I'm glad you mentioned the Instagram microphone spying phenomenon. It happens to me every month or every other month. I'm definitely considering canceling my Dropbox Plus, partially because I use it a lot less, but also because of OpenAI and this default opt-in that they did.
       
 (DIR) Post #AcoBfevtUIkI7SAkkK by adriano@lile.cl
       2023-12-14T18:26:44Z
       
       0 likes, 0 repeats
       
        @simon I honestly don't know why you chose to end this piece with "How can we help them do that [earn our trust]?" when the problem is absolutely *theirs*, and *they* need to do the work to recover trust *they* have broken.
        
        Dropbox says "Your data is never used to train their internal models and is deleted from third-party servers within 30 days"
        
        Why would my data even need to be on third-party servers that aren't using it. Why would they choose that phrasing. Why do that.
       
 (DIR) Post #AcoCrJydKJZIYTZvZg by simon@fedi.simonwillison.net
       2023-12-14T18:40:16Z
       
       0 likes, 0 repeats
       
        @adriano Yeah, slightly clumsy wording. I wanted something that WE can do.
        
        The clear need here is for the AI companies to work really, really hard to regain that trust. I don't think they understand how serious the problem is, and I don't think they know what they would need to do to fix it.
        
        I was clumsily indicating that the open question we can help with is to make it clear to them what they need to be doing in order to regain our trust.
       
 (DIR) Post #AcoD2eRHabbLCcNigy by simon@fedi.simonwillison.net
       2023-12-14T18:41:43Z
       
       0 likes, 0 repeats
       
        @midgard That demonstrates my point: it was a scandal! It embarrassed Facebook greatly, and they took steps to stop it from happening again.
        
        (I know they took steps because I was developing software against Facebook's APIs at the time and overnight several of the features I was using permanently stopped working.)
       
 (DIR) Post #AcoDH9TUzoXnVpKgSW by simon@fedi.simonwillison.net
       2023-12-14T18:43:25Z
       
       0 likes, 0 repeats
       
        @ncweaver Dropbox said "We will not use customer data to train AI models without consent" - without clarifying what "consent" meant.
        
        My hunch is that they were being clumsy with their wording there, but that clumsiness is itself a huge problem!
       
 (DIR) Post #AcoDUTVVpKwvPGcxH6 by simon@fedi.simonwillison.net
       2023-12-14T18:46:24Z
       
       0 likes, 0 repeats
       
       @hannah Honestly I think they're straight-up lying about using voice.
       
 (DIR) Post #AcoDyHTH8pD2lTnGM4 by jamesbritt@mastodon.social
       2023-12-14T18:49:46Z
       
       0 likes, 0 repeats
       
       @simon Dropbox made their indifference to ethics clear when they put Condoleezza Rice on their Board of Directors.
       
 (DIR) Post #AcoEHg5RqRx2bwtijY by grant_h@mastodon.social
       2023-12-14T17:22:34Z
       
       0 likes, 0 repeats
       
       @Wbjwilliams @simon A brief visit to https://haveibeentrained.com is sufficient to show the intent. All my deviantArt images are clearly labelled either CC-BY-NC or fully copyrighted - and ~150 appear there. That is a deliberate decision to ignore the metadata on the pages. I assume these datasets have been used by OpenAI. Once a company is deliberately ignoring "opt out" instructions, it seems their intent is clear?
       
 (DIR) Post #AcoEHh7bztHBowuxkG by simon@fedi.simonwillison.net
       2023-12-14T18:52:11Z
       
       0 likes, 0 repeats
       
        @grant_h @Wbjwilliams That site covers art that's been used to train Stable Diffusion and Midjourney, but it's not at all clear if that work was used for DALL-E. That's another part of the problem: AI companies should disclose the material they trained on.
       
 (DIR) Post #AcoEyAHvwn10RQzssy by lobotomy42@mstdn.party
       2023-12-14T19:03:54Z
       
       0 likes, 0 repeats
       
       @simon I think it really doesn't help OpenAI's case that they trained GPT on vast swathes of the web -- including copyrighted and paywalled content -- as well as (seemingly) books under copyright and user-generated content and essentially didn't tell anyone (outside of other AI researchers.) If you've already seen the company scoop up lots of content that wasn't theirs, what does it matter that they say "Well we're not doing that NOW"
       
 (DIR) Post #AcoFPaywoBX538CEwy by zilahu@mastodon.laca.dev
       2023-12-14T19:08:57Z
       
       0 likes, 0 repeats
       
        @simon because they're simply lying. Introducing an opt-out AI search feature in a service where people store their private data. Shady. @lisamelton
       
 (DIR) Post #AcoGt2oMWE51iEqqe0 by Npars01@mstdn.social
       2023-12-14T19:25:24Z
       
       0 likes, 0 repeats
       
        @simon When creepy dudes in white vans offer school children walking to school a puppy or candy for "free", we know how to respond.
        
        When Google and Microsoft offer "free" email or backups or storage, you'd think we'd recognize the creepy dude, but no...
       
 (DIR) Post #AcoH5iPNgh7oCNCzpY by kristen_d@mastodon.social
       2023-12-14T19:26:13Z
       
       0 likes, 0 repeats
       
       @simon Because they’re FUCKING LIARS
       
 (DIR) Post #AcoHsYHG8voSy2hGYy by f800gecko@mastodon.online
       2023-12-14T19:36:22Z
       
       0 likes, 0 repeats
       
        @simon Because of course they will. I may be paranoid, but I’m growing convinced Apple is making its virtual keyboard shittier to compel users who don’t care for Siri’s dictation feature (sending every thought one voices to Apple’s servers for typing) to subscribe anyway.
        
        A friend commented after a recent update her typing seemed to have become worse. I’ve been noticing the same.
       
 (DIR) Post #AcoJaDeUHxsnh8wom8 by simon@fedi.simonwillison.net
       2023-12-14T19:55:51Z
       
       0 likes, 0 repeats
       
        @f800gecko Apple are actually the company I trust most on this front, because so many of their AI/machine learning features run on device - photo search, for example.
        
        Apple keyboard dictation works offline - you can switch your phone into airplane mode to demonstrate that
       
 (DIR) Post #AcoJzHurZWLE9wYq36 by f800gecko@mastodon.online
       2023-12-14T19:57:24Z
       
       0 likes, 0 repeats
       
       @simon Thanks for this. I’m back to normal rage levels for the next short while.
       
 (DIR) Post #AcoLhQ5Kf4R8Bs46ts by dragoniff2@kolektiva.social
       2023-12-14T20:19:29Z
       
       0 likes, 0 repeats
       
        @simon Billion-dollar Company that made billions of dollars from selling user data: "Don't worry, we won't sell your data for this very specific purpose"
        
        Users: 🤔 🤔 🤔 🤔 🤔 🤔 🤔
       
 (DIR) Post #AcoNb2NY5Tmqq0fcg4 by shekinahcancook@babka.social
       2023-12-14T20:40:43Z
       
       0 likes, 0 repeats
       
       @simon Well, duh. They've lied about everything else. They're not going to stop lying now.
       
 (DIR) Post #AcoNqZyjRkjBOCTPpg by NeadReport@social.vivaldi.net
       2023-12-14T20:42:15Z
       
       0 likes, 0 repeats
       
        @simon Let's see that audit process for third party servers AND how tech partners are "vetted".
        
        A lot of goodies can be mined in 30 days.
       
 (DIR) Post #AcoOV2irznqM7ZLfwe by simon@fedi.simonwillison.net
       2023-12-14T20:50:48Z
       
       0 likes, 0 repeats
       
        @dingemansemark I didn't think I skipped over that - I explicitly mentioned it in the article, with the screenshot.
        
        Hah, that page is now gone!
       
 (DIR) Post #AcoQ3FBSGwPgZIaiVk by AlgoCompSynth@ravenation.club
       2023-12-14T21:08:01Z
       
       0 likes, 0 repeats
       
       @simon Not only do I not trust them to not do that deliberately, I do not believe they understand the risks of the technologies they are deploying.
       
 (DIR) Post #AcoRqPj5xoQbPRKtnM by subterfugue@sfba.social
       2023-12-14T21:28:14Z
       
       0 likes, 0 repeats
       
        @simon I think you are naive about whether government will sue corps for privacy violations. We already know that our own gov't illegally tapped phone communications. And we know that Amazon sells data from their "ring" cameras to local police. And social media do similar.
        
        I agree with you that transparency is key, and that it not being provided is a huge problem.
       
 (DIR) Post #AcoSjy1c8xbdiCUJo8 by SynAck@corteximplant.com
       2023-12-14T21:38:22Z
       
       0 likes, 0 repeats
       
        @simon I think that there are 3 simple things that companies can do to regain some of the public's trust:
        
        - Be completely transparent about where all PII data goes and what they do with it (and those ludicrous "we respect your privacy" banners are inadequate).
        - Stop burying vague wording in EULAs that are thousands of lines long that nobody can read which give them the "but you agreed" excuse. In fact, stop using EULAs entirely.
        - Stop automatically adding features that nobody asked for and make them opt-in by default. If a user doesn't explicitly opt-in, then that functionality should stay completely off.
        - Stop making it hard (impossible) to opt out of these services.
        
        The public is already doing what they need to do: leaving those services when we don't like what they're doing. The only thing these companies (or their investors) understand is dollars. So if they change the service I signed up for in ways I do not like, then I'm going to go somewhere else. Legal action is a laughable remedy as there is absolutely no real accountability for any of these companies because the law is too behind to keep up with the tech issues and Big Tech lobbying ensures that it stays that way. Rebuilding trust is the responsibility of the party that broke that trust in the first place, not the party betrayed.
       
 (DIR) Post #AcoVlB9P7zVuZYOvuC by SpaceLifeForm@infosec.exchange
       2023-12-14T22:11:54Z
       
       0 likes, 0 repeats
       
        @simon They will lie until they are dissolved via bankruptcy.
        
        The Love of Money is the Root of all Evil.
       
 (DIR) Post #AcobWnDbuT5o1EtMhs by simon@fedi.simonwillison.net
       2023-12-14T23:16:43Z
       
       0 likes, 0 repeats
       
       @dingemansemark @abucci I wouldn't be at all surprised to hear it was on by default for US, off by default for EU
       
 (DIR) Post #Acopyl2ezjAgjczzua by dabreese00@mas.to
       2023-12-15T01:58:42Z
       
       0 likes, 0 repeats
       
       @simon @midgard The problem is that the abuses keep happening, regardless of any "steps" (which most of us can't easily see anyway). We've rarely, if ever, seen meaningful accountability from tech companies for these chronic, high-impact betrayals of trust. Executives and shareholders keep right on getting what they were always in it for. Users and regular people foot the bill. About trusting them now, it makes more sense to ask, not "how can we", but "why should we"?
       
 (DIR) Post #Acoqxx76j73bq6G7aC by dabreese00@mas.to
       2023-12-15T02:10:03Z
       
       0 likes, 0 repeats
       
       @simon @adriano I understand your concern for this is genuine and well meaning, but it's not clear to me that they can, or should, regain our trust. Once you've brashly flouted accountability for your betrayals enough times, at a certain point, you no longer really keep getting serious chances to mend things.
       
 (DIR) Post #AcouxO8z8uufCQ7jsG by pgroce@mastodon.social
       2023-12-15T02:54:24Z
       
       0 likes, 0 repeats
       
       @simon The tone of this piece seems to be that good faith is the reasonable presumption here, and the companies, and government, need to do more to make us feel comfortable. I agree wrt government, though you and I can probably both agree that’s not going to happen. Wrt presumption of bad faith being conspiracy theory-level irrational: https://mastodon.social/@jasonkoebler/111581365673808662
       
 (DIR) Post #Acp5EOReHAUAb6bJ6e by lerudd@social.vivaldi.net
       2023-12-15T04:49:36Z
       
       0 likes, 0 repeats
       
       @simon @ncweaver that is a really good point. 'consent' is likely buried deep in the user agreement and it's now been updated just a tich...
       
 (DIR) Post #AcpP8fJQeGj6FyIn7w by zleap@qoto.org
       2023-12-15T08:32:38Z
       
       0 likes, 0 repeats
       
        @simon I found some articles on how to stop them sharing data with OpenAI
       
 (DIR) Post #AcpQyGc3pFSQfIBcTw by castarco@hachyderm.io
       2023-12-15T08:53:11Z
       
       0 likes, 0 repeats
       
        @simon I know it's super-pedantic to say this, but I think we should stop saying AI until we have one of them for real. As of today, they are "just" LLMs with some small extras.
        
        I'm writing this only because there's a lot of people who read you, so it's fair to say that you have some influence.
       
 (DIR) Post #AcpZtZETBTea2bIoe8 by simon@fedi.simonwillison.net
       2023-12-15T10:33:11Z
       
       0 likes, 0 repeats
       
        @castarco I've changed my mind about that.
        
        I used to avoid the term "AI" because it implied "intelligence" when these things are just fancy autocomplete.
        
        But... the term AI has been a field of academic research since 1956, and that field is 100% about what this stuff does.
        
        Now I think we need to reclaim the term AI from science fiction - we have the terms AGI and ASI for the sci-fi ideas of AI
       
 (DIR) Post #AcpgXrZuzdIsFRvsyO by jezdez@publicidentity.net
       2023-12-15T11:47:37Z
       
       0 likes, 0 repeats
       
       @simon @hannah Some media response to that thread: https://www.404media.co/cmg-cox-media-actually-listening-to-phones-smartspeakers-for-ads-marketing/
       
 (DIR) Post #Acpjq9O9oToMYpqL7g by bontchev@infosec.exchange
       2023-12-15T12:24:52Z
       
       0 likes, 0 repeats
       
       @simon BTW, does this affect only the paid customers? I don't have such a setting at all in my account.
       
 (DIR) Post #Acpk1kFV6j6W4CK4x6 by simon@fedi.simonwillison.net
       2023-12-15T12:27:05Z
       
       0 likes, 0 repeats
       
        @bontchev I think they've removed the setting - the link I provided to it no longer works for me.
        
        My hunch is that Dropbox are frantically changing how all of this stuff works based on the uproar
       
 (DIR) Post #Acpkbx3kxIeKDt5tpY by ed@social.opensource.org
       2023-12-15T12:32:23Z
       
       0 likes, 0 repeats
       
       @simon @castarco I came to the same conclusion while thinking about what to name the definition of Open Source for this field. AI is an old scientific discipline, well established, there is no advantage in inventing new names just because we're going through a hype cycle.
       
 (DIR) Post #AcpmuoB6OuCS5VeKRM by catsalad@infosec.exchange
       2023-12-15T12:54:06Z
       
       0 likes, 0 repeats
       
        @simon @bontchev Step 1. Train AI with accumulated customer data
        Step 2. Policy change stating AI trained with your data
        Step 3. Change policy again if fallout (AI already trained) 🤷
       
 (DIR) Post #AcprnQ8fsYJHH8oJTU by yacc143@mastodon.social
       2023-12-15T13:53:54Z
       
       0 likes, 0 repeats
       
        @simon Yes, but how is “I don't trust big tech/big corp” related, especially to AI?
        
        To be honest, literally everything these gangsters say and announce has to be analysed with the most cynical mindset. Parsed basically like by a hostile lawyer.
        
        So Dropbox & co said "THEY" will not ....
        
        Now that raises immediately the question in my mind about sister companies, subsidiaries, 3rd parties, .... Sorry, they say they won't train "internal" models on your data. "external"?
       
 (DIR) Post #AcptZ1YUgPQusw9NbM by duckwhistle@mastodon.org.uk
       2023-12-15T14:13:28Z
       
       0 likes, 0 repeats
       
        @simon When they say that, they are effectively saying that they could do it, they've just decided not to. Even if you're happy to take them at their word, the unspoken part is that they could change their mind whenever they felt like it.
        
        They only have two duties, to make a profit for shareholders, and comply with the law. Without legal protection there should be no expectations for businesses to keep their word any longer than is convenient for them.
       
 (DIR) Post #Acq1Ter2SLhkfHmDDs by simon@fedi.simonwillison.net
       2023-12-15T15:41:01Z
       
       0 likes, 0 repeats
       
       @yuki2501 I think Dropbox are frantically changing how this works right now
       
 (DIR) Post #AcqqR4HUYk7EJlfkKe by simon@fedi.simonwillison.net
       2023-12-16T01:13:12Z
       
       0 likes, 0 repeats
       
       @bipolaron yeah, that "internal models" thing raises so many more questions than it answers!
       
 (DIR) Post #ActmPk3uQnMIyfJLiC by inakicalvo@mastodon.social
       2023-12-17T11:12:04Z
       
       0 likes, 0 repeats
       
       @simon experience has taught us not to trust them much...
       
 (DIR) Post #ActpMxTWPAAJZTeh9s by lewiscowles1986@phpc.social
       2023-12-17T11:45:35Z
       
       0 likes, 0 repeats
       
        @simon might that be helped by not needing opt-out models, and having full provenance to begin with?
       
 (DIR) Post #Acu4tHFduaMtU824xs by simon@fedi.simonwillison.net
       2023-12-17T14:38:50Z
       
       0 likes, 0 repeats
       
       @lewiscowles1986 agreed: "There is something that the big AI labs could be doing to help here: tell us how you are training!"
       
 (DIR) Post #AcuCWUObfpG7E0s0i8 by Corb_The_Lesser@mastodon.social
       2023-12-17T16:02:56Z
       
       0 likes, 0 repeats
       
       @simon These are the same people I've never trusted to honor the opt-outs they sprinkle around the web. It's an exercise in trusting and believing people who see little dollar value in being trusted and believed.
       
 (DIR) Post #AcuaIIGuLt7YT626sa by lewiscowles1986@phpc.social
       2023-12-17T20:31:04Z
       
       0 likes, 0 repeats
       
        @simon to provide balance, I think that would have been impractical for our first few iterations.
        
        Other good news https://slashdot.org/story/23/12/17/1950238/openais-in-house-initiative-explores-stopping-an-ai-from-going-rogue---with-more-ai They want to prevent skynet 😉