Reprinted from TidBITS by permission; reuse governed by Creative Commons license BY-NC-ND 3.0. TidBITS has offered years of thoughtful commentary on Apple and Internet topics. For free email subscriptions and access to the entire TidBITS archive, visit http://www.tidbits.com/

Should You Let Claude Learn from Your Chats?
Adam Engst

On its blog, [1]Anthropic writes:

  Today, we're rolling out updates to our Consumer Terms and Privacy Policy that will help us deliver even more capable, useful AI models. We're now giving users the choice to allow their data to be used to improve Claude and strengthen our safeguards against harmful usage like scams and abuse. Adjusting your preferences is easy and can be done at any time.

  By participating, you'll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations. You'll also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users.

On the flip side, the thought of AI companies training their models on chat transcripts sets off all sorts of privacy warning flags for many people, including me. My discomfort stems less from immediate real-world concerns than from the innumerable privacy abuses perpetrated by tech companies in pursuit of surveillance advertising dollars. Although it's theoretically possible for sensitive personal information, business details, or intellectual property (like code) absorbed into training data to leak directly into responses, that's unlikely (quite literally, what are the odds?), and it probably wouldn't happen for a year or two anyway, when the models currently in training become public. More concerning is that we don't know how AI may or may not enable future privacy abuses, and once data has been used for training, it's unlikely that it could be 'untrained' from the model. For now, it's better to err on the side of caution.

I primarily use ChatGPT, where I've already turned off the option that lets OpenAI train future models on my chats. Now that Claude offers the option to collect my conversations and hold onto them for five years, I'm turning that off too. Both of those are easy to disable in the settings, but Google makes it harder to turn off conversation collection for Gemini: the trick is to [2]turn off Gemini Apps Activity.

I strongly recommend avoiding Meta AI, which trains on your conversations (and all public Facebook posts and Instagram photos) without allowing you to opt out, and Grok, which trains on all X/Twitter users' posts (including historical posts in otherwise inactive accounts) and on Grok conversations unless you explicitly opt out. Beyond the privacy implications and poor track records of both companies, training models on often unverified, inflammatory, or misleading social media content risks perpetuating those same qualities in AI responses: a classic case of 'garbage in, garbage out.'

References

1. https://www.anthropic.com/news/updates-to-our-consumer-terms
2. https://myactivity.google.com/product/gemini