Posts by tedted@hachyderm.io
 (DIR) Post #ASVZ3CVEgBPYXQc76W by tedted@hachyderm.io
       2023-02-09T14:33:54Z
       
       0 likes, 0 repeats
       
        @filippo The knee-jerk reaction is sad. At the same time, seeing a proposal for telemetry in an entirely new context, without mentioning anything about serious privacy-enhancing techniques with formal guarantees… That's pretty disappointing, especially coming from Google. Your comment prompted me to post on the GitHub issue, though I don't know how optimistic I am that this is more useful than shouting into a void: https://github.com/golang/go/discussions/58409?sort=top#discussioncomment-4919511
       
 (DIR) Post #ASVZ3FLm5ocrMxD6H2 by tedted@hachyderm.io
       2023-02-09T14:56:52Z
       
       0 likes, 0 repeats
       
       @filippo From a privacy perspective, the entire story boils down to "we are removing identifiers" and "we are sampling the data". Like, c'mon. It's 2023, and you have some of the best experts in the world on your payroll for this kind of stuff.
       
 (DIR) Post #AT2ha7dF9WeYxw8YjI by tedted@hachyderm.io
       2023-02-25T15:34:18Z
       
       1 likes, 1 repeats
       
        @filippo @sophieschmieg 2) doesn't make sense to me. If an attacker weaponizing correlations between data points is part of your threat model, that's a problem regardless of whether you use DP. DP doesn't make that aspect worse; in fact it makes it much better (since you can measure it). The "simple is better" argument also puzzles me a little. Sending data in cleartext is simpler than encrypting it, most people don't understand encryption, but we encrypt data in transit anyway…
       
 (DIR) Post #AT2w2jyxYkd5NDpqPA by tedted@hachyderm.io
       2023-02-25T18:12:16Z
       
       1 likes, 0 repeats
       
        @filippo @sophieschmieg Under certain adversarial models, taking into account correlations between multiple data points degrades formal privacy guarantees, though it does so gracefully: you can still quantify them. But taking into account correlations between data points is typically not done, for the "simple" (but very subtle and widely misunderstood) reason that doing so typically prevents learning anything of value about the data. Longer technical explanation: https://oaklandsok.github.io/papers/tschantz2020.pdf
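        Not part of the original thread, but the "degrades gracefully" point above can be made concrete with a small sketch (hypothetical numbers; the standard Laplace mechanism and the group-privacy property of DP): a release that satisfies ε-DP for individual records still gives a quantified guarantee of kε for a group of k perfectly correlated records.

```python
import math
import random

def laplace_mechanism(true_count: float, sensitivity: float, epsilon: float) -> float:
    """Release a count with Laplace noise scaled to sensitivity / epsilon."""
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of the Laplace distribution centered at the true count.
    u = random.random() - 0.5
    return true_count - scale * math.copysign(math.log(1 - 2 * abs(u)), u)

# One person changing their record shifts a count by at most 1,
# so sensitivity = 1 makes this an epsilon-DP count release.
epsilon = 0.5
noisy = laplace_mechanism(true_count=1000, sensitivity=1, epsilon=epsilon)

# Group privacy: if k records are perfectly correlated (say, one household
# of k people who always answer identically), the exact same release still
# satisfies DP for the whole group, just with a weaker parameter.
k = 4
group_epsilon = k * epsilon  # the guarantee degrades linearly, but stays measurable
```

        The point of the sketch: correlations don't break the formal guarantee, they change which ε applies, and that ε can still be computed and reported.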
       
 (DIR) Post #ATeJhldZNWLmWHB0BU by tedted@hachyderm.io
       2023-03-15T17:05:42Z
       
       0 likes, 1 repeats
       
        Google Zürich layoff emails went out yesterday. So far I've heard about:
        - pregnant people being fired
        - people about to go on parental leave being fired
        - people from Ukraine or Russia whose visas depend on their jobs being fired, who are now at risk of having to go back to war zones or being drafted
        - people who just relocated to Switzerland from other international Google offices being fired mere weeks after arriving
       
 (DIR) Post #AUMugQZOMT9O4WTjg8 by tedted@hachyderm.io
       2023-04-06T07:25:23Z
       
       0 likes, 1 repeats
       
        This longform about IoT sensors being installed at CMU and privacy folks being unhappy about it made for a really fascinating read. There are more than a few particularly salient takeaways from this dispute… https://www.technologyreview.com/2023/04/03/1070665/cmu-university-privacy-battle-smart-building-sensors-mites/
       
 (DIR) Post #AUMugTY5Hmt3JKtEYa by tedted@hachyderm.io
       2023-04-06T07:28:49Z
       
       0 likes, 0 repeats
       
        I've worked for US companies and interacted with US folks daily for years, but parts of this story were still a culture shock to me as a European. Someone installs a sensor collecting audio in your private office, without your consent and with no off-switch, and you get in trouble when you unscrew it from the wall? You're the one who has to apologize to your colleagues? What?!
       
 (DIR) Post #AUMugWOGijom7lJwAq by tedted@hachyderm.io
       2023-04-06T07:35:08Z
       
       0 likes, 0 repeats
       
       A really insightful point about second-order privacy issues — sensors you put in places don't only impact the people who occupy these places daily, but visitors too!(Also a great example of the careful & empathetic tone privacy engineers learn to adopt to get their point across — saying "I trust you but others might not" even though you sometimes really want to say "your project is sketchy as hell and I really don't trust you")
       
 (DIR) Post #Aa4JG4jDwL1sxwJECu by tedted@hachyderm.io
       2023-09-23T12:30:39Z
       
       2 likes, 0 repeats
       
        Just updated my list of real-world deployments of differential privacy! 🚀
        Changelog:
        - Added the Wikimedia pageview release 🌐
        - Added two releases by the U.S. Census Bureau, one on the demographic side, one by the economic department 🇺🇸
        - Added details on a COVID-19 release from Google 💉
        - Various maintenance fixes 🛠️
        https://desfontain.es/privacy/real-world-differential-privacy.html
       
 (DIR) Post #AaC39zBxZFQk3TVJNg by tedted@hachyderm.io
       2023-09-27T12:35:09Z
       
       0 likes, 0 repeats
       
       so this is apparently a thing
       
 (DIR) Post #AaCLetMv4BKI99W7jE by tedted@hachyderm.io
       2023-09-27T16:35:27Z
       
       0 likes, 0 repeats
       
       @simon Could be! Hard to say really.
       
 (DIR) Post #AcnHi6volUtm6NjGb2 by tedted@hachyderm.io
       2023-12-13T08:32:35Z
       
       0 likes, 0 repeats
       
        @JonathanAldrich @wilbowma @mdekstrand The ACM stance on this is infuriating. In the previous model, money that should have stayed in the pockets of publicly-funded research groups went instead to the ACM, for no good reason: the costs of editing & publication are tiny, so the money extracted from academia financed other ACM projects instead. Changing the model from "pay to read" to "pay to publish" does not change this basic fact! A truly legitimate solution isn't "revenue neutral" for the ACM!
       
 (DIR) Post #AjS9PqoB5yPJM7smjQ by tedted@hachyderm.io
       2024-06-28T09:07:22Z
       
       0 likes, 2 repeats
       
       I've seen more than a few folks on my Mastodon timeline express enthusiasm about Glaze, a system whose goal is to protect artists against style mimicry by generative models.If you're using this work, or otherwise have heard of it and thought "wow this is neat I'm happy people are doing it", I strongly recommend you read this blog post by security & privacy researchers Nicholas Carlini & Florian Tramèr about evaluating Glaze's claims: https://spylab.ai/blog/glaze/