Post B36POCJgF68oPvOs40 by cthos@mastodon.cthos.dev
(DIR) Post #B36POAqvgVY5sSwObw by glyph@mastodon.social
2026-02-08T04:33:22Z
0 likes, 0 repeats
@aburka The people I'm talking to don't have that problem. They care. The problem is that their understanding of the collective effects is limited. For example: they know that Anthropic and Meta did a bunch of copyright infringement and outright theft to train the models. But that theft was books, and the authors are getting a settlement anyway, and they're not using Claude Code to *write* books, so what does that have to do with them?
(DIR) Post #B36POCJgF68oPvOs40 by cthos@mastodon.cthos.dev
2026-02-08T04:41:41Z
0 likes, 0 repeats
@glyph @aburka that mindset is absolutely wild to me. How on earth do they not think past their immediate usage?
(DIR) Post #B36PODYxblxSHbYaWG by glyph@mastodon.social
2026-02-08T04:45:05Z
0 likes, 0 repeats
@cthos @aburka they do! that's why they're reading the articles about the book theft and whatnot. they are trying to understand what the problem is. and there are just so *many* things to understand. Consider security-report slop. Big problem caused by LLMs. Someone might read about that for several hours, learn that cURL had to shut down its bounty program, etc. But they're not filing slopsec garbage, so what does that have to do with them? Why's it bad to be a little faster at work?
(DIR) Post #B36POEeJZLppeV4NVI by glyph@mastodon.social
2026-02-08T04:48:41Z
0 likes, 0 repeats
@cthos @aburka I mean, honestly, that was me for a while? I spent a big chunk of 2025 trying to find ways to experiment with LLMs, trying to figure out where the safety thresholds and the error bars were, only *gradually* and extremely reluctantly landing on my current "unsafe at any speed" feeling, which I _still_ feel is like… only 75% informed and missing a lot of context. I definitely did not put it all together immediately from first principles.
(DIR) Post #B36POFeLqhSUku5vCS by glyph@mastodon.social
2026-02-08T04:36:16Z
0 likes, 0 repeats
@aburka Again, given like… 100 more toots I *could* explain exactly what that has to do with them, but it is in fact quite complicated and has not (to my knowledge) been written up for these audiences in a non-equivocal but non-judgmental way. The problem is compounded again by the amount of *noise*; once such resources exist, it's going to take a while to get people to know that they're there.
(DIR) Post #B36POFf3o41en6QUIy by cthos@mastodon.cthos.dev
2026-02-08T04:51:14Z
0 likes, 0 repeats
@glyph @aburka I get it! I have posts on my own site trying to see if these things were actively useful before I landed on "no, this is not ethically sustainable". Hell, I gave a talk about how self-driving cars were the future, a position I no longer hold! I get it. I really do. But this is like an order of magnitude different than "smartphone awesome". It's not even good _for the individuals using it_. (Also, I know you know; I'm yes-and'ing your discussion here)
(DIR) Post #B36POGf65PeJtVS208 by glyph@mastodon.social
2026-02-08T04:52:45Z
0 likes, 0 repeats
@cthos @aburka Ok, yeah. All I'm saying here is that we have to do a better job collating and explaining our understanding, specifically for the audience that's having a fun time with LLM tech but is also aware that there are problems, people who are not professional boosters or managers issuing dumbass mandates.
(DIR) Post #B36POHPBK4VqCR0ppg by cthos@mastodon.cthos.dev
2026-02-08T04:55:56Z
0 likes, 0 repeats
@glyph @aburka In my direct personal conversations, I'm with you - I can clearly point to why this is worse than computers (technology) displacing computers (profession), but I really do not understand how one can come to the conclusion that these things are "neutral" like tech sold to us previously.
(DIR) Post #B36POIJXwVbD1FNqgi by glyph@mastodon.social
2026-02-08T04:57:41Z
0 likes, 0 repeats
@cthos @aburka if I had to summarize how people come to that conclusion, it would be: "people have a lot going on right now and there's a ton of propaganda telling them it's fine"
(DIR) Post #B36POJEGXcy9r9v960 by glyph@mastodon.social
2026-02-08T04:59:57Z
0 likes, 0 repeats
@cthos @aburka well, that, and the immediate high of the "wow, look how fast and well it did this tedious thing! gosh I bet it could do ANYTHING!" magic trick. extremely easy to fall for, and really not comparable to any other kind of tech in terms of the gap between the demo scenario and the reality. like I've never used anything that held up to so many test-case hypotheticals and then collapsed so hard under production use
(DIR) Post #B36POJyhky7GBBeETo by glyph@mastodon.social
2026-02-08T05:01:39Z
0 likes, 0 repeats
@cthos @aburka so many people have wondered "wow, is it really that good?", tried out Claude Code or Cursor, and now they feel like they have *done* the experiment, and there's tremendous upside, and they understand the results. Anecdotally, if you fall into this rabbit hole and buy in, it takes like 1-2 _years_ to hit the reality of its defects. Maybe longer on average, and many people are still stuck in the middle of it, but that seems to be the average for the mea-culpa "I don't use AI any more" confessional
(DIR) Post #B36POKFijhj91xbp0C by glyph@mastodon.social
2026-02-08T04:40:38Z
0 likes, 0 repeats
@aburka This audience, who is really *open* to being convinced, is also getting yelled at a lot in this sort of context, and not really understanding the yelling, in exactly this way. i.e.: "I made a web app to help with a particular quirk of my wedding planning that no existing sites can accommodate, and now I'm getting called a fascist because… what… the tool I used is controlled by a billionaire? Grow up, _everything_ in our society is controlled by billionaires!"
(DIR) Post #B36POKeBGlIEFp3M80 by glyph@mastodon.social
2026-02-08T05:03:17Z
0 likes, 0 repeats
@cthos @aburka I've written about this before re: the relatively principled subset of tech leadership, _usually_ if you run some bit of tech through its paces, construct a bunch of ship-in-a-bottle scenarios, run some benchmarks, you can figure out what its properties are and see how it fails in a lab environment. LLMs seem to actively resist that sort of interrogation
(DIR) Post #B36POLK0lEkmLYclKS by glyph@mastodon.social
2026-02-08T05:04:38Z
0 likes, 0 repeats
@cthos @aburka in fact my _own_ understanding is significantly informed by observing over the course of the last year, "huh it seems like there's a huge gap between the subjective experience of users and the empirical long-term measurements of efficacy here" and eventually landing on "I regrettably cannot trust anyone's lived experience with LLMs" which is not a comfortable place to be with respect to the rest of my politics.
(DIR) Post #B36POLn51A0PniDydc by cthos@mastodon.cthos.dev
2026-02-08T05:06:19Z
1 like, 0 repeats
@glyph @aburka Humans interact with language. Things emulating language short circuit the brain in such a way that we tend to ascribe all sorts of humanness to the not human thing. Which is to say: Plausible text generation "breaks" people's brains.
(DIR) Post #B36POOV6wQPmCqqAXw by glyph@mastodon.social
2026-02-08T04:42:14Z
0 likes, 0 repeats
@aburka If you're used to arguing with boosters, it's very easy to miss a pretty big audience of people who substantively agree with you politically, *are* pretty informed, do care about these issues but are not themselves hip-deep in understanding the systemic morass that it touches in every direction