Post B5Pr209WSChkMZGY2C by david_chisnall@infosec.exchange
 (DIR) Post #B5PisHbNvJOpx69YnY by mjg59@nondeterministic.computer
       2026-04-18T08:14:40Z
       
       0 likes, 1 repeats
       
        Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to.
        LLMs: (enable that)
        Free software people: Oh no not like that.
       
 (DIR) Post #B5PisUQWF1QrjAAZAO by mjg59@nondeterministic.computer
       2026-04-18T08:17:49Z
       
       1 likes, 0 repeats
       
        When I write code I am turning a creative idea into a mechanical embodiment of that idea. I am not creating beauty. Every line of code I write is a copy of another line of code I've read somewhere before, lightly modified to meet my needs. My code is not intended to evoke emotion. It does not change how people think about the world. The idea→code pipeline in my head is not obviously distinguishable from the prompt→code process in an LLM.
       
 (DIR) Post #B5PishRhpv6e6bpCK0 by mjg59@nondeterministic.computer
       2026-04-18T08:20:31Z
       
       0 likes, 0 repeats
       
       Look, coders, we are not writers. There's no way to turn "increment this variable" into life changing prose. The creativity exists outside the code. It always has done and it always will do. Let it go.
       
 (DIR) Post #B5Pisv3lDkGqKOjGwS by mjg59@nondeterministic.computer
       2026-04-18T08:23:06Z
       
       1 likes, 0 repeats
       
       
       
 (DIR) Post #B5Pit7nvpuKjr4QJZg by mjg59@nondeterministic.computer
       2026-04-18T08:25:15Z
       
       0 likes, 0 repeats
       
        (Yes, ok, there are cases where code is beauty and embodies an idea that could make a grown man cry, and:
        (1) your code is not that code
        (2) you would think nothing of copying the creative aspect of that code if you needed to, don't fucking lie to me)
       
 (DIR) Post #B5PitJruyueVGuNfSC by mjg59@nondeterministic.computer
       2026-04-18T08:29:36Z
       
       0 likes, 0 repeats
       
       Personally I'm not going to literally copy code from a codebase under an incompatible license because that is what the law says, but have I read proprietary code and learned the underlying creative aspect and then written new code that embodies it? Yes! Anyone claiming otherwise is lying!
       
 (DIR) Post #B5PjyOaKyOnJIl8qxs by jenesuispasgoth@pouet.chapril.org
       2026-04-18T08:44:08Z
       
       0 likes, 0 repeats
       
        @mjg59 I agree not all code is art, and often not even craft. But unlike with optimizing compilers, we're not yet at a point where the generated code only needs to be read/modified by a handful of optimization experts, as is the case with ASM. The generated code isn't even reliably identical between two prompts.
        The AI-generated code I've seen can be quite elegant taken in isolation, but looks a lot like a Frankenstein'd behemoth when I look at "large" (beyond toy project) code bases.
       
 (DIR) Post #B5PjyPCcg3Q3DV3Qdk by mjg59@nondeterministic.computer
       2026-04-18T09:24:03Z
       
       0 likes, 0 repeats
       
        @jenesuispasgoth I mean, kind of the point of free software is that people get to modify it to their own ends, and that doesn't mean it has to be good. When I first started hacking things to meet my needs I was definitely writing stuff that couldn't be upstreamed, but it worked for me, and making it easier for others to do that is a win.
       
 (DIR) Post #B5PkscJB8aOC51vNCK by p
       2026-04-18T09:46:38.533749Z
       
       6 likes, 1 repeats
       
        @mjg59
        > There's no way to turn "increment this variable" into life changing prose.
        "There's no possibility for prose to be beautiful. There's no way to turn 'What time is it?' into life-changing prose."
        > (1) your code is not that code
        Maybe *yours* isn't.
       
 (DIR) Post #B5PmFc3oKDRDvTGVu4 by radex@social.hackerspace.pl
       2026-04-18T08:34:55Z
       
       0 likes, 0 repeats
       
        @mjg59 This doesn't feel right to me. IMO few people actually object to the use of LLMs by individuals for tinkering on personal stuff.
        The criticism as I see it is primarily that:
        1) there are huge societal/political impacts - uncompensated use of copyrighted material; the benefits accruing primarily to a few big players; energy use; layoffs; perceived misallocation of massive amounts of capital
        2) the output quality of LLMs is t r a s h, unsuitable for professional use
       
 (DIR) Post #B5PmFd7kN4BHDy7Ag4 by mjg59@nondeterministic.computer
       2026-04-18T09:04:24Z
       
       0 likes, 0 repeats
       
        @radex See, I fundamentally don't believe that code should be copyrightable, and also, 30 years ago I did not produce code that was suitable for professional use, but it fixed my problems anyway.
       
 (DIR) Post #B5PotNJoknydhFq9wW by mjg59@nondeterministic.computer
       2026-04-18T09:48:47Z
       
       0 likes, 0 repeats
       
        @p If you're doing something other than
        var++
        then you're doing something wrong. Code is instructions to a machine. The description of what that code does may be creative; if the actual implementation is, then you are almost certainly in a bad place.
       
 (DIR) Post #B5PpsSXZxA9bYaP7ey by ignaloidas@not.acu.lt
       2026-04-18T10:42:34.367Z
       
       0 likes, 0 repeats
       
        @mjg59@nondeterministic.computer my problem with this argument is that LLMs aren't good at modifying software, nor are they good at creating software that's easily modifiable.
        Also, I'd note that it's less free software people and more people who are interested in quality software, and it's that interest that has driven them to free software, because most free software is of higher quality than most companies would find economical to make or buy.
       
 (DIR) Post #B5Pr209WSChkMZGY2C by david_chisnall@infosec.exchange
       2026-04-18T10:47:50Z
       
       1 likes, 1 repeats
       
        @mjg59 I’ve heard this argument before and I disagree with it. My goal for Free Software is to enable users, but that requires users to have agency. Users being able to modify code to do what they want? Great! Users being given a black box that will modify their code in a way that might do what they want but will fail in unpredictable ways, without giving them any mechanism to build a mental model of those failure modes? Terrible!
        I am not a carpenter but I have an electric screwdriver. It’s great. It lets me turn screws with much less effort than a manual one. There are a bunch of places where it doesn’t work, but that’s fine: I can understand those and use the harder-to-use tool in places where it won’t work. I can build a mental model of when not to use it, why it doesn’t work, and how it will fail. I love building the software equivalent of this, things that let end users change code in ways I didn’t anticipate.
        But LLM coding is not like this. It’s like a nail gun that has a 1% chance of firing backwards. 99% of the time, it’s much easier than using a hammer. 1% of the time you lose an eye. And you have no way of knowing which it will be. The same prompt, given to the same model, two days in a row, may give you a program that does what you want one time and a program that looks like it does what you want but silently corrupts your data the next time. That’s not empowering users, that’s removing agency from users. Tools that empower users are ones that make it easy for users to build a (nicely abstracted, ignoring details that are irrelevant to them) mental model of how the system works, and therefore the ability to change it in precise ways. Tools that remove agency from users take away their ability to reason about how systems work and how to effect precise change.
        I have zero interest in enabling tools that remove agency from users.
       
 (DIR) Post #B5PxfVLN1GYm7S1l5s by raymaccarthy@mastodon.ie
       2026-04-18T12:09:53Z
       
       1 likes, 0 repeats
       
        @mjg59 @p You don't understand IP/copyright, or maybe even actual programming, of which actual coding or editing code should be a minority of the effort. You're simply promoting theft for the sake of convenience. The US and Chinese companies are simply ignoring the law in their training.
       
 (DIR) Post #B5Q8a9jL36sP8NtGqm by ignaloidas@not.acu.lt
       2026-04-18T14:12:09.781Z
       
       0 likes, 0 repeats
       
        @mnl@hachyderm.io @newhinton@troet.cafe @david_chisnall@infosec.exchange @mjg59@nondeterministic.computer the difference is that you can gain trust that some author knows his stuff in a specific field, and you no longer need to cross-check every single thing that they write.
        With an LLM no such trust can be developed, because fundamentally it's just rolling dice out of a modeled distribution; the fact that the LLM was right about something 9 previous times has no influence on whether the next statement will be correct or wrong.
        It's these trust relationships that allow us to work efficiently - cross-checking everything every time is incredibly time consuming.
       
 (DIR) Post #B5Q9n52PuF7CIfXayW by kyle@mastodon.kylerank.in
       2026-04-18T14:25:41Z
       
       0 likes, 0 repeats
       
        @mjg59 You will get backlash, but you are right. Free software folks will have to decide whether what they really wanted was for *everyone* to have the freedom to use and modify software, or only for that subset of everyone who had the privilege of learning software development.
        There has always been this elitist dividing line in the community between people who contribute code and people who contribute all the other things FOSS needs to thrive. Now those people can contribute code too.
       
 (DIR) Post #B5QAPuynEl0YOkMhdo by ignaloidas@not.acu.lt
       2026-04-18T14:32:43.637Z
       
       0 likes, 0 repeats
       
        @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe the training objective is not "be correct", so that's not what the models are trained on. They aren't trained on such an objective because there's no way to score it - if you had a system that could determine whether a statement was correct, then you could just use that. No, what the models are trained on are globs of existing text, targeting the continuations to be the same as the text. Notably, most (all?) LLM makers don't even care whether most of the text is "correct" (in any sense of the word), and "solve" it by training on some more carefully selected globs of text. And in the end, what the model itself outputs are probabilities of a specific token (not even a sentence or something) being next. The text you get is all just dice rolls on those probabilities, again and again.
        It is a text prediction machine. A very powerful one, but it's just prediction. It just picks whatever is likely, with no regard for what is correct.
       
 (DIR) Post #B5QBv9kqSBot9ZIh0K by ignaloidas@not.acu.lt
       2026-04-18T14:49:34.727Z
       
       0 likes, 0 repeats
       
        @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe all of that training is still continuation based, because that is what the models predict. Yes, there is a bunch of research, and honestly, most of it is banging its head against fundamental issues of the model, but it is still being funded because LLMs are, at the end of it all, quite useless if they just spit nonsense from time to time and it's indistinguishable from sensible stuff without carefully cross-checking it all.
        Tool calls are just that - tools to add stuff into the context for further prediction - but they in no way do anything to make sure that the LLM output is correct, because once again, everything is treated as a continuation after the tool call, and it's just predicting what's the most likely thing to do, not what's the correct thing to do.
       
 (DIR) Post #B5QDE64AUUstAcvynQ by ignaloidas@not.acu.lt
       2026-04-18T15:04:12.251Z
       
       0 likes, 0 repeats
       
       @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe Not blindly, of course, but I build up trust relationships with people I work with. And I do trust my own code to a certain extent. I can't trust a bunch of dice. The fact that you don't trust your own code at all honestly tells me all I ever need to know about you.
       
 (DIR) Post #B5QFsVkScmDUJW7BMO by zachdecook@social.librem.one
       2026-04-18T15:33:52Z
       
       0 likes, 0 repeats
       
       @kyle @mjg59 Proprietary tooling is the reason "Stallman was right" about Bitkeeper, but "everyone was better off for having not listened to him" is the pragmatic side.Yes, I want people to benefit from the freedom to modify code, but they will never truly be free if they are using a proprietary LLM to make their modifications.
       
 (DIR) Post #B5QI6ZMKCF71TUemAK by ignaloidas@not.acu.lt
       2026-04-18T15:58:52.277Z
       
       0 likes, 0 repeats
       
        @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe through repeated checks and the knowledge that humans are consistent.
        And like, really, you don't trust your code at all? I, for example, know that the code I wrote is not going to cheat on unit tests, not going to re-implement half of the things from scratch when I'm working on a small feature, nor will it randomly delete files. After working with people for a while, I can be fairly sure that the code they've written can be trusted to the same standards. LLMs can't be trusted with these things, and in fact have been documented to do all of them.
        It is not a blind, absolute trust, but trust within reason. The fact that I have to explain this to you is honestly embarrassing.
       
 (DIR) Post #B5QIEmxpUuYFzD7Pn6 by condret@shitposter.world
       2026-04-18T16:00:24.434528Z
       
       0 likes, 1 repeats
       
       @mjg59 i actually like LLMs
       
 (DIR) Post #B5QIL8oDBwxxTTeIV6 by ignaloidas@not.acu.lt
       2026-04-18T16:01:30.227Z
       
       0 likes, 0 repeats
       
        @mnl@hachyderm.io @engideer@tech.lgbt @david_chisnall@infosec.exchange @mjg59@nondeterministic.computer LLMs are very much random number generators. The distribution is far, far from uniform, but the whole breakthrough of LLMs was the introduction of "temperature", quite literally random choices, to break them out of monotonous tendencies.
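The "temperature" knob described in this post can be sketched in a few lines. This is a toy illustration, not any real model's sampler: scale the raw scores (logits) by 1/T before the softmax, then roll dice on the resulting distribution. As T approaches 0 sampling becomes greedy and monotonous; larger T flattens the distribution.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Sample a token index from raw logits, softmax-scaled by temperature."""
    if temperature <= 0:
        # Degenerate case: greedy decoding, always the most likely token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Divide logits by T, then softmax (subtract max for numerical stability).
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # The "quite literally random choice": a dice roll on the distribution.
    return rng.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.1]  # toy scores for a three-token vocabulary
print(sample_with_temperature(logits, 0))    # always token 0 (greedy)
print(sample_with_temperature(logits, 5.0))  # near-uniform: any of 0, 1, 2
```

Higher temperature makes the three weights nearly equal, which is exactly the mechanism for breaking out of repetitive, always-pick-the-argmax output.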
       
 (DIR) Post #B5QIdr01dBfRk266xk by condret@shitposter.world
       2026-04-18T16:04:56.152409Z
       
       1 likes, 1 repeats
       
        @radex @mjg59 2) is not true. glm-5 produces actually good code most of the time. Sure, you need to make a few adjustments here and there, but it isn't trash.
       
 (DIR) Post #B5QIl23ytWF85AN3PU by ignaloidas@not.acu.lt
       2026-04-18T16:06:11.118Z
       
       0 likes, 0 repeats
       
        @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe you are falling into the cryptocurrency fallacy, assuming that you cannot trust anyone and as such have to build everything assuming everyone is looking to get one over on you.
        This is tiresome, and I do not care to discuss this with you any longer; if you cannot understand that there are levels between "no trust" and "absolute trust", there is nothing more to discuss.
       
 (DIR) Post #B5QJ3F8z024lrcOUS0 by toiletpaper@shitposter.world
       2026-04-18T16:09:31.060169Z
       
       0 likes, 1 repeats
       
        @condret @radex @mjg59 The most useful pattern for using AI code assistance, in my experience, is test-driven development. As long as you make sure the tests are robust and have good coverage, the rest is pretty much hands-free. That's not all there is to it, but it's the biggest bang for the buck IME.
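The pattern described in this post boils down to: a human writes the tests first, and a generated implementation is only accepted once it passes them. A minimal sketch (the function name `slugify` and its spec are invented for illustration):

```python
import re

# Tests written by hand, before any code is generated. They pin down the
# behaviour we actually want, so a generated implementation can't drift.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"
    assert slugify("") == ""

# Candidate implementation (whether hand-written or LLM-generated, it only
# gets merged once the tests above pass).
def slugify(text: str) -> str:
    """Lowercase, strip punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

test_slugify()
print("all tests passed")
```

The robustness caveat in the post matters: tests this small only constrain the cases they name, so coverage of edge cases is what makes the "hands-free" part safe.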
       
 (DIR) Post #B5QJMgUFYy4sAWbzOa by ignaloidas@not.acu.lt
       2026-04-18T16:12:59.327Z
       
       1 likes, 0 repeats
       
        @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @engideer@tech.lgbt the fact that something is random does not mean that it has a uniform distribution. "Controlled randomness" is still randomness. Taking random points in a unit circle by taking two random numbers for distance and direction will not result in a uniform distribution, but it's still random.
        Like, do you even read what you're writing? I'm starting to understand why you don't trust the code you wrote.
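The unit-circle example in this post is easy to check numerically (a quick sketch, not from the thread): sampling the radius uniformly over-concentrates points near the centre, because the area at radius r grows like r²; taking r = sqrt(u) compensates.

```python
import math
import random

def naive_point(rng):
    """Uniform angle, uniform radius: random, but NOT uniform over the disc."""
    r = rng.random()
    theta = rng.random() * 2 * math.pi
    return r * math.cos(theta), r * math.sin(theta)

def uniform_point(rng):
    """sqrt on the radius compensates for area growing with r^2."""
    r = math.sqrt(rng.random())
    theta = rng.random() * 2 * math.pi
    return r * math.cos(theta), r * math.sin(theta)

def frac_in_inner_half(points):
    """Fraction of points with r < 0.5; uniform sampling should give ~0.25."""
    return sum(x * x + y * y < 0.25 for (x, y) in points) / len(points)

rng = random.Random(0)
naive = [naive_point(rng) for _ in range(100_000)]
fair = [uniform_point(rng) for _ in range(100_000)]
print(frac_in_inner_half(naive))  # ~0.5: clustered toward the centre
print(frac_in_inner_half(fair))   # ~0.25: matches the area ratio
```

The inner disc of radius 0.5 holds a quarter of the circle's area, so the naive sampler putting half its points there is exactly the non-uniformity being described.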
       
 (DIR) Post #B5QXmCcWcxDeQWXpSq by p
       2026-04-18T18:54:31.733628Z
       
       2 likes, 1 repeats
       
        @mjg59 If you measure prose by looking at corporate emails and VCR manuals, then you come to the same conclusion.
        When I write prose, I'm putting my thoughts into English. When I write code, I'm telling my thoughts to a machine. Even with a little script, I'm teaching the machine how I want to talk to it. I put it in ~/bin and I've taught the computer a word. I build up my little environment where the machine and I understand each other. Dick Gabriel, the "Worse is Better" author, said:
        > I'm always delighted by the light touch and stillness of early programming languages. Not much text; a lot gets done. Old programs read like quiet conversations between a well-spoken research worker and a well-studied mechanical colleague, not as a debate with a compiler. Who'd have guessed sophistication bought such noise?
        If style and thoughts couldn't come out through the code, he wouldn't be able to say something like that. ken, when describing his compiler bug, started off talking about adding '\v' to the C compiler. First he hard-coded the numeric value for '\v': `if(c == 'v') return 11;`. Then, because the C compiler was written in C, he could write `if(c == 'v') return '\v';`. And he said "It is as close to a 'learning' program as I have ever seen." He's taught the machine. A lot of people have read the paper, but you can go read ken's code, a lot of it is out there. (You can download a CD image, mount it, and look at his code: http://9legacy.org/download.html .) You can see a style of thinking, you can see ken in his code.
        Maybe you can't see someone's personality in a four-page technical manual that comes with your refrigerator, maybe you can't see someone's personality in a webapp at your day job, but that doesn't mean it's impossible to create something beautiful.
        Here is a small program:
        echo '++++[->++++<]>[-<+++++>>+++++++>++<<]<------.>>+++++.--.+.>.<[-<+>>>+<<]>[--<++>>-<]>---.+++++++++++++.+.<<<.>>>-------.---.<<<--.>.>>+++.-------.++.++[->+<<+>]>++++++.<<.<<.>[-<<->>]<<++++.[>>>--<<<-]>>>+.' | \
          sed -E 's/(.)/\1\n/g' | \
          awk 'BEGIN{print "BEGIN{p=0;"}END{print "}"}/\./{print "printf \"%c\",a[p]"}/\+/{print "a[p]++"}/-/{print "a[p]--"}/</{print "p--"}/>/{print "p++"}/\[/{print "while(a[p]){"}/\]/{print "}"}' | \
          awk -f /dev/fd/0
        Every coder I have showed this program to in person has laughed: why did they laugh?
        p761-thompson.pdf
       
 (DIR) Post #B5QaTyAVv5V0FtRKxk by p
       2026-04-18T19:24:50.930934Z
       
       1 likes, 0 repeats
       
        @raymaccarthy @mjg59 I don't really hate LLMs per se, but they do generate this soulless "enterprisey" code as an artifact of how they're trained. The thing that rubbed me the wrong way about the series of posts was mainly that it's this call to mediocrity.
        And then he uses Roast Beef as an example; Roast Beef is severely depressed and has this pathological self-deprecation and also is not a real hacker: he's a drawing of a dog. But one of the reasons Achewood sticks with people is that Onstad is brilliant with his use of language and is pretty good at sketching personalities. There are people that say comics are not art, Ebert went to his grave insisting video games cannot be art, and this guy is saying to the reader, specifically, that some code can be art but then says "Your code sucks, it's never going to be beautiful", and he uses this guy as an example:
        2007-02-02
       
 (DIR) Post #B5QoWmxXS0cxiXbq4G by mjg59@nondeterministic.computer
       2026-04-18T09:27:43Z
       
       0 likes, 0 repeats
       
       Clearly my most unpopular thread ever, so let me add a clarification: submitting LLM generated code you don't understand to an upstream project is absolute bullshit and you should never do that. Having an LLM turn an existing codebase into something that meets your local needs? Do it. The code may be awful, it may break stuff you don't care about, and that's what all my early patches to free software looked like. It's ok to solve your problem locally.
       
 (DIR) Post #B5QoWnvnpwpijRny08 by strypey@mastodon.nzoss.nz
       2026-04-18T22:02:02Z
       
       0 likes, 0 repeats
       
        @mjg59 > Having an LLM turn an existing codebase into something that meets your local needs...
        Is a great way to introduce hallucinated library names into your code, which can be squatted by people distributing malware. Even for this, running code vomited up by a Trained MOLE is self-punching. Using its shitty, broken code as a source of serendipity, to help you find new ways to solve a problem in your own code, ok, *maybe*.
       
 (DIR) Post #B5Qs3tby2ULFlczw12 by AGARTHA_NOBLE@shortstacksran.ch
       2026-04-18T22:41:24.108899Z
       
       1 likes, 0 repeats
       
       @p @raymaccarthy @mjg59 There's nothing code can be EXCEPT art. Modern high-level languages have so many different ways to skin a cat that you need a strong dogma just to be able to complete anything more complex than a MySpace page. A lot of code preference is entirely arbitrary, but the preference itself is required; else, you become The Framework Guy.
       
 (DIR) Post #B5QxUUQAIttbEKygSm by p
       2026-04-18T23:42:39.500598Z
       
       0 likes, 0 repeats
       
       @AGARTHA_NOBLE @mjg59 @raymaccarthy I mean, it takes a lot of discipline to remove the soul from some prose; I think code's not any different.
       
 (DIR) Post #B5R3Oj5NT2MyhQtD5E by mjg59@nondeterministic.computer
       2026-04-19T00:47:02Z
       
       1 likes, 0 repeats
       
       @raymaccarthy @p I am more familiar with both than I want to be
       
 (DIR) Post #B5R6oNEzbmvKLAJArQ by bazkie@beige.party
       2026-04-18T15:46:31Z
       
       1 likes, 0 repeats
       
       @mjg59 LLMs do not enable that at all tho? an LLM enables people to make software behave as they wish similarly to a crowbar enabling people to open a door
       
 (DIR) Post #B5R6oO8IIB9x6gBL3g by mjg59@nondeterministic.computer
       2026-04-19T01:18:29Z
       
       1 likes, 0 repeats
       
       @bazkie A completely legitimate thing to do if all you care about is getting through the door
       
 (DIR) Post #B5RbLjf2GleQ3XJVzc by Pi_rat@freesoftwareextremist.com
       2026-04-19T07:09:14.247883Z
       
       1 likes, 0 repeats
       
        @mjg59 Bait or retardation, call it.
        > A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
        No, it's F-r-e-e-d-o-m; it's in the name, if you could read.
        > LLMs: (enable that)
        (Don't think so)
        > Free software people: Oh no not like that
        "Sell your soul to the word salad demon to be free(tm)(r)(c)"
       
 (DIR) Post #B5RbPbNH1K2xVwqicy by rafaelmartins@mastodon.social
       2026-04-18T17:55:55Z
       
       1 likes, 0 repeats
       
       @mjg59 years of reputation thrown away on a single thread: a masterclass
       
 (DIR) Post #B5Rd0iEuYhbpTVCJEW by mjg59@nondeterministic.computer
       2026-04-19T07:16:27Z
       
       0 likes, 0 repeats
       
       @Pi_rat "The freedom to study how the program works, and change it so it does your computing as you wish" is literally one of the FSF's four freedoms
       
 (DIR) Post #B5Rd0jMkN3TGy5s55M by Pi_rat@freesoftwareextremist.com
       2026-04-19T07:27:49.770473Z
       
       0 likes, 0 repeats
       
       @mjg59 Not a lot of freedom in LLMs