Post AvtTLx3pZP0gdgkCX2 by bleepbloop@ravenation.club
(DIR) Post #AvtTLubkgpMT21akUK by sue@glasgow.social
2025-07-06T18:18:27Z
0 likes, 0 repeats
Lots of folk on Bluesky are dunking on a story about someone who used LLM tools to generate an app, then gave up on it because they couldn't figure out how to fix bugs in it, which, ok, have fun I guess. What's interesting to me is that these tools will inevitably see a drop-off in engagement for this reason, and I don't see much evidence of the platforms planning for the point at which folk need to actually learn something about the code they've generated.
(DIR) Post #AvtTLvzBZBhTIzYyeW by sue@glasgow.social
2025-07-06T18:24:29Z
0 likes, 0 repeats
However you feel about LLMs, a ton of software is being unleashed on the web that its creators don't understand. We're headed for a situation where even fewer people understand the systems that affect them. That's why I believe we need better learning paths to help folk explore and understand apps they didn't code. After all, that's what happens when you join a company and work on an existing application. It's a decent starting point for learning to code.
(DIR) Post #AvtTLx3pZP0gdgkCX2 by bleepbloop@ravenation.club
2025-07-06T18:30:50Z
0 likes, 0 repeats
@sue I'm a developer of some 25 years. I use LLMs when I'm learning something new, in the way I'd use an endlessly patient co-worker who also knows how something works. But then I know what questions to ask and how to ask them. I know to say "don't tell me the answer, I want to work it out; let's just talk about what this feature is and what it does first." Then, when the code works, I explain what I did back to the LLM to test whether I understand it, and I continue asking questions.
(DIR) Post #AvtTLxzc6ZENWtmLb6 by SuperDicq@minidisc.tokyo
2025-07-07T15:10:23.326Z
1 likes, 0 repeats
@bleepbloop@ravenation.club @sue@glasgow.social While this sounds sensible, the only problem with this approach is that when you query an LLM for an explanation of a specific feature of a programming language, it will most likely just start making shit up, and there's no way to tell whether the information is accurate without checking the documentation yourself (which you could have been reading in the first place).
(DIR) Post #AvtU71xNBXGNQxAlE0 by bleepbloop@ravenation.club
2025-07-07T15:17:04Z
0 likes, 0 repeats
@SuperDicq @sue Yes, this is true. However, I mostly know what the answer is before I start asking the question - let's say I wanted to write an IIFE & set up a MutationObserver. I am using the LLM to see what knowledge it has of the subject, and I will always confirm with follow-up questions afterwards. And this is where reading a page of static text falls short (for me). I ask questions with as much prep as I think is necessary to hasten the LLM process for me - as I would with a human.
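
A minimal sketch of the IIFE-plus-MutationObserver pattern mentioned above, assuming a browser DOM context; the watched element and the logging behaviour are illustrative assumptions, not something specified in the thread:

    // Illustrative sketch: an IIFE that sets up a MutationObserver.
    // Watching document.body and logging changes are assumptions for the example.
    (() => {
      const target = document.body;

      const observer = new MutationObserver((mutations: MutationRecord[]) => {
        for (const mutation of mutations) {
          if (mutation.type === "childList") {
            // Nodes were added or removed under the observed subtree.
            console.log("children changed:", mutation.addedNodes.length, "added");
          } else if (mutation.type === "attributes") {
            // An attribute changed on an observed element.
            console.log("attribute changed:", mutation.attributeName);
          }
        }
      });

      observer.observe(target, { childList: true, attributes: true, subtree: true });

      // The IIFE keeps `target` and `observer` out of the global scope.
    })();
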
(DIR) Post #AvtU735YyZPOwe0od6 by SuperDicq@minidisc.tokyo
2025-07-07T15:18:59.650Z
0 likes, 0 repeats
@bleepbloop@ravenation.club @sue@glasgow.social "However I mostly know what the answer is before I start asking the question" - if we're already at this point, I don't see how using an LLM will actually increase productivity anymore.