[HN Gopher] Developing our position on AI
___________________________________________________________________
Developing our position on AI
Author : jakelazaroff
Score : 245 points
Date : 2025-07-23 19:34 UTC (3 days ago)
(HTM) web link (www.recurse.com)
(TXT) w3m dump (www.recurse.com)
| nicholasjbs wrote:
| (Author here.)
|
| This was a really fascinating project to work on because of the
| breadth of experiences and perspectives people have on LLMs, even
| when those people all otherwise have a lot in common (in this
| case, experienced programmers, all Recurse Center alums, all
| professional programmers in some capacity, almost all in the US,
| etc). I can't think of another area in programming where opinions
| differ this much.
| itwasntandy wrote:
| Thank you Nick.
|
| As a recurse alum (s14 batch 2) I loved reading this. I loved
| my time at recurse and learned lots. This highlight from the
| post really resonates:
|
| " Real growth happens at the boundary of what you can do and
| what you can almost do. Used well, LLMs can help you more
| quickly find or even expand your edge, but they risk creating a
| gap between the edge of what you can produce and what you can
| understand.
|
| RC is a place for rigor. You should strive to be more rigorous,
| not less, when using AI-powered tools to learn, though exactly
| what you need to be rigorous about is likely different when
| using them."
| vouaobrasil wrote:
| > RC is a place for rigor. You should strive to be more rigorous,
| not less, when using AI-powered tools to learn, though exactly
| what you need to be rigorous about is likely different when using
| them.
|
| This brings up an important point about a LOT of tools, which
| many people don't talk about: namely, with a tool as powerful
| as AI, there will always be a minority of people with a healthy
| and thoughtful attitude towards its use, but a majority who use
| it improperly, because its power is too seductive and human
| beings on average are lazy.
|
| Therefore, even if you "strive to be more rigorous", you WILL
| be in a minority helping to drive a technology that is just too
| powerful to make any positive impact on the majority. The
| majority will suffer, because they need an environment where
| they are forced not to cheat in order to learn and gain basic
| competence, which I'd argue is far more crucial to a society
| than the top few having a lot of competence.
|
| The individualistic will say that this is an inevitable price for
| freedom, but in practice, I think it's misguided. Universities,
| for example, NEED to monitor the exam room, because otherwise
| cheating would be rampant, even if there is a decent minority of
| students who would NOT cheat, simply because they want to
| maximize their learning.
|
| With such powerful tools as AI, we need to think beyond our
| individualistic tendencies. The disciplined will often tout
| their balanced philosophy as justification for their tool use,
| as this Recurse post does here, but what they forget is that
| promoting such a philosophy lends more legitimacy to the use of
| AI, which the world at large is not capable of handling.
|
| In a fragile world, we must take responsibility beyond ourselves,
| and not promote dangerous tools even if a minority can use them
| properly.
|
| This is why I am 100% against AI - no compromise.
| ctoth wrote:
| Wait, you're literally advocating for handicapping everyone
| because some people can't handle the tools as well as others.
|
| "The disciplined minority can use AI well, but the lazy
| majority can't, so nobody gets to use it" I feel like I read
| this somewhere. Maybe a short story?
|
| Should we ban calculators because some students become
| dependent on them? Ban the internet because people use it to
| watch cat videos instead of learning?
|
| You've dressed up "hold everyone back to protect the
| incompetent" as social responsibility.
|
| I never actually thought I would find someone who read Harrison
| Bergeron and said "you know what? let's do that!" But the
| Internet truly is a vast and terrifying place.
| vouaobrasil wrote:
| A rather shallow reply, because I never implied that there
| should be enforced equality. For some reason, I constantly get
| these sorts of "false dichotomy" replies here, where the
| dichotomy is strongly exaggerated. Maybe it's due to the
| computer scientist's constant use of binary, who knows.
|
| Regardless, I only advocate for restricting technologies that
| are too dangerous, much in the same way that atomic weapons are
| highly restricted but people can still own knives and even use
| guns in some circumstances.
|
| I have nothing against the most intelligent using their
| intelligence wisely and doing more than the less intelligent,
| if wise use is even possible. In the case of AI, I submit that
| it is not.
| ctoth wrote:
| Who decides what technologies are too dangerous? You,
| apparently.
|
| AI isn't nukes - anyone can train a model at home. There's
| no centralized thing to restrict. So what's your actual
| ask? That nobody ever trains a model? That we collectively
| pretend transformers don't exist?
|
| You're dressing up bog-standard tech panic as social
| responsibility. Same reaction to every new technology:
| "This tool might be misused so nobody should have it."
|
| If you can't see the connection between that and Harrison
| Bergeron's "some people excel so we must handicap
| everyone," then you've missed Vonnegut's entire point.
| You're not protecting the weak - you're enforcing
| mediocrity and calling it virtue.
| vouaobrasil wrote:
| > Who decides what technologies are too dangerous? You,
| apparently.
|
| Again, a rather knee-jerk reply. I am opening up the
| discussion, and putting out my opinion. I never said I
| should be God and arbiter, but I do think people in
| general should have a discussion about it, and general
| discussion starts with opinion.
|
| > AI isn't nukes - anyone can train a model at home.
| There's no centralized thing to restrict. So what's your
| actual ask? That nobody ever trains a model? That we
| collectively pretend transformers don't exist?
|
| It should be something to consider. We could stop it by
| spreading a social taboo around it, denigrating its use, etc.
| It's possible. Many non-techies already hate AI, and mob force
| is not out of the question.
|
| > You're dressing up bog-standard tech panic as social
| responsibility. Same reaction to every new technology:
| "This tool might be misused so nobody should have it."
|
| I don't personally have that reaction to every new technology.
| But I think we should ask the question of every new technology,
| and especially ones that are already disrupting the labor
| market.
|
| > If you can't see the connection between that and
| Harrison Bergeron's "some people excel so we must
| handicap everyone," then you've missed Vonnegut's entire
| point. You're not protecting the weak - you're enforcing
| mediocrity and calling it virtue.
|
| What people call excellent and mediocre these days is often
| just the capacity to be economically over-ruthless, rather
| than to contribute any good to society. We already have a
| wealth of ways for people to excel, even if we eradicated AI,
| so intelligent individuals would face no real limit on
| excellence without it. Your argument really doesn't hold.
|
| Edit: my goal isn't to protect the weak. I'd rather have
| everyone protected, including the very intelligent who still
| want a place to use their intelligence on their own and not be
| forced to use AI to keep up.
| ben_w wrote:
| > Who decides what technologies are too dangerous? You,
| apparently.
|
| I see takes like this from time to time about everything.
|
| They didn't say that.
|
| As with all similar cases, they're allowed to advocate for
| anything being dangerous, and you're allowed to say it isn't;
| the people who decide are _all of us collectively_, and when
| we're at our best we do so on the basis of the actual
| arguments.
|
| > AI isn't nukes - anyone can train a model at home.
|
| (1) They were using an extreme to illustrate the point.
|
| (2) Anyone can make a lot of things at home. I know two
| distinct ways to make a chemical weapon using only things
| I can find in a normal kitchen. That people can do a
| thing at home doesn't make the thing "not prohibited".
| binary132 wrote:
| Hyphenatic phrasing detected. Deploying LLM snoopers.
| usernamed7 wrote:
| Why are you putting down a well-reasoned reply as being
| shallow? Isn't that... shallow? Is it because you don't want
| people to disagree with you or point out flaws in your
| arguments? Because you seem to take an absolutist black/white
| approach and disregard any sense of nuance.
| vouaobrasil wrote:
| I do want people to argue or point out flaws. But
| presenting a false dichotomy is not a well-reasoned
| reply.
| pyman wrote:
| > even if a minority can use them properly.
|
| Most students today are AI fluent. Most teachers aren't.
| Students treat AI like Google Search, StackOverflow,
| GitHub, and every other dev tool.
| mmcclure wrote:
| _Some_ students treat AI like those things. Others are
| effectively a meat proxy for AI. Both ends of the
| spectrum would call themselves "AI fluent."
|
| I don't think the existence of the latter should mean we
| restrict access to AI for everyone, but I also don't
| think it's helpful to pretend AI is just this
| generation's TI-83.
| Karrot_Kream wrote:
| The rebuttal is very simple. I'll try and make it a bit
| less emotionally charged and clear even if your original
| opinion did not appear to me to go through the same
| process:
|
| "While some may use the tool irresponsibly, others will
| not, and therefore there's no need to restrict the tool.
| Society shouldn't handicap the majority to accommodate
| the minority."
|
| You can choose not to engage with this critique, but calling
| it a "false dichotomy" is poor form. If anything, it makes me
| feel like you're not willing to entertain disagreement. You
| state that you want to start a discussion by expressing your
| opinion, but I don't see a discussion here. I observe you
| expressing your opinion and dismissing criticism of that
| opinion as false.
| collingreen wrote:
| I don't have a dog in this fight but I think the counter
| argument was a terrible straw man. Op said it's too
| dangerous to put in general hands. Treating that like
| "protect the incompetent from themselves and punish
| everyone in the process" is badly twisting the point. A
| closer oversimplification is "protect the public from the
| incompetents".
|
| In my mind a direct, good faith rebuttal would address
| the actual points - either disagree that the worst usage
| would lead to harm of the public or make a point (like
| the op tees up) that risking the public is one of worthy
| tradeoffs of freedom.
| tptacek wrote:
| The original post concluded with the sentence "This is
| why I am 100% against AI - no compromise." Not "AI is too
| dangerous for general hands".
| vouaobrasil wrote:
| My arguments are nuanced, but there's nothing saying a
| final position has to be. Nuanced arguments can lead to a
| true unilateral position.
| jononor wrote:
| Why is "AI" (current LLM based systems) a danger on the
| level comparable to nukes? Not saying that it is not, just
| would like to understand your reasoning.
| vouaobrasil wrote:
| Second reply to your expanded comment: I think some
| technologies are just versions of the prisoner's dilemma, where
| no one is really better off with the technology. One must
| decide on a case-by-case basis, similar to how the Amish decide
| what is best for their society.
|
| Again, even your expanded reply shrieks with false dichotomy.
| I never said ban every possible technology, only ones that
| are sufficiently dangerous.
| atq2119 wrote:
| > Wait, you're literally advocating for handicapping everyone
| because some people can't handle the tools as well as others.
|
| No, they're arguing on the grounds that the tools are
| detrimental to the overwhelming majority in a way that also
| ends up being detrimental to the disciplined minority!
|
| I'm not sure I agree, but either way you aren't properly
| engaging their actual argument.
| jononor wrote:
| I agree with your reasoning. But the conclusion seems to be
| throwing the baby out with the bathwater?
|
| The same line of thought can be used for any (new) tool, say a
| calculator, a computer, or the internet. Shouldn't we try to
| find responsible ways of adopting LLMs that empower the
| majority?
| vouaobrasil wrote:
| > The same line of thought can be used for any (new) tool,
| say a calculator, a computer or the internet.
|
| Yes, the same line of thought can. But we must also take power
| into account. The severity of a technology's negative effects
| is proportional to its power, and a calculator is relatively
| weak.
|
| > Shouldn't we try to find responsible ways of adopting LLMs,
| that empower the majority?
|
| Not if there is no responsible way to adopt them, because they
| are fundamentally against a happy existence by their very
| nature. Not all technology empowers, even when used completely
| fairly. Some technology approaches a pure arms-race scenario,
| especially when its effect is mainly economic efficiency
| without true life improvement, at least for the majority.
|
| Of course, one can point to some benefits of LLMs, but my
| thesis is that the benefit/cost ratio approaches zero and thus
| crosses the point of diminishing returns, giving us only a net
| negative in all possible worlds where the basic assumptions of
| human nature hold.
| thedevilslawyer wrote:
| Wait till you learn that a minority of prettier people end up
| having easier lives than the 90% majority. What will you
| recommend then, I wonder?
| vouaobrasil wrote:
| I won't recommend anything. Every situation is different, and
| you are rudely transposing my argument into another one without
| much real thought, which is a shame. For instance, one thing
| you are ignoring is that we are evolutionarily geared to handle
| situations of varying beauty.
|
| I could point out many more differences between the two
| situations but I won't because your lack of any intellectual
| effort doesn't even deserve a reply.
| thedevilslawyer wrote:
| Sure, I guess we bow before your explosive arguments of
| intellectual devastation.
| entaloneralie wrote:
| I feel like John Holt, author of Unschooling, who is quoted
| numerous times in the article, would not be too keen on seeing
| his name in a post that legitimizes a technology that uses
| inevitabilism to insert itself into all domains of life.
|
| --
|
| "Technology Review," the magazine of MIT, ran a short article in
| January called "Housebreaking the Software" by Robert Cowen,
| science editor of the "Christian Science Monitor," in which he
| very sensibly said: "The general-purpose home computer for the
| average user has not yet arrived.
|
| Neither the software nor the information services accessible via
| telephone are yet good enough to justify such a purchase unless
| there is a specialized need. Thus, if you have the cash for a
| home computer but no clear need for one yet, you would be better
| advised to put it in liquid investment for two or three more
| years." But in the next paragraph he says "Those who would stand
| aside from this revolution will, by this decade's end, find
| themselves as much of an anachronism as those who yearn for the
| good old one-horse shay." This is mostly just hot air.
|
| What does it mean to be an anachronism? Am I one because I don't
| own a car or a TV? Is something bad supposed to happen to me
| because of that? What about the horse and buggy Amish? They are,
| as a group, the most successful farmers in the country,
| everywhere buying up farms that up-to-date high-tech farmers have
| had to sell because they couldn't pay the interest on the money
| they had to borrow to buy the fancy equipment.
|
| Perhaps what Mr. Cowen is trying to say is that if I don't learn
| how to run the computers of 1982, I won't be able later, even if
| I want to, to learn to run the computers of 1990. Nonsense!
| Knowing how to run a 1982 computer will have little or nothing to
| do with knowing how to run a 1990 computer. And what about the
| children now being born and yet to be born? When they get old
| enough, they will, if they feel like it, learn to run the
| computers of the 1990s.
|
| Well, if they can, then if I want to, I can. From being mostly
| meaningless, or, where meaningful, mostly wrong, these very
| typical words by Mr. Cowen are in method and intent exactly like
| all those ads that tell us that if we don't buy this deodorant or
| detergent or gadget or whatever, everyone else, even our friends,
| will despise, mock, and shun us -- the advertising industry's
| attack on the fragile self-esteem of millions of people. This
| using of people's fear to sell them things is destructive and
| morally disgusting.
|
| The fact that the computer industry and its salesmen and prophets
| have taken this approach is the best reason in the world for
| being very skeptical of anything they say. Clever they may be,
| but they are mostly not to be trusted. What they want above all
| is not to make a better world, but to join the big list of
| computer millionaires.
|
| A computer is, after all, not a revolution or a way of life but a
| tool, like a pen or wrench or typewriter or car. A good reason
| for buying and using a tool is that with it we can do something
| that we want or need to do better than we used to do it. A bad
| reason for buying a tool is just to have it, in which case it
| becomes, not a tool, but a toy.
|
| From "On Computers", Growing Without Schooling #29, September
| 1982, by John Holt.
| nicholasjbs wrote:
| I don't agree with your characterization of my post, but I do
| appreciate your sharing this piece (and the fun flashback to
| old, oversized issues of GWS). Thanks for sharing it! Such a
| tragedy that Holt died shortly after he wrote that; I would
| have loved to hear what he thought of the last few decades of
| computing.
| entaloneralie wrote:
| Same. After reading your post, I went down a rabbit hole of
| reading all sorts of guest articles he wrote here and there,
| and it really made me wonder what he'd think of all this. I
| feel like his views on technology changed over his lifetime. He
| got more... I dunno, cynical over time?
| viccis wrote:
| >author of Unschooling
|
| You say this like it should give him more credibility. He
| created a homeschooling methodology that scores well below
| structured homeschooling in academic evaluations. And that's
| generously assuming it's being practiced in earnest, rather
| than the way I've seen people do it (effectively just child
| neglect with high-minded justification).
|
| I have absolutely no doubt that a quack like John Holt would
| love AI as a virtual babysitter for children.
| JSR_FDED wrote:
| The e-bike analogy in the article is a good one. Paraphrasing:
| Use it if you want to cover distance with low effort. But if your
| goal is fitness then the e-bike is not the way to go.
| viccis wrote:
| It is a good one. I'm going to keep it in my pocket for future
| discussions about AI in education, as I might have some say in
| how a local college builds policy around AI use. My attitude
| has always been that it should be proscribed in any situation
| in which the course is teaching what the AI is doing (Freshman
| writing courses, intro to programming courses, etc.) and that
| it should be used as little as possible for later courses in
| which it isn't as clearly "cheating". My rationale is that, for
| both examples of writing and coding, one of the most useful
| aspects of a four year degree is that you gain a lot from
| constantly exercising these rudimentary skills.
| layer8 wrote:
| The analogy doesn't work too well, in my opinion. An e-bike can
| basically get you with low effort anywhere a regular bike can.
| The same is not true for AI vs. non-AI, in its current state.
| AI is limited in which goals you can reach with it with low
| effort, and using AI will steer you towards those goals if you
| don't want to expend much effort. There's a quality gradient
| with AI dependent on how much extra effort you want to spend,
| that isn't there in the e-bike analogy of getting from A to B.
| tokioyoyo wrote:
| But there's also something in between, an e-assisted bike,
| which covers a lot of distance, but you still have to put some
| extra effort to it. And helps a bit with fitness so. That's how
| I would categorize AI-assisted coding right now.
| ben-schaaf wrote:
| That's what an e-bike is. If the motor is doing all of the
| work, it's called a motorcycle.
| lazyasciiart wrote:
| There are some that can switch now: pedal and it will
| e-assist you, or just hold the lever and it will run
| without pedaling.
| tonyedgecombe wrote:
| >But if your goal is fitness then the e-bike is not the way to
| go.
|
| If the e-bike is an alternative to a road bike, then yes. I'd
| argue that is almost never the case. The people I've spoken to
| are using them as an alternative to driving, which is clearly
| beneficial to their fitness.
| audinobs wrote:
| To go with fitness analogies, I think it is like when lifting
| weights was something new but the old guard thought it would
| make you slow for sports.
|
| A ridiculous sentimental idea based on limited observation and
| bias against change that won't age well.
| karussell wrote:
| It is a good analogy, also in the sense that some areas are
| not reachable without an e-bike, and that you'll need to be
| prepared differently, as you have to plan for charging, the
| bigger weight, etc.
| __mharrison__ wrote:
| Bad analogy.
|
| I ride about twice as much distance (mountain biking) after I
| got an ebike (per Strava). It's still a great workout.
|
| Sample size one disclaimer...
|
| A better biking analogy that I've used in the past: if I
| wanted to go ride slickrock and had never ridden before, an
| ebike is not going to prevent me from endoing.
| Karrot_Kream wrote:
| (Full disclosure: I have a lot of respect for RC and have thought
| about applying to attend myself. This will color my opinion.)
|
| I really enjoyed this article. The numerous anecdotes from
| folks at RC were great. In particular, thanks for sharing this
| video of voice coding [1].
|
| This line in particular stood out to me, and it matches how I
| think about LLMs myself:
|
| "One particularly enthusiastic user of LLMs described having two
| modes: "shipping mode" and "learning mode," with the former
| relying heavily on models and the latter involving no LLMs, at
| least for code generation."
|
| Sometimes when I use Claude Code, I either put it in Plan Mode
| or tell it not to write any code, and just rubber-duck with it
| until I come up with an approach I like, then write the code
| myself. It's not as fast as writing the plan with Claude and
| asking it to write the code, but it offers me more learning.
|
| [1]: https://www.youtube.com/watch?v=WcpfyZ1yQRA
| foota wrote:
| I really want to spend some time at the Recurse Center, but the
| opportunity cost feels so high
| betterhealth12 wrote:
| Right now, the opportunity cost is probably as high as it's
| ever been (unrelated, but the same also applies to people
| considering business school, etc.). What got you looking into
| it?
| zoky wrote:
| The problem is that in order to spend time at the Recurse
| Center, you first have to spend time at the Recurse Center.
| maxverse wrote:
| What do you mean?
| lazyasciiart wrote:
| It's a joke about recursion.
| pyb wrote:
| In what sense?
| fragmede wrote:
| In the sense that you have to take a month off the rest of
| your life to go there. What about my job and my friends and
| my family and my house? Those don't stop existing and
| happening, and leaving them for a month is too difficult for
| people who just aren't committed enough. It's a decent
| filter/gatekeep for "who actually cares enough to do this?"
| PaulHoule wrote:
| Kinda funny, but my current feeling about it is different from
| a lot of people's.
|
| I did a lot of AI-assisted coding this week and felt that, if
| anything, it wasn't faster but it led to higher quality.
|
| I would go through discussions about how to do something, it
| would give me a code sample, I would change it a bit to "make it
| mine", ask if I got it right, get feedback, etc. Sometimes it
| would use features of the language or the libraries I didn't know
| about before so I learned a lot. With all the rubber ducking I
| thought through things in a lot of depth and asked a lot of
| specific questions and usually got good answers -- I checked a
| lot of things against the docs. It would help a lot if it could
| give me specific links to the docs and also specific links to
| code in my IDE.
|
| If there is some library that I'm not sure how to use I will load
| up the source code into a fresh copy of the IDE and start asking
| questions in _that_ IDE, not the one with my code. Given that it
| can take a lot of time to dig through code and understand it,
| having an unreliable oracle can really speed things up. So I
| don't see it as a way to get things done quickly, but as
| pairing with somebody who has very different strengths and
| weaknesses from me; and as with pair programming, you get
| better quality. This week I walked away with an implementation
| that I was really happy with, and I learned more than if I'd
| done all the work myself.
| dbtc wrote:
| > It would help a lot if it could give me specific links to the
| docs
|
| Just a super quick test: "what are 3 obscure but useful
| features in python functools. Link to doc for each."
|
| GPT 4o gave good links with each example.
|
| (its choices were functools.singledispatch,
| functools.total_ordering, functools.cached_property)
|
| Not sure about local code links.
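|
| For reference, here's a minimal runnable sketch of those three
| features (standard library only; the Version and Config classes
| are made up for illustration):
|
|   import functools
|
|   @functools.singledispatch
|   def describe(value):
|       # fallback for types with no registered handler
|       return f"object: {value!r}"
|
|   @describe.register
|   def _(value: int):
|       # dispatched by the type annotation on the first argument
|       return f"int: {value}"
|
|   @functools.total_ordering
|   class Version:
|       # define __eq__ and __lt__; total_ordering derives the rest
|       def __init__(self, n): self.n = n
|       def __eq__(self, other): return self.n == other.n
|       def __lt__(self, other): return self.n < other.n
|
|   class Config:
|       @functools.cached_property
|       def data(self):
|           # computed on first access, then cached on the instance
|           return {"debug": True}
|
|   assert describe(3) == "int: 3"
|   assert Version(1) <= Version(2)  # __le__ derived by total_ordering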
| steveklabnik wrote:
| I've had this return great results, and I've also had this
| return hallucinated ones.
|
| This is one area where MCPs might actually be useful,
| https://context7.com/ being one of them. I haven't given it
| enough of a shot yet, though.
| furyofantares wrote:
| This is great; it's so easy to get into "go fast" mode that
| this potential gets overlooked a lot.
| andy99 wrote:
| > I did a lot of AI assisted coding this week
|
| Are you new to it? There's a pretty standard arc that starts
| with how great it is and ends with all the "giving up on AI"
| blog posts you see.
|
| I went through it too. I still use a chatbot as a better Stack
| Overflow, but I've stopped actually having AI write any code I
| use. It's not just the quality; it's the impact on my thinking
| and understanding that ultimately doesn't improve outcomes over
| just doing it myself.
| PaulHoule wrote:
| I've been doing it for a while. I never really liked Stack
| Overflow, though; it always seemed like a waste of time versus
| learning how to look up the real answers in the documentation.
| I never really liked agents because they go off for 20 minutes
| and come back with complete crap. But if I can ask a question,
| get an answer in 20 seconds, and iterate again, I find that's
| pretty efficient.
|
| I've usually been skeptical about people who get unreasonably
| good results and not surprised when they wake up a few weeks
| later and are disappointed. One area where I am consistently
| disappointed is when there are significant changes across
| versions: I had one argue about whether I could write a switch
| in Java that catches null (you can in JDK 21), and I had lots
| of trouble with SQLAlchemy in Python, which changed a lot
| between versions. I shudder to think what would happen if you
| asked questions about react-router, but actually I shudder to
| think about react-router at all.
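|
| To make that version drift concrete, here's a minimal sketch
| (assuming SQLAlchemy 2.0 and a toy User model made up for
| illustration; models trained mostly on older code tend to reach
| for the legacy style):
|
|   from sqlalchemy import String, create_engine, select
|   from sqlalchemy.orm import (DeclarativeBase, Mapped, Session,
|                               mapped_column)
|
|   class Base(DeclarativeBase):
|       pass
|
|   class User(Base):
|       __tablename__ = "users"
|       id: Mapped[int] = mapped_column(primary_key=True)
|       name: Mapped[str] = mapped_column(String(50))
|
|   engine = create_engine("sqlite://")  # in-memory database
|   Base.metadata.create_all(engine)
|
|   with Session(engine) as session:
|       session.add(User(name="alice"))
|       session.commit()
|
|       # 1.x style: still works in 2.0, but considered legacy
|       legacy = session.query(User).filter(User.name == "alice").all()
|
|       # 2.0 style: the current idiom
|       current = session.execute(
|           select(User).where(User.name == "alice")
|       ).scalars().all()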
| resonious wrote:
| I've been back and forth, and currently heavily relying on
| AI-written code. It all depends on knowing what the AI can
| and can't do ahead of time. And what it _can_ do often
| overlaps with grunt work that I don't enjoy doing.
| whynotminot wrote:
| When's the last time you "went through the loop"? I feel like
| with this stuff I have to update my priors about every three or
| four months.
|
| I've been using AI regularly since GPT 4 first came out a
| couple years ago. Over that time, various models from Sonnet
| to Gemini to 4o have generally been good rubber ducks. Good
| to talk to and discuss approaches and tradeoffs, and better
| in general than google + stack overflow + poring over verbose
| documentation.
|
| But I couldn't really "hand the models the wheel." They
| weren't trustworthy enough, easily lost the plot, failed to
| leverage important context right in front of them in the
| codebase, etc. You could see that there was potential there,
| but it felt pretty far away.
|
| Something changed this spring. Gemini 2.5 Pro, Claude 4
| models, o3 and o4-mini -- I'm starting to give the models the
| wheel now. They're good. They understand context. They
| understand the style of the codebase. And they of course
| bring the immense knowledge they've always had.
|
| It's eerie to see, and to think about what comes with the
| next wave of models coming very soon. And if the last time
| you really gave model-driven programming a go was 6 months or
| more ago, you probably have no idea what's about to happen.
| andy99 wrote:
| Interesting point, I agree that things change so fast that
| experience from a few months ago is out of date. I'm
| sceptical there has been a real step change (especially
| based on the snippets I see claude 4 writing in answer to
| questions) but it never hurts to try again.
|
| My most recent stab at this was Claude code with 3.7, circa
| March this year.
|
| To be fair though, a big part of the issue for me is that
| having not done the work or properly thought through how a
| project is structured and how the code works, it comes back
| to bite later. A better model doesn't change this.
| whynotminot wrote:
| If you give it another try, my go-to right now is Sonnet 4
| Thinking. There's a pretty massive difference in intelligence
| by switching from plain 4 to 4 Thinking. It's still pretty
| fast, and I think it hits the right balance between speed and
| useful intelligence.
|
| However, at least in my experience, nothing beats o3 for
| raw intelligence. It's a little too slow to use as the
| daily driver though.
|
| It's kind of fun seeing the models all have their various
| pros and cons.
|
| > To be fair though, a big part of the issue for me is
| that having not done the work or properly thought through
| how a project is structured and how the code works, it
| comes back to bite later. A better model doesn't change
| this.
|
| Yes, even as I start to leverage the tools more, I try to
| double down on my own understanding of the problem being
| solved, at least at a high level. Need to make sure you
| don't lose the plot yourself.
| theptip wrote:
| There has been a big change with Claude 4.0 in my
| opinion. Probably depends on your environment, but it's
| the first time I've been able to get hundreds of lines of
| Python that just works when vibe coding a new project.
|
| It's still slower going as the codebase increases in size, and
| this is my hypothesis for the huge variance: I was getting
| giddy at how fast I blew through what would have been the first
| 5 hours of a small project (perhaps 30 mins with Claude), but
| quickly lost velocity once I started implementing tests and
| editing existing code.
| ryandrake wrote:
| Just one person's opinion: I can't get into the mode of
| programming where you "chat" with something and have it
| build the code. By the time I have visualized in my head
| and articulated into English what I want to build and the
| data structures and algorithms I need, I might as well just
| type the code in myself. That's the only value I've found
| from AI: It's a great autocomplete as you're typing.
|
| To me, programming is a solo activity. "Chatting" with
| someone or something as I do it is just a distraction.
| whynotminot wrote:
| A good part of my career has been spent pair programming
| in XP-style systems, so chatting away with someone about
| constraints, what we're trying to do, what we need to
| implement, etc, might come a bit more naturally to me. I
| understand your perspective though.
| skydhash wrote:
| That may be one of the reasons for the conflict of opinions. I
| usually build the thing mentally first, then code it, and then
| verify it. With tools like linters and tests, the shorter
| feedback loop makes the process faster, and editor fluency is a
| good boost.
|
| By the time I'm about to prompt, I usually have enough
| information to just code it away. Coding is like riding a
| bicycle downhill: you just pay enough attention to ride it.
| It's like how you don't think about the characters and the
| words when you're typing. You're mostly thinking about what you
| want to say.
|
| When there's an issue, I switch from coding to reading and
| thinking. And while the latter is mentally taxing, it is fast,
| as I don't have to spell it out. A good helper there is a
| repository of information: bookmarks to docs, a documentation
| browser, code samples... By the time the LLM replies with a
| good enough paragraph, I'm already at the Array page on MDN.
| lazyasciiart wrote:
| Having it write unit tests has been one place it is reliably
| useful for me. It's easily verifiable that it covers everything
| I'd thought of, but there's enough typing involved that it's
| faster than doing it myself - and sometimes it includes a case
| I hadn't thought of.
| skydhash wrote:
| I've read somewhere, IIRC, that you mostly need to test three
| things: correct input, incorrect input, and input that is on
| the fence between the two. By intersecting that with the set of
| parameters and the behavior that is library-dependent, you
| mostly have only a few things left to test. And the actual
| process of deciding which cases to test is important, as that
| is how you highlight edge cases and incorrect assumptions.
|
| Also, writing test cases is how you experience the pain of
| having things coupled together that should not be. So you can
| go refactor stuff instead of having to initialize the majority
| of your software.
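|
| A minimal sketch of that three-bucket idea in pytest, using a
| hypothetical parse_age helper (the function, its bounds, and
| the cases are all made up for illustration):
|
|   import pytest
|
|   def parse_age(s: str) -> int:
|       # parse an age in [0, 130]; raise ValueError otherwise
|       age = int(s)  # raises ValueError on non-numeric input
|       if not 0 <= age <= 130:
|           raise ValueError(f"age out of range: {age}")
|       return age
|
|   # correct input, including the valid boundary values
|   @pytest.mark.parametrize("raw,expected",
|                            [("0", 0), ("42", 42), ("130", 130)])
|   def test_valid(raw, expected):
|       assert parse_age(raw) == expected
|
|   # incorrect input, including the just-outside boundary values
|   @pytest.mark.parametrize("raw", ["-1", "131", "abc", ""])
|   def test_invalid(raw):
|       with pytest.raises(ValueError):
|           parse_age(raw)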
| lazyasciiart wrote:
| If you need to hand write tests to think through all that
| then sure, don't let an AI do it for you.
| tempodox wrote:
| That's my basic doubt, too. When developing software, I'm
| translating the details, nuances and complexities of the
| requirements into executable code. Adding another stage
| to this process is just one more opportunity for things
| to get lost in translation. Also, getting an LLM to generate
| the right code would require something other than the
| programming languages we know.
| nicwolff wrote:
| I'm not chatting with the LLM - I'm giving one LLM in
| "orchestrator mode" a detailed description of my required
| change, plus a ton of "memory bank" context about the
| architecture of the app, the APIs it calls, our coding
| standards, &c. Then it uses other LLMs in "architect
| mode" or "ask mode" to break out the task into subtasks
| and assigns them to still other LLMs in "code mode" and
| "debug mode".
|
| When they're all done I review the output and either
| clean it up a little and open a PR, or throw it away and
| tune my initial prompt and the memory bank and start
| over. They're just code-generating machines, not real
| programmers that it's worth iterating with - for one
| thing, they won't learn anything that way.
| otabdeveloper4 wrote:
| > just try a newer model bro
|
| Fundamentally none of the issues inherent in LLMs will be
| fixed by increasing parameter count or better weights.
|
| Shilling for the newest model is a dead end, better to
| figure out how to best put LLMs to use despite their
| limitations.
| whynotminot wrote:
| > Fundamentally none of the issues inherent in LLMs will
| be fixed by increasing parameter count or better weights.
|
| Um, ok just disregard my entire post where I talk about
| how my issues with using LLMs for programming were
| literally solved by better models.
| positron26 wrote:
| > There's a pretty standard arc that starts with how great it
| is and ends with all the "giving up on AI" blog posts you
| see.
|
| Wouldn't be shocked if this is related to getting ramped up.
|
| Switch to a language you don't know. How will that affect
| your AI usage? Will it go up or down over time?
|
| Had similar experiences during the SO early days or when
| diving into open source projects. Suddenly you go from being
| stuck in your bubble to learning all kinds of things. Your
| code gets good. The returns diminish and you no longer
| curiously read unless the library is doing something you
| can't imagine.
| benreesman wrote:
| I'm hanging in there with it. The article was remarkably
| dispassionate about something that gets everyone all hot
| around here: easy stuff benefits from it a lot more than hard
| stuff. If your whole world is Node.js? Yeah, not a lot of
| deep water energy discovery geology data mining at exabyte
| scale or whatever. It pushed the barrier to entry on making a
| website with a Supabase backend from whatever it was to
| nearly zero. And I want to be really clear that frontend work
| is some of my favorite work: I love doing the web interface,
| it's visual and interactive and there's an immediacy that's
| addictive, the compiler toolchains are super cool, it's great
| stuff. I worked on a web browser, got some patents for it. I
| like this stuff.
|
| But getting extreme concurrency outcomes or exotic hardware
| outcomes or low latency stuff or heavy numerics or a million
| other things? It's harder, the bots aren't as good. This is
| the divide: it can one shot a web page, even the older ones
| can just smoke a tailwind theme, headshot. A kernel patch to
| speed up a path on my bare metal box in a latency contest?
| Nah, not really.
|
| But I see a lot of promise in the technology even if the sort
| of hype narrative around it seems pretty much false at this
| point: I still use the LLMs a lot. Part of that is that
| search just doesn't work anymore (although... Yandex got a
| lot better recently out of the blue of all things), and part
| of it is that I see enough exciting glimpses of like, if I
| got it hooked up to the right programming language and loaded
| the context right, wow, once in a while it just slams and
| it's kinda rare but frequent enough that I'm really
| interested in figuring out how to reproduce it reliably. And
| I think I'm getting better outcomes a little bit at a time,
| getting it dialed in. Two or three months ago an evening with
| Claude Code would have me yelling at the monitor; now it's
| like, haha, I see you.
| Shorel wrote:
| Some people copy and paste snippets of code without knowing
| what they do, and in a sense, they spread technical debt
| around.
|
| LLMs reduce the technical debt spread by the clueless to a
| lower baseline.
|
| You were part of the clueless, so an LLM improved your code and
| lowered the technical debt you would have spread.
| gerdesj wrote:
| "I would go through discussions about how to do something"
|
| Have you compared that to your normal debugging thought
| processes? I get that you might be given another way to think
| about the problem, but another human might be best for that,
| rather than a next-token guesser.
|
| I have a devil of a time with my team and beyond (the younger
| ones mainly) getting them to pick up a phone instead of sending
| emails or chats or whatever. A voice chat can solve a problem
| within minutes or even seconds, instead of the rather childish
| game of email ping-pong.
| even encourage it, despite what I said earlier - effective use
| of comms is a skill but you do need to understand when to use
| each variety.
| x86x87 wrote:
| Viewing it as an assistant is the way to go. It's there to
| help you, like an overpowered autocomplete, but not there to
| think for you.
| npinsker wrote:
| Such a thoughtful and well-written article. One of my biggest
| worries about AI is its impact on the learning process of future
| professionals, and this feels like a window into the future,
| hinting at the effect on unusually motivated learners (a tiny
| subset of people overall, of course). I appreciated the even-
| handed, inquisitive tone.
| tqi wrote:
| > Thoughtful, extremely capable programmers disagree on what
| models can do today, and whether or not they're currently useful.
|
| Is anyone John Henry-ing this question and having parallel teams
| build the same product at the same time?
| brunooliv wrote:
| It's a thin line to walk for me, but I feel that the whole
| "skill atrophy" trap is the hardest one not to slip into. What
| I've personally liked about these tools is that they give me
| ample room to explore and experiment with different approaches
| to a particular problem, because translating a valid one into
| "the official implementation" is then very easy.
|
| I'm a guy who likes to DO to validate assumptions: if there's
| some task about how something should be written concurrently to
| be efficient and then we need some post processing to combine the
| results, etc, etc, well, before Claude Code, I'd write a scrappy
| prototype (think like a single MVC "slice" of all the distinct
| layers but all in a single Java file) to experiment, validate
| assumptions and uncover the unknown unknowns.
|
| It's how I approach programming and always will. I think writing
| a spec as an issue or ticket about something without getting your
| hands dirty will always be incomplete and at odds with reality.
| So I write, prototype and build.
|
| With a "validated experiment" I'd still need a lot of cleaning up
| and post processing in a way to make it production ready. Now
| it's a prompt! The learning is still the process of figuring
| things out and validating assumptions. But the "translation to
| formal code" part is basically solved.
|
| Obviously, it's also a great unblocking mechanism when I'm
| stuck on something, be it a complex query or me FEELING that an
| abstraction is wrong but not seeing a good one, etc.
| seabass wrote:
| > One particularly enthusiastic user of LLMs described having two
| modes: "shipping mode" and "learning mode," with the former
| relying heavily on models and the latter involving no LLMs, at
| least for code generation.
|
| Crazy that I agreed with the first half of the sentence and was
| totally thrown off by the end. To me, "learning mode" is when I
| want the LLM. I'm in a new domain and I might not even know what
| to google yet, what libraries exist, what key words or concepts
| are relevant. That's where an LLM shines. I can see basic generic
| code that's well explained and quickly get the gist of something
| new. Then there's "shipping mode," where quality is my
| priority and subtle, sneaky bugs really ought to be avoided --
| the kind I encounter so often with AI-written code.
___________________________________________________________________
(page generated 2025-07-26 23:01 UTC)