[HN Gopher] Developing our position on AI
___________________________________________________________________
Developing our position on AI
Author : jakelazaroff
Score : 128 points
Date : 2025-07-23 19:34 UTC (2 days ago)
(HTM) web link (www.recurse.com)
(TXT) w3m dump (www.recurse.com)
| nicholasjbs wrote:
| (Author here.)
|
| This was a really fascinating project to work on because of the
| breadth of experiences and perspectives people have on LLMs, even
| when those people all otherwise have a lot in common (in this
| case, experienced programmers, all Recurse Center alums, all
| professional programmers in some capacity, almost all in the US,
| etc). I can't think of another area in programming where opinions
| differ this much.
| itwasntandy wrote:
| Thank you Nick.
|
| As a Recurse alum (s14 batch 2) I loved reading this. I loved
| my time at Recurse and learned lots. This highlight from the
| post really resonates:
|
| " Real growth happens at the boundary of what you can do and
| what you can almost do. Used well, LLMs can help you more
| quickly find or even expand your edge, but they risk creating a
| gap between the edge of what you can produce and what you can
| understand.
|
| RC is a place for rigor. You should strive to be more rigorous,
| not less, when using AI-powered tools to learn, though exactly
| what you need to be rigorous about is likely different when
| using them."
| vouaobrasil wrote:
| > RC is a place for rigor. You should strive to be more rigorous,
| not less, when using AI-powered tools to learn, though exactly
| what you need to be rigorous about is likely different when using
| them.
|
| This brings up an important point about a LOT of tools, one
| that many people don't talk about: with a tool as powerful as
| AI, there will always be a minority of people with a healthy
| and thoughtful attitude towards its use, but a majority who use
| it improperly, because its power is too seductive and human
| beings on average are lazy.
|
| Therefore, even if you "strive to be more rigorous", you WILL
| be in a minority helping to drive a technology that is just too
| powerful to make any positive impact on the majority. The
| majority will suffer, because they need an environment where
| they are forced not to cheat in order to learn and gain basic
| competence, which I'd argue is far more crucial to a society
| than the top few having a lot of competence.
|
| The individualistic will say that this is an inevitable price for
| freedom, but in practice, I think it's misguided. Universities,
| for example, NEED to monitor the exam room, because otherwise
| cheating would be rampant, even if there is a decent minority of
| students who would NOT cheat, simply because they want to
| maximize their learning.
|
| With tools as powerful as AI, we need to think beyond our
| individualistic tendencies. The disciplined will often tout
| their balanced philosophy as justification for using such
| tools, as this Recurse post does here, but what they forget is
| that promoting such a philosophy lends more legitimacy to the
| use of AI, which the world at large is not capable of handling.
|
| In a fragile world, we must take responsibility beyond ourselves,
| and not promote dangerous tools even if a minority can use them
| properly.
|
| This is why I am 100% against AI - no compromise.
| ctoth wrote:
| Wait, you're literally advocating for handicapping everyone
| because some people can't handle the tools as well as others.
|
| "The disciplined minority can use AI well, but the lazy
| majority can't, so nobody gets to use it" I feel like I read
| this somewhere. Maybe a short story?
|
| Should we ban calculators because some students become
| dependent on them? Ban the internet because people use it to
| watch cat videos instead of learning?
|
| You've dressed up "hold everyone back to protect the
| incompetent" as social responsibility.
|
| I never actually thought I would find someone who read Harrison
| Bergeron and said "you know what? let's do that!" But the
| Internet truly is a vast and terrifying place.
| vouaobrasil wrote:
| A rather shallow reply, because I never implied that there
| should be enforced equality. For some reason, I constantly get
| these sorts of "false dichotomy" replies here, where the
| dichotomy is strongly exaggerated. Maybe it's due to the
| computer scientist's constant use of binary, who knows.
|
| Regardless, I only advocate for restricting technologies that
| are too dangerous, much in the same way that atomic weapons are
| highly restricted but people can still own knives and even use
| guns in some circumstances.
|
| I have nothing against the most intelligent using their
| intelligence wisely and doing more than the less intelligent,
| if wise use is even possible. In the case of AI, I submit
| that it is not.
| ctoth wrote:
| Who decides what technologies are too dangerous? You,
| apparently.
|
| AI isn't nukes - anyone can train a model at home. There's
| no centralized thing to restrict. So what's your actual
| ask? That nobody ever trains a model? That we collectively
| pretend transformers don't exist?
|
| You're dressing up bog-standard tech panic as social
| responsibility. Same reaction to every new technology:
| "This tool might be misused so nobody should have it."
|
| If you can't see the connection between that and Harrison
| Bergeron's "some people excel so we must handicap
| everyone," then you've missed Vonnegut's entire point.
| You're not protecting the weak - you're enforcing
| mediocrity and calling it virtue.
| vouaobrasil wrote:
| > Who decides what technologies are too dangerous? You,
| apparently.
|
| Again, a rather knee-jerk reply. I am opening up the
| discussion, and putting out my opinion. I never said I
| should be God and arbiter, but I do think people in
| general should have a discussion about it, and general
| discussion starts with opinion.
|
| > AI isn't nukes - anyone can train a model at home.
| There's no centralized thing to restrict. So what's your
| actual ask? That nobody ever trains a model? That we
| collectively pretend transformers don't exist?
|
| It should be something to consider. We could stop it by
| spreading a social taboo around it, denigrating its use, and
| so on. It's possible. Many non-techies already hate AI, and
| mob force is not out of the question.
|
| > You're dressing up bog-standard tech panic as social
| responsibility. Same reaction to every new technology:
| "This tool might be misused so nobody should have it."
|
| I don't have that reaction to every new technology
| personally. But I think we should ask the question of
| every new technology, and especially ones that are
| already disrupting the labor market.
|
| > If you can't see the connection between that and
| Harrison Bergeron's "some people excel so we must
| handicap everyone," then you've missed Vonnegut's entire
| point. You're not protecting the weak - you're enforcing
| mediocrity and calling it virtue.
|
| What people call excellent and mediocre these days is
| often just the capacity to be economically over-ruthless,
| rather than to contribute any good to society. We would
| still have a wealth of ways for people to excel even if
| we eradicated AI; destroying it would place no limit on
| intelligent individuals being excellent. So your argument
| really doesn't hold.
|
| Edit: my goal isn't to protect the weak. I'd rather have
| everyone protected, including the very intelligent who
| still want to have a place to use their intelligence on
| their own and not be forced to use AI to keep up.
| ben_w wrote:
| > Who decides what technologies are too dangerous? You,
| apparently.
|
| I see takes like this from time to time about everything.
|
| They didn't say that.
|
| As with all similar cases, they're allowed to advocate
| for something being dangerous, and you're allowed to say
| it isn't. The people who decide are _all of us
| collectively_, and when we're at our best, we do so on
| the basis of the actual arguments.
|
| > AI isn't nukes - anyone can train a model at home.
|
| (1) They were using an extreme to illustrate the point.
|
| (2) Anyone can make a lot of things at home. I know two
| distinct ways to make a chemical weapon using only things
| I can find in a normal kitchen. That people can do a
| thing at home doesn't mean the thing can't be prohibited.
| binary132 wrote:
| Hyphenatic phrasing detected. Deploying LLM snoopers.
| usernamed7 wrote:
| Why are you putting down a well-reasoned reply as being
| shallow? Isn't that... shallow? Is it because you don't
| want people to disagree with you or point out flaws in your
| arguments? Because you seem to take an absolutist
| black/white approach and disregard any sense of nuance.
| vouaobrasil wrote:
| I do want people to argue or point out flaws. But
| presenting a false dichotomy is not a well-reasoned
| reply.
| pyman wrote:
| > even if a minority can use them properly.
|
| Most students today are AI fluent. Most teachers aren't.
| Students treat AI like Google Search, StackOverflow,
| GitHub, and every other dev tool.
| mmcclure wrote:
| _Some_ students treat AI like those things. Others are
| effectively a meat proxy for AI. Both ends of the
| spectrum would call themselves "AI fluent."
|
| I don't think the existence of the latter should mean we
| restrict access to AI for everyone, but I also don't
| think it's helpful to pretend AI is just this
| generation's TI-83.
| Karrot_Kream wrote:
| The rebuttal is very simple. I'll try to make it a bit
| less emotionally charged and more clear, even if your
| original opinion did not appear to me to go through the
| same process:
|
| "While some may use the tool irresponsibly, others will
| not, and therefore there's no need to restrict the tool.
| Society shouldn't handicap the majority to accommodate
| the minority."
|
| You can choose to not engage with this critique but
| calling it a "false dichotomy" is in poor form. If
| anything, it makes me feel like you're not willing to
| entertain disagreement. You state that you want to start
| a discussion by expressing your opinion but I don't see a
| discussion here. I observe you expressing your opinion
| and dismissing criticism of that opinion as false.
| collingreen wrote:
| I don't have a dog in this fight, but I think the
| counterargument was a terrible straw man. OP said it's too
| dangerous to put in general hands. Treating that like
| "protect the incompetent from themselves and punish
| everyone in the process" is badly twisting the point. A
| closer oversimplification is "protect the public from the
| incompetents".
|
| In my mind a direct, good-faith rebuttal would address
| the actual points - either disagree that the worst usage
| would lead to harm of the public, or make the point (like
| the OP tees up) that risking the public is one of the
| worthy tradeoffs of freedom.
| tptacek wrote:
| The original post concluded with the sentence "This is
| why I am 100% against AI - no compromise." Not "AI is too
| dangerous for general hands".
| vouaobrasil wrote:
| Second reply to your expanded comment: I think some
| technologies are just versions of the prisoner's dilemma,
| where no one is really better off with the technology. One
| must decide on a case-by-case basis, similar to how the
| Amish decide what is best for their society.
|
| Again, even your expanded reply shrieks with false dichotomy.
| I never said ban every possible technology, only ones that
| are sufficiently dangerous.
| entaloneralie wrote:
| I feel like John Holt, author of Unschooling, who is quoted
| numerous times in the article, would not be too keen on seeing
| his name in a post that legitimizes a technology that uses
| inevitabilism to insert itself into all domains of life.
|
| --
|
| "Technology Review," the magazine of MIT, ran a short article in
| January called "Housebreaking the Software" by Robert Cowen,
| science editor of the "Christian Science Monitor," in which he
| very sensibly said: "The general-purpose home computer for the
| average user has not yet arrived.
|
| Neither the software nor the information services accessible via
| telephone are yet good enough to justify such a purchase unless
| there is a specialized need. Thus, if you have the cash for a
| home computer but no clear need for one yet, you would be better
| advised to put it in liquid investment for two or three more
| years." But in the next paragraph he says "Those who would stand
| aside from this revolution will, by this decade's end, find
| themselves as much of an anachronism as those who yearn for the
| good old one-horse shay." This is mostly just hot air.
|
| What does it mean to be an anachronism? Am I one because I don't
| own a car or a TV? Is something bad supposed to happen to me
| because of that? What about the horse and buggy Amish? They are,
| as a group, the most successful farmers in the country,
| everywhere buying up farms that up-to-date high-tech farmers have
| had to sell because they couldn't pay the interest on the money
| they had to borrow to buy the fancy equipment.
|
| Perhaps what Mr. Cowen is trying to say is that if I don't learn
| how to run the computers of 1982, I won't be able later, even if
| I want to, to learn to run the computers of 1990. Nonsense!
| Knowing how to run a 1982 computer will have little or nothing to
| do with knowing how to run a 1990 computer. And what about the
| children now being born and yet to be born? When they get old
| enough, they will, if they feel like it, learn to run the
| computers of the 1990s.
|
| Well, if they can, then if I want to, I can. From being mostly
| meaningless, or, where meaningful, mostly wrong, these very
| typical words by Mr. Cowen are in method and intent exactly like
| all those ads that tell us that if we don't buy this deodorant or
| detergent or gadget or whatever, everyone else, even our friends,
| will despise, mock, and shun us -- the advertising industry's
| attack on the fragile self-esteem of millions of people. This
| using of
| people's fear to sell them things is destructive and morally
| disgusting.
|
| The fact that the computer industry and its salesmen and prophets
| have taken this approach is the best reason in the world for
| being very skeptical of anything they say. Clever they may be,
| but they are mostly not to be trusted. What they want above all
| is not to make a better world, but to join the big list of
| computer millionaires.
|
| A computer is, after all, not a revolution or a way of life but a
| tool, like a pen or wrench or typewriter or car. A good reason
| for buying and using a tool is that with it we can do something
| that we want or need to do better than we used to do it. A bad
| reason for buying a tool is just to have it, in which case it
| becomes, not a tool, but a toy.
|
| -- John Holt, "On Computers", Growing Without Schooling #29,
| September 1982
| nicholasjbs wrote:
| I don't agree with your characterization of my post, but I do
| appreciate this piece (and the fun flashback to old, oversized
| issues of GWS). Thanks for sharing it! Such a tragedy that
| Holt died shortly after he wrote that; I would have loved to
| hear what he thought of the last few decades of computing.
| viccis wrote:
| >author of Unschooling
|
| You say this like it should give him more credibility. He
| created a homeschooling methodology that scores well below
| structured homeschooling in academic evaluations. And that's
| generously assuming it's being practiced in earnest, rather
| than the way I've seen people do it (effectively just child
| neglect with a high-minded justification).
|
| I have absolutely no doubt that a quack like John Holt would
| love AI as a virtual babysitter for children.
| JSR_FDED wrote:
| The e-bike analogy in the article is a good one. Paraphrasing:
| Use it if you want to cover distance with low effort. But if your
| goal is fitness then the e-bike is not the way to go.
| viccis wrote:
| It is a good one. I'm going to keep it in my pocket for future
| discussions about AI in education, as I might have some say in
| how a local college builds policy around AI use. My attitude
| has always been that it should be proscribed in any situation
| in which the course is teaching what the AI is doing (Freshman
| writing courses, intro to programming courses, etc.) and that
| it should be used as little as possible for later courses in
| which it isn't as clearly "cheating". My rationale is that, for
| both examples of writing and coding, one of the most useful
| aspects of a four-year degree is that you gain a lot from
| constantly exercising these rudimentary skills.
| layer8 wrote:
| The analogy doesn't work too well, in my opinion. An e-bike
| can, with low effort, get you basically anywhere a regular
| bike can. The same is not true for AI vs. non-AI, in its
| current state. AI limits which goals you can reach with low
| effort, and it will steer you towards those goals if you don't
| want to expend much effort. With AI there's a quality
| gradient, dependent on how much extra effort you want to
| spend, that isn't there in the e-bike's trip from A to B.
| Karrot_Kream wrote:
| (Full disclosure: I have a lot of respect for RC and have thought
| about applying to attend myself. This will color my opinion.)
|
| I really enjoyed this article. The numerous anecdotes from
| folks at RC were great. In particular, thanks for sharing this
| video of voice coding [1].
|
| This line stood out to me, and it matches how I think about
| LLMs myself:
|
| "One particularly enthusiastic user of LLMs described having two
| modes: "shipping mode" and "learning mode," with the former
| relying heavily on models and the latter involving no LLMs, at
| least for code generation."
|
| Sometimes when I use Claude Code I either put it in Plan Mode
| or tell it not to write any code, and just rubber-duck with it
| until I come up with an approach I like; then I write the code
| myself. It's not as fast as writing the plan with Claude and
| asking it to write the code, but it offers me more learning.
|
| [1]: https://www.youtube.com/watch?v=WcpfyZ1yQRA
| foota wrote:
| I really want to spend some time at the Recurse Center, but the
| opportunity cost feels so high
| betterhealth12 wrote:
| Right now, the opportunity cost is probably as high as it's
| ever been (unrelated, but the same also applies to people
| considering business school, etc.). What got you looking into
| it?
| zoky wrote:
| The problem is that in order to spend time at the Recurse
| Center, you first have to spend time at the Recurse Center.
| pyb wrote:
| In what sense?
| PaulHoule wrote:
| Kinda funny, but my current feeling about it is different from
| a lot of people's.
|
| I did a lot of AI-assisted coding this week and I felt that,
| if anything, it wasn't faster but it led to higher quality.
|
| I would go through discussions about how to do something, it
| would give me a code sample, I would change it a bit to "make it
| mine", ask if I got it right, get feedback, etc. Sometimes it
| would use features of the language or the libraries I didn't know
| about before so I learned a lot. With all the rubber ducking I
| thought through things in a lot of depth and asked a lot of
| specific questions and usually got good answers -- I checked a
| lot of things against the docs. It would help a lot if it could
| give me specific links to the docs and also specific links to
| code in my IDE.
|
| If there is some library that I'm not sure how to use I will load
| up the source code into a fresh copy of the IDE and start asking
| questions in _that_ IDE, not the one with my code. Given that it
| can take a lot of time to dig through code and understand it,
| having an unreliable oracle can really speed things up. So I
| don't see it as a way to get things done quickly, but like
| pairing
| with somebody who has very different strengths and weaknesses
| from me, and like pair programming, you get better quality. This
| week I walked away with an implementation that I was really happy
| with and I learned more than if I'd done all the work myself.
| dbtc wrote:
| > It would help a lot if it could give me specific links to the
| docs
|
| Just a super quick test: "what are 3 obscure but useful
| features in python functools. Link to doc for each."
|
| GPT-4o gave good links with each example.
|
| (its choices were functools.singledispatch,
| functools.total_ordering, functools.cached_property)
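|
| For anyone curious, a minimal sketch of those three (they're
| real stdlib APIs; the classes below are invented just for
| illustration):
|
|     import functools
|
|     @functools.singledispatch
|     def describe(value):
|         # Generic fallback implementation.
|         return f"object: {value!r}"
|
|     @describe.register
|     def _(value: int):
|         # Picked automatically when called with an int.
|         return f"int: {value}"
|
|     @functools.total_ordering
|     class Version:
|         # Define __eq__ and __lt__; total_ordering fills in
|         # __le__, __gt__, and __ge__.
|         def __init__(self, major, minor):
|             self.major, self.minor = major, minor
|         def __eq__(self, other):
|             return (self.major, self.minor) == \
|                    (other.major, other.minor)
|         def __lt__(self, other):
|             return (self.major, self.minor) < \
|                    (other.major, other.minor)
|
|     class Config:
|         @functools.cached_property
|         def data(self):
|             # Computed on first access, then cached on the
|             # instance.
|             print("loading...")
|             return {"debug": True}
|
|     assert describe(3) == "int: 3"
|     assert Version(1, 2) <= Version(1, 3)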
|
| Not sure about local code links.
| steveklabnik wrote:
| I've had this return great results, and I've also had this
| return hallucinated ones.
|
| This is one area where MCPs might actually be useful,
| https://context7.com/ being one of them. I haven't given it
| enough of a shot yet, though.
| furyofantares wrote:
| This is great; it's so easy to get into "go fast" mode that
| this potential gets overlooked a lot.
| andy99 wrote:
| > I did a lot of AI assisted coding this week
|
| Are you new to it? There's a pretty standard arc that starts
| with how great it is and ends with all the "giving up on AI"
| blog posts you see.
|
| I went through it too. I still use a chatbot as a better Stack
| Overflow, but I've stopped actually having AI write any code I
| use - it's not just the quality, it's the impact on my
| thinking and understanding, which ultimately doesn't improve
| outcomes over just doing it myself.
___________________________________________________________________
(page generated 2025-07-25 23:00 UTC)