[HN Gopher] AI Angst
       ___________________________________________________________________
        
       AI Angst
        
       Author : AndrewDucker
       Score  : 159 points
       Date   : 2025-06-09 10:10 UTC (12 hours ago)
        
 (HTM) web link (www.tbray.org)
 (TXT) w3m dump (www.tbray.org)
        
       | strict9 wrote:
       | Angst is the best way to put it.
       | 
        | I use AI every day, I feel like it makes me more productive,
        | and I'm generally supportive of it.
       | 
        | But the angst is something else. When nearly every
        | tech-related startup seems to be about making FTEs redundant
        | via AI, it leaves me with a bad feeling about the future. Same
        | with the impact on students and learning.
       | 
       | Not sure where we go from here. But this feels spot on:
       | 
       |  _> I think that the best we can hope for is the eventual
       | financial meltdown leaving a few useful islands of things that
       | are actually useful at prices that make sense._
        
         | bob1029 wrote:
         | I agree that some kind of meltdown/crash would be the best
         | possible thing to happen. There are too many players not adding
          | any value to the ecosystem at this point. MCP is a great
          | example of this: complexity merchants inventing new markets
          | to operate in. We need something severe to scare off the
          | bullshit artists for a while.
         | 
         | How many civil engineering projects could we have completed
         | ahead of schedule and under budget if we applied the same
          | amount of wild-eyed VC and genius-tier attention to the
          | problems at hand?
        
           | pzo wrote:
            | MCP is currently only used by real power users, mostly in
            | software dev settings, but I expect ordinary users to
            | adopt it in the future. There is no decent MCP client for
            | non-tech-savvy users yet, but if browsers build in better
            | implementations, they will get used. Think of what
            | Perplexity Comet or The Browser Company's Dia are trying
            | to do. It's still very early for MCP.
        
         | fellowniusmonk wrote:
          | All the angst is 100% manufactured by policy. LLMs wouldn't
          | be hated if they hadn't dovetailed with the end of ZIRP,
          | with Section 174 specifically targeting engineering roles to
          | be tax losers so that others could be tax winners, and with
          | macroeconomic uncertainty (which compounds the problems of
          | 174).
         | 
          | Our roles were specifically targeted by government policy
          | for reduction as a way to buoy government revenues and prop
          | up the budgetary bottom line, in the face of decreasing
          | taxes for favored parties.
         | 
          | This is simply policy-induced multifactorial collapse.
         | 
          | And LLMs get to take the blame from engineers because that
          | is the excuse being used. Pretty much every old-school
          | hacker who has played around with them recognizes that LLMs
          | are impressive and sci-fi; they're like my childhood dream
          | come true for interface design.
         | 
          | I cannot begin to say how fucking stupid the people in
          | charge of these policies are. I'm an old head; I know
          | exactly the type of 80s executive that actively likes to see
          | the nerds suffer, because we're all irritating poindexters
          | to them.
         | 
          | The pattern of actively attacking the freedoms and
          | sabotaging the incomes of knowledge workers is not remotely
          | rare, and it's often done this stupidly, at the expense of a
          | country's economic footing and ability to innovate.
        
       | perplex wrote:
       | > I really don't think there's a coherent pro-genAI case to be
       | made in the education context
       | 
       | My own personal experience is that Gen AI is an amazing tool to
       | support learning, when used properly.
       | 
        | It seems likely there will be changes in higher education to
        | work with genAI instead of against it, and that could be a
        | positive change for both teachers and students.
        
         | Aperocky wrote:
          | Computers and the internet have been around for 20 years,
          | and yet the evaluation systems of our education have largely
          | remained the same.
         | 
         | I don't hold my breath on this.
        
           | icedchai wrote:
           | Where are you located? The Internet boom in the US happened
           | in the mid-90's. My first part-time ISP job was in 1994.
        
             | dowager_dan99 wrote:
              | Dial-up penetration in the mid-90s was still very thin,
              | and high-speed access was limited to universities and
              | the biggest companies. Here are the numbers ChatGPT
              | found for me:
             | 
             | * 1990s: Internet access was rare. By 1995, only 14% of
             | Americans were online.
             | 
              | * 2000: Approximately 43% of U.S. households had
              | internet access.
              | 
              | * 2005: The number increased to 68%.
              | 
              | * 2010: Around 72% of households were connected.
              | 
              | * 2015: The figure rose to 75%.
              | 
              | * 2020: Approximately 93% of U.S. adults used the
              | internet, indicating widespread household access.
        
               | icedchai wrote:
                | Yes, it was thin, but 1995-96 was when the "Internet"
                | went mainstream. Depending on your area, you could
                | have several dial-up ISP options. Major metros like
                | Boston had dozens. I remember hearing ISP ads on the
                | radio!
                | 
                | 1995 was when Windows 95 launched; with its built-in
                | dial-up networking support, it allowed a "normal"
                | person to easily get online. 1995 was the Netscape
                | IPO, which kicked off the dot-com bubble. 1995 was
                | when Amazon first launched its site.
        
         | murrayb wrote:
          | I think he is talking about education as in
          | school/college/university rather than learning?
          | 
          | I too am finding AI incredibly useful for learning. I use it
          | for high-level overviews and to help guide me to resources
          | (online formats and books) for deeper dives. Claude has so
          | far proven to be an excellent learning partner; no doubt
          | other models are similarly good.
        
           | strict9 wrote:
            | That is my take. Continuing education via prompt is
            | great; I try to do it every day. Despite years of use I
            | still get that magic feeling when asking about some
            | obscure topic I want to know more about.
           | 
           | But that doesn't mean I think my kids should primarily get
           | K-12 and college education this way.
        
         | SkyBelow wrote:
          | The issue with education in particular is a much deeper one.
          | GenAI has ripped the bandages off and exposed the wound to
          | the world, while also greatly accelerating its decay, but it
          | was not responsible for creating it.
         | 
          | What is the purpose of education? Is it to learn, or to gain
          | credentials showing that you have learned? Too much of
          | education has become the latter, to the point that we have
          | sacrificed the former. Eventually this brings down both, as
          | a degree gains a reputation of no longer signifying that the
          | former ever happened.
         | 
          | Our existing systems that check for learning before granting
          | the degree that shows an individual learned were largely not
          | ready for the impact of genAI, and teachers and professors
          | have adapted poorly. Sometimes due to a lack of
          | understanding of the technology, often due to their hands
          | being tied.
         | 
          | GenAI used to cheat is a great detriment to education, but a
          | student using genAI to learn can benefit greatly, as long as
          | they have matured enough in their education process to have
          | the critical thinking to handle mishaps by the AI and to
          | properly differentiate between when they are learning and
          | when they are having the AI do the work for them (I don't
          | say cheat here because some students will accidentally cross
          | the line, and 'cheat' often carries a hint of mens rea). To
          | a sufficiently mature student interested in learning more,
          | genAI is a worthwhile tool.
         | 
          | How do we handle those who use it to cheat? How do we handle
          | students who are too immature in their education journey to
          | use the tool effectively? Are we ready to have a discussion
          | about those learners who only care for the degree, for whom
          | the education to earn it is just a means to an end? How do
          | teachers (and increasingly professors) fight back against
          | the pressure of systems that optimize for granting
          | credentials and simply assume the education will be behind
          | them (Goodhart's Law, anyone)? Those questions don't exist
          | because of genAI, but genAI has greatly increased our need
          | to answer them.
        
         | jplusequalt wrote:
         | >Seems likely there will be changes in higher education to work
         | with gen AI instead of against it, and it could be a positive
         | change for both teachers and students.
         | 
         | Since we're using anecdotes, let me leave one as well--it's
         | been my experience that humans choose the path of least
          | resistance. In the context of education, I saw a large
          | percentage of my peers during K-12 do the bare minimum to
          | get by in their classes, and in college I saw many resorting
          | to Chegg to cheat on their assignments/tests. In both cases
          | I believe the motivation was the same: half-assing
          | work/cheating takes less effort and time.
         | 
          | Now, what happens when you give those same children access
          | to an LLM that can do essentially ALL their work for them?
          | If I'm right, those children will increasingly lean on those
          | LLMs to do as much of their schoolwork/homework as possible,
          | because the alternative means they have less time to scroll
          | on TikTok.
         | 
          | But wait, this isn't an anecdote; it's already happening!
          | Here's an excellent article that details the damage these
          | tools are already causing to our students:
          | https://www.404media.co/teachers-are-not-ok-ai-chatgpt/
         | 
         | >[blank] is an amazing tool ... when used properly
         | 
          | You could say the same thing about a myriad of controversial
          | things that currently exist. But we don't live in a perfect
          | world--we live in a world where money is king, and
          | oftentimes what makes money is in direct conflict with
          | utilitarianism.
        
           | ryandrake wrote:
           | > Now, what happens when you give those same children access
           | to an LLM that can do essentially ALL their work for them? If
           | I'm right, those children will increasingly lean on those
           | LLMs to do as much of their schoolwork/homework as possible,
           | because the alternative means they have less time to scroll
            | on TikTok.
           | 
           | I think schools are going to have to very quickly re-evaluate
           | their reliance on "having done homework" and using essays as
           | evidence that a student has mastered a subject. If an LLM can
           | easily do something, then that thing is no longer measuring
           | anything meaningful.
           | 
           | A school's curriculum should be created assuming LLMs exist
           | and that students will always use them to bypass make-work.
        
             | jplusequalt wrote:
             | >A school's curriculum should be created assuming LLMs
             | exist and that students will always use them to bypass
             | make-work
             | 
             | Okay, how do they go about this?
             | 
              | Schools are already understaffed as it is; how are
              | teachers suddenly going to have time to revamp the
              | entire educational blueprint? Where is the funding for
              | this revolution in education going to come from when
              | we've just slashed the Education fund?
        
               | ryandrake wrote:
               | I'm not an educator, so I honestly have no idea. The
               | world has permanently changed though... we can't put the
               | toothpaste back into the tube. Any student, with a few
               | bucks and a few keystrokes, can instantly solve written
               | homework assignments and generate an any-number-of-words
               | essay about any topic. _Something_ needs to change in the
               | education process, but who knows what it will end up
               | looking like?
        
               | usefulcat wrote:
               | I would think that at least part of the solution would
               | have to involve having students do more work at school
               | instead of as homework.
        
               | jplusequalt wrote:
               | Okay, and how do you make room for that when there's
               | barely enough time to teach the curriculum as is?
        
         | dowager_dan99 wrote:
         | >> an amazing tool to support learning, when used properly.
         | 
          | How can kids, think K-12, who don't even know how to "use"
          | the internet properly, or even their phones, learn how to
          | learn with AI? The same way social media and mobile apps
          | reduced the internet to easy, mindless clicking, LLMs make
          | school a mechanical task. It feels like your argument is
          | similar to LLMs helping experienced, senior developers code
          | more effectively while eliminating many chances to grow the
          | skills needed to join that group. It sounds like you already
          | know how to learn, and use AI to enhance that. My 12-yr-old
          | is not there yet and may never get there.
        
           | rightbyte wrote:
           | > My 12-yr-old is not there yet and may never get there.
           | 
            | Wouldn't classroom exams enforce that, though? Like,
            | imagine LLMs as an older sibling or parent who would help
            | pupils cheat on essays.
        
           | lonelyasacloud wrote:
           | >> how can kids, think K-12, who don't even know how to "use"
           | the internet properly - or even their phones - learn how to
           | learn with AI?
           | 
            | For every person/child who just wants the answer, there
            | will be at least some who want to know why. And these
            | endlessly patient machines are very good at feeding that
            | curiosity.
        
             | jplusequalt wrote:
             | >For every person/child that just wants the answer there
             | will be at least some that will want to know why
             | 
              | You're correct, but let's be honest here: the majority
              | will use it as a means to get their homework over and
              | done with so they can return to TikTok. Is that the
              | society we want to cultivate?
             | 
             | >And these endlessly patient machines are very good at
             | feeding that curiosity
             | 
              | They're also very good at feeding you factually
              | incorrect information. In comparison, a textbook was
              | crafted by experts in their field, and is often
              | fact-checked by many more experts before it is
              | published.
        
       | sovietmudkipz wrote:
       | Minor off-topic quibble about streams: I've been learning about
       | network programming for realtime multiplayer games, specifically
       | about input and output streams. I just want to voice that the
       | names are a bit confusing due to the perspective I adopt when I
       | think about them.
       | 
       | Input stream = output from the perspective of the consumer.
       | Things come out of this stream that I can programmatically react
       | to. Output stream = input from the perspective of the producer.
       | This is a stream you put stuff into.
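        | 
        | To make the two perspectives concrete, here's a minimal Java
        | sketch (purely illustrative, not from the article):
        | 
        |   import java.io.*;
        |   import java.nio.charset.StandardCharsets;
        | 
        |   public class StreamPerspective {
        |       public static void main(String[] args) throws IOException {
        |           // "Input" stream: named for data flowing into the
        |           // program, yet from my (consumer) side things come
        |           // OUT of it when I read.
        |           InputStream in = new ByteArrayInputStream(
        |               "hello".getBytes(StandardCharsets.UTF_8));
        |           int b;
        |           while ((b = in.read()) != -1) {
        |               System.out.print((char) b);
        |           }
        | 
        |           // "Output" stream: named for data flowing out of the
        |           // program, yet from the producer side this is the
        |           // thing you put stuff INTO when you write.
        |           OutputStream out = new ByteArrayOutputStream();
        |           out.write("world".getBytes(StandardCharsets.UTF_8));
        |       }
        |   }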
       | 
       | ...so when this article starts "My input stream is full of it..."
       | the author is saying they're seeing output of fear and angst in
       | their feeds.
       | 
       | Am I alone in thinking this is a bit unintuitive?
        
         | nemomarx wrote:
         | I think an input stream is input from the perspective of the
         | consumer? Like it's things you are consuming or taking as
         | inputs. Output is things you emit.
         | 
          | Your input is of course someone else's output, and vice
          | versa, but you want to keep your description and thoughts to
          | one perspective, and in a first-person blog that's clearly
          | the author's POV, right?
        
       | absurdo wrote:
       | Poor HN.
       | 
       | Is there a glimpse of the next hype train we can prepare to board
       | once AI gets dulled down? This has basically made the site
       | unusable.
        
         | acedTrex wrote:
         | It has made large parts of the internet and frankly previously
         | solid tools and products unusable.
         | 
          | Just look at the GitHub product being transformed into
          | absolute slop central; it's wild. GitHub Universe was
          | exclusively focused on useless LLM additions.
        
           | gh0stcat wrote:
            | I'm interested to see what the landscape of public code
            | will look like in the next few years. With sites like
            | StackOverflow dropping off, discussions moving to Discord,
            | and code generation flooding GitHub, writing your own
            | high-quality code in the open might become very valuable.
        
             | acedTrex wrote:
             | I am very bearish on that idea to be honest, I think the
             | field will stagnate.
        
               | rightbyte wrote:
                | Giving away the secret sauce for free is not the way
                | of the new gilded era.
        
         | yoz-y wrote:
          | At the moment 6 out of 30 front-page articles are about AI.
          | That's honestly quite okay.
        
           | lagniappe wrote:
           | I use something called the Rust Index, where I compare a term
           | or topic to the number of posts with "written in Rust" in the
           | title.
        
             | absurdo wrote:
              | C-can we get an open-source version of this?
             | 
             | Is it written in Rust?
        
             | steveklabnik wrote:
                | HN old-timers would call this the Erlang Index.
        
               | lagniappe wrote:
               | I was just thinking about you.
        
         | ManlyBread wrote:
            | My sentiments exactly. Lately browsing HN feels like a
            | sales pitch for LLMs, complete with the same snark about
            | "luddites" and promises of future glory that I remember
            | from back when NFTs were the hot new thing in tech. Two
            | more weeks, I guess.
        
           | Kiro wrote:
            | NFTs had zero utility, but even the most anti-AI posts are
            | now "OK, AI can be useful, but what are the costs?" It's
            | clearly something different.
        
           | whynotminot wrote:
            | Really? I feel like Hacker News is so anti-AI that I go to
            | other places for the latest. Anything posted here gets
            | destroyed by cranky programmers desperately hoping this is
            | just a fad.
        
           | tptacek wrote:
           | I share this complaint, for what it's worth.
        
         | layer8 wrote:
         | Anti-aging is an evergreen.
        
       | piker wrote:
       | > I really don't think there's a coherent pro-genAI case to be
       | made in the education context
       | 
       | I use ChatGPT as an RNG of math problems to work through with my
       | kid sometimes.
        
         | Herring wrote:
         | I used it to generate SQL questions set in real-world
         | scenarios. I needed to pick up joins intuitively, and the
         | websites I could find were pretty dull.
        
       | lowsong wrote:
       | > at the moment I'm mostly in tune with Thomas Ptacek's My AI
       | Skeptic Friends Are All Nuts. It's long and (fortunately) well-
       | written and I (mostly) find it hard to disagree with.
       | 
        | Ptacek has spent the past week getting dunked on in public for
        | that article. I don't think aligning with it lends you a lot
        | of credibility.
       | 
       | > If you're interested in that thinking, here's a sample; a slide
       | deck by a Keith Riegert for the book-publishing business which,
       | granted, is a bit stagnant and a whole lot overconcentrated these
       | days. I suspect scrolling through it will produce a strong
       | emotional reaction for quite a few readers here. It's also useful
       | in that it talks specifically about costs.
       | 
       | You're not wrong here. I read the deck and the word that comes to
       | mind is "disgusting". Then again, the morally bankrupt have
       | always done horrible things to make a quick buck -- AI is no
       | different.
        
         | icedchai wrote:
         | Getting "dunked" only means it's controversial, not necessarily
         | wrong. Developers who don't embrace AI tools are going to get
         | left behind.
        
           | bgwalter wrote:
           | Sure, tptacek will outprogram all of us. With his two GitHub
           | repositories, one of which is a POC.
        
             | icedchai wrote:
              | Have you tried any of the tools, like Cursor or Zed?
              | They increase productivity _if you use them correctly._
              | If you give them quality inputs like well-written,
              | spec-like prompts, instruct them to work in phases, and
              | provide feedback on testing, the results can be very,
              | very good. Unsurprisingly, this is similar to what you
              | need to give a human to get positive results.
        
           | kiitos wrote:
            | Then maybe replace "getting dunked on" with "getting
            | ratio'd" -- the underlying point is the same: the post was
            | a bad take.
        
             | icedchai wrote:
              | What was bad about it? Everything he wrote sounded very
              | pragmatic to me.
        
             | tptacek wrote:
             | To be fair, you had the same response to Kenton Varda's
             | post about using Claude Code to build an OAuth component
             | for Cloudflare, to the point of calling his work just a
             | tiny step away from "vibe coding".
        
               | kiitos wrote:
                | I called that project one step away from vibe coding,
                | which I stand behind -- 'tiny' is your editorializing.
                | But his thing wasn't as summarily dunked on, or
                | ratio'd, or whatever you want to call it, as your
                | thing was, I don't think! ;)
        
               | tptacek wrote:
               | I don't feel like I got "ratio'd" at all? I'd say the
               | response broke down roughly 50/50, as I expected it to. I
               | got "dunked on" here yesterday for suggesting that
               | userland TCP/IP stacks were a good idea; I'm not all that
               | sensitive to "dunking".
        
           | lowsong wrote:
           | > Getting "dunked" only means it's controversial, not
           | necessarily wrong.
           | 
            | It undermines the author's position of being "moderate" if
            | they align with perhaps the most divisive and aggressively
            | written pro-AI puff piece doing the rounds.
           | 
           | > Developers who don't embrace AI tools are going to get left
           | behind.
           | 
           | I'm not sure how to respond to this. I am doubtful a comment
           | on Hacker News will change your mind, but I'd ask you to
           | think about two questions.
           | 
            | If AI is going to be as revolutionary in our industry as
            | other changes of the past, like web or mobile, then how
            | would a similar statement sound about those? Is saying
            | "Developers who don't embrace mobile development are going
            | to get left behind" a sensible statement? I don't think
            | so, even with how huge mobile has been. Same with other
            | big shifts. "Developers who don't embrace microservice
            | architecture are going to get left behind"? Maybe more
            | comparable, but equally silly. So why would it be
            | different with AI? Do you think LLM tools are more
            | impactful than any other change in history?
           | 
            | Second, if AI truly is as groundbreakingly revolutionary
            | as you suggest, what happens to us? Maybe you'll call me a
            | luddite, raging against the loss of jobs when confronted
            | with automated looms, but you'll have to forgive me for
            | not welcoming my own destruction with open arms.
        
             | icedchai wrote:
              | I understand your skepticism. I think, in 20 years, when
              | we look back, we'll see this time as the beginning of a
              | fundamental paradigm shift in software development,
              | similar in magnitude to the move from desktop to web
              | development in the '90s. If I had told you, in 1996,
              | that "developers who don't embrace web development will
              | be left behind", it would have been an accurate
              | statement.
        
             | cloverich wrote:
             | You have to compare it at the right level. A _developer_
             | who did not embrace mobile is fine, because the market
             | _grew_ as a result of mobile. For developers, there were
             | strictly more opportunities to branch out and find work.
              | For _companies_, however, yes: many that failed to
              | embrace mobile were absolutely hard-passed (or lost
              | substantial market share) compared with those that did,
              | just like those that failed to embrace the internet were
              | hard-passed before that.
             | 
              | A more apt comparison might be the arrival of IDEs and
              | quality source control. Do you think developers (outside
              | of niche cases) working out of text editors and rsyncing
              | code to production are able to find jobs as easily as
              | those who are well versed in, e.g., modern language
              | tooling + GitHub in a team environment? I've directly
              | seen many such developers turned down by screening and
              | interviews; I've seen companies shed talent when they
              | refused to embrace git while clinging to SVN and slow
              | deployment processes; said talent would go on to join
              | companies that later IPO'd in the same space for a
              | billion+ while their former colleagues were laid off. To
              | me it feels quite similar to those moments.
        
       | stevage wrote:
       | I guess we're all trying to figure out where we sit along the
       | continuum from anti-AI Luddite to all-in.
       | 
       | My main issue with vibe coding etc is I simply don't enjoy it.
       | Having a conversation with a computer to generate code that I
       | don't entirely understand and then have to try to review is just
       | not fun. It doesn't give me any of the same kind of intellectual
       | satisfaction that I get out of actually writing code.
       | 
       | I'm happy to use Copilot to auto-complete, and ask a few
       | questions of ChatGPT to solve a pointy TypeScript issue or debug
       | something, but stepping back and letting Claude or something
       | write whole modules for me just feels sloppy and unpleasant.
        
         | Garlef wrote:
         | > Having a conversation with a computer to generate code that I
         | don't entirely understand and then have to try to review is
         | just not fun.
         | 
          | Same for me. But maybe that's ultimately a UX issue? And
          | maybe things will straighten out once we figure out how to
          | REALLY do AI-assisted software development.
          | 
          | As an analogy: most people wouldn't want to dig through
          | machine code/compiler output. At least not without proper
          | tooling.
         | 
         | So: Maybe once we have good tools to understand the output it
         | might be fun again.
         | 
         | (I guess this would include advances in
         | structuring/architecting the output)
        
           | bitwize wrote:
           | I think that AI assistance in coding will become enjoyable
           | for me once the technology exists for AI to translate my
            | brainwaves into text. Then I could _think_ my code into
            | the computer, greatly speeding up the OODA loop of
            | programming.
           | 
           | As it is, giving high-level directives to an LLM and
           | debugging the output seems like a waste of my time and a
           | hindrance to my learning process. But that's how professional
            | coding will be done in the near future. 100% human-written
            | code will become like hand-writing a business letter in
           | cursive: something people used to be taught in school, but no
           | one actually does in the real world because it's too time-
           | consuming.
           | 
           | Ultimately, the business world only cares about productivity
           | and what the stopwatch says is faster, not whether you enjoy
           | or learn from the process.
        
           | cratermoon wrote:
           | Tim doesn't address this in his essay, so I'm going to harp
           | on it: "AI will soon be able to...". That phrase is far too
           | load-bearing. The part of AI hype that says, "sure, it's
           | kinda janky now, but this is just the beginning" has been
           | repeated for 3 years now, and everything has been just around
           | the corner the entire time. It's the first step fallacy,
           | saying that if we can build a really tall ladder now, surely
           | we'll soon be able to build a ladder tall enough to reach the
           | moon.
           | 
           | The reality is that we've seen incremental and diminishing
           | returns, and the promises haven't been met.
        
             | tptacek wrote:
             | _Diminishing_ returns? Am I reading right that you believe
             | the last 6 months has been marked by a _decrease_ in the
             | capability of these systems?
        
               | cratermoon wrote:
               | That's not what diminishing returns means.
        
               | tptacek wrote:
                | That's true, but it's the nearest bit of evidence at
                | hand for how the "returns" could be "diminishing". I'm
                | fine if someone wants to provide any _other_ coherent
                | claim as to how we're in a "diminishing returns" state
                | with coding LLMs right now.
        
               | cratermoon wrote:
               | https://techcrunch.com/2024/11/20/ai-scaling-laws-are-
               | showin...
        
               | tptacek wrote:
                | What's the implication of this story for someone who
                | started writing code with LLMs 6 months ago and is
                | still doing so today? How has their experience
                | changed? Have the returns on that activity diminished?
        
           | username223 wrote:
            | > As an analogy: most people wouldn't want to dig through
            | machine code/compiler output. At least not without proper
            | tooling.
           | 
           | My analogy is GUI builders from the late 90s that let you
           | drag elements around, then generated a pile of code. They
           | worked sometimes, but God help you if you wanted to do
           | something the builder couldn't do, and had to edit the
           | generated code.
           | 
           | Looking at compiler output is actually more pleasant. You
           | profile your code, find the hot spots, and see that something
           | isn't getting inlined, vectorized, etc. At that point you can
           | either convince the compiler to do what you want or rewrite
           | it by hand, and the task is self-contained.
        
           | layer8 wrote:
           | The compiler analogy doesn't quite fit, because the essential
           | difference is that source code is (mostly) deterministic and
           | thus can be reasoned about (you can largely predict in detail
           | what behavior code will exhibit even before writing it),
           | which isn't the case for LLM instructions. That's a major
           | factor why many developers don't like AI coding, because
           | every prompt becomes a non-reproducible, literally un-
           | reasonable experiment.
        
             | steveklabnik wrote:
             | I think the "largely" in there is interesting and load-
             | bearing: a lot of people find compiler output quite
             | surprising!
             | 
             | But that doesn't mean that it's not a gradient, and LLM
             | output may be meaningfully harder to reason about than
             | compiler output, and that may matter.
        
               | layer8 wrote:
                | Assembly output may sometimes be surprising, but it
                | maintains the language semantics. The surprise comes
                | either from misunderstanding the language semantics or
                | from performance aspects. Nevertheless, if you
                | understand the language semantics correctly, the
                | program behavior resulting from the output is
                | deterministic and predictable. This is not true for
                | LLMs.
        
               | steveklabnik wrote:
               | I don't disagree on a factual level, I am just describing
               | some people's subjective experiences: some language
               | semantics can be very subtle, and miscompilation bugs are
               | real. Determining if it is just an aggressive
               | optimization or a real codegen bug can be difficult
               | sometimes, that's all.
        
         | pandler wrote:
         | In addition to not enjoying it, I also don't learn anything,
         | and I think that makes it difficult to sustain anything in the
         | middle of the spectrum between "I won't even look at the code;
         | vibes only" and advanced autocomplete.
         | 
            | My experience has been that it's difficult to mostly vibe
            | with an agent but still be an active participant in the
            | codebase. That feels especially true when I'm using tools,
            | frameworks, etc. that I'm not already familiar with. The
            | vibing part of the process doesn't provide me with any
            | deeper understanding or experience with which to help
            | guide or troubleshoot. Same thing for maintaining existing
            | skills.
        
           | daxfohl wrote:
           | It's like trying to learn math by reading vs by doing. If all
           | you're doing is reading, it robs you of the depth of
           | understanding you'd gain by solving things yourself. Going
           | down wrong paths, backtracking, finally having that aha
           | moment where things click, is the only way to truly
           | understand something.
           | 
            | Now, for all the executives who are trying to force their
            | engineering teams to use AI for everything, this is the
            | result. Your engineering staff becomes equivalent to a
            | mathematician who has never actually done a math problem,
            | just read a bunch of books and trusted what was there. Or
            | a math tutor for your kid who "teaches" by doing your
            | kid's homework for them. When things break and the shit
            | hits the fan, is that the engineering department you want
            | to have?
        
             | zdragnar wrote:
             | I'm fairly certain that I lost a job opportunity because
             | the manager interviewing me kept asking me variations of
             | how I use AI when I code.
             | 
             | Unless I'm stuck while experimenting with a new language or
             | finding something in a library's documentation, I don't use
             | AI at all. I just don't feel the need for it in my primary
             | skill set because I've been doing it so long that it would
             | take me longer to get AI to an acceptable answer than doing
             | it myself.
             | 
              | The idea seemed rather offensive to him, and I'm quite
              | glad I didn't go to work there, or anywhere else where
              | using AI is an expectation rather than an option.
             | 
              | I definitely don't see a team that relies on it heavily
              | having fun in the long run. Everyone has time for new
              | features, but nobody wants to dedicate time to rewriting
              | old ones that are an unholy mess of bad assumptions and
              | poorly understood code.
        
               | bluefirebrand wrote:
               | My company recently issued an "Use AI in your workflow or
               | else" mandate and it has absolutely destroyed my
               | motivation to work
               | 
                | Even though there are still private whispers of "just
                | keep doing what you're doing, no one is going to be
                | fired for not using AI", just the existence of the
                | top-down mandate has made me want to give up and leave.
               | 
               | My fear is that this is every company right now, and I'm
               | basically no longer a fit for this industry at all
               | 
               | Edit: I'm a long way from retirement unfortunately so I'm
               | really stuck. Not sure what my path forward is. Seems
               | like a waste to turn away from my career that I have
               | years of experience doing, but I struggle like crazy to
               | use AI tools. I can't get into any kind of flow with
               | them. I'm constantly frustrated by how aggressively they
               | try to jump in front of my thought process. I feel like
               | my job changed from "builder" to "reviewer" overnight and
               | reviewing is one of the least enjoyable parts of the job
               | for me
               | 
                | I remember an anecdote about Ian McKellen crying on a
                | green-screen set when filming The Hobbit, because
                | talking to a tennis ball on a stick wasn't what he
                | loved about acting.
               | 
               | I feel similarly with AI coding I think
        
               | daxfohl wrote:
               | The other side of me thinks that maybe the eventual
               | landing point of all this is a merger of engineering and
               | PM. A sizeable chunk of engineering work isn't really
               | anything new. CRUD, jobs, events, caching,
               | synchronization, optimizing for latency, cost, staleness,
               | redundancy. Sometimes it amazes me that we're still
               | building so many ad-hoc ways of doing the same things.
               | 
                | Like, say there's a catalog of the 1000 most common
                | enterprise (or embedded, or UI, or whatever) design
                | patterns, and AI is good at taking your existing
                | system and your new requirements, identifying the best
                | couple of design patterns that fit, giving you a chart
                | with the various tradeoffs, and, once you select one,
                | adding that pattern to your existing system with the
                | details that match your requirements.
               | 
               | Maybe that'd be cool? The system/AI would then be able to
               | represent the full codebase as an integration of various
               | patterns, and an engineer, or even a technical PM, could
               | understand it without needing to dive into the codebase
               | itself. And hopefully since everything is managed by a
               | single AI, the patterns are fairly consistent across the
               | entire system, and not an amalgamation of hundreds of
               | different individuals' different opinions and ideals.
               | 
               | Another nice thing would be that huge migrations could be
                | done mostly atomically. Currently, adding support in
                | your enterprise for something like dynamic
                | authorization policies takes years of getting every
                | team to update their service's code to handle the new
                | authz policy in their domain, and so the authz team
                | has to support the old way and the new way, and a way
                | to sync between them, roughly forever. With AI, maybe
                | all this
               | could just be done in a single shot, or over the course
               | of a week, with automated deployments, backfill, testing,
               | and cleanup of the old system. And so the authz team
               | doesn't have to deal with all the "bugging other teams"
               | or anything else, and the other teams also don't have to
               | deal with getting bugged or trying to fit the migration
               | into their schedules. To them it's an opaque thing that
               | just happened, no different from a library version
               | update.
               | 
               | With that, there's fewer things in flight at any one
               | time, so it allows engineers and PMs to focus on their
               | one deliverable without worrying how it's affecting
               | everyone else's schedules etc. Greater speed begets
               | greater serializability begets better architecture begets
               | greater speed.
               | 
               | So, IDK, maybe the end game of AI will make the job more
               | interesting rather than less. We'll see.
        
               | ryandrake wrote:
                | I just don't understand your company, or the company
                | the OP interviewed for. This is like mandating that
                | everyone use syntax highlighting or autocomplete, or
                | sit in a special type of chair, or use a standing
                | desk, and making that a condition of being hired. Why
                | are companies so insistent that their developers "use
                | AI somehow" in their workflows?
        
               | bluefirebrand wrote:
               | Shareholders are salivating at the prospect of doing
               | either the same amount of work with fewer salaries or
               | more work with the same salaries
               | 
               | There is nothing a VC loves more than the idea of
               | extracting more value from people without investing more
               | into them
        
               | daxfohl wrote:
               | FOMO. They don't want to risk being the one company left
               | behind because their engineers haven't learned to use AI
               | as efficiently as others.
        
               | ponector wrote:
                | There are lots of ways to use AI coding tools.
                | 
                | Cursor is great for fuzzy search across a legacy
                | project. Requests like "how do you do X here" can help
                | a lot while fixing an old bug.
                | 
                | Or adding documentation: commit descriptions generated
                | from the diff, or Javadoc added to your methods.
                | 
                | For whatever step in your workflow that consists of
                | rewriting existing text rather than creating anything
                | new, use Cursor or a similar AI tool.
        
         | Kiro wrote:
         | I'm the opposite. I haven't had this much fun programming in
         | years. I can quickly iterate, focus on the creative parts and
         | it really helps with procrastination.
        
         | timr wrote:
         | > My main issue with vibe coding etc is I simply don't enjoy
         | it. Having a conversation with a computer to generate code that
         | I don't entirely understand and then have to try to review is
         | just not fun. It doesn't give me any of the same kind of
         | intellectual satisfaction that I get out of actually writing
         | code.
         | 
          | I am the opposite. After a few decades of writing code, it
          | wasn't "fun" to write yet another file parser or hook widget
          | A to API B -- which is >99% of coding today. I moved into
          | product management because, while I still enjoy _building_
          | things, it's much more satisfying/challenging to focus on
          | the higher-level issues of making a product that solves a
          | need. My professional life became writing specs and
          | reviewing code. It's therefore actually kind of fun to work
          | with AI, because I can think technically, but I don't have
          | to do the tedious parts that make me want to descend into a
          | coma.
         | 
          | I couldn't care less whether I'm writing a spec for a robot
          | or for a junior front-end engineer. They're both going to
          | screw up, and I'm going to have to spend time explaining the
          | problem again and again... at least the robot never
          | complains and _tries really hard_ to do exactly what I ask,
          | instead of slacking off, doing something more intellectually
          | appealing, getting mired in technical complexity, etc.
        
           | kiitos wrote:
           | > After a few decades of writing code, it wasn't "fun" to
           | write yet another file parser or hook widget A to API B --
           | which is >99% of coding today.
           | 
           | If this is your experience of programming, then I feel for
           | you, my dude, because that sucks. But it is definitely not my
           | experience of programming. And so I absolutely reject your
           | claim that this experience represents "99% of programming" --
           | that stuff is rote and annoying and automate-able and all
           | that, no argument, but it's not what any senior-level
           | engineer worth their salt is spending any of their time on!
        
             | NewsaHackO wrote:
              | People who don't do 1) API connecting, 2) web design
              | using popular frameworks, or 3) requirements wrangling
              | with business analysts have jobs that will not be taken
              | over by AI anytime soon. I think 99% of jobs is pushing
              | it, but I definitely think the vast majority of IT jobs
              | fit into the above categories. Another benchmark would
              | be how much of your job is closer to research work.
        
           | dlisboa wrote:
            | You touched on the significant thing that separates most
            | of the AI code discourse into the two extremes: some
            | people just don't like programming and see it as a simple
            | means to an end, while others love the process of actually
            | crafting code.
           | 
           | Similar to the differences between an art collector and a
           | painter. One wants the ends, the other desires the means.
        
             | morkalork wrote:
              | I think I could be happy switching between the two
              | modes. There are tasks that are completely repetitive
              | slop that I've fully offloaded to AI with great
              | satisfaction. There are others I enjoy, where I prefer
              | to use AI for consultation only. Regardless, few people
              | liked doing code review with their peers before, and
              | somehow we've increased one of the least fun parts of
              | the job.
        
             | timr wrote:
             | That's not fair, and not what I am saying at all.
             | 
             | I enjoy writing code. I just don't enjoy writing code _that
             | I 've written a thousand times before_. It's like saying
             | that Picasso should have enjoyed painting houses for a
             | living. They're both painting, right?
             | 
             | (to be painfully clear, I'm not comparing myself to
             | Picasso; I'm extending on your metaphor.)
        
               | bluefirebrand wrote:
                | You would rather debug the low-quality LLM code that
                | you know you could write better, a thousand times?
        
               | timr wrote:
                | Well, I don't write bugs in _my_ code, of course, but
                | let's just say that you were the type of person who
                | does: having a bot that writes code 100x faster than
                | you, that also occasionally makes mistakes (but can
                | also _fix_ them!), is still a huge win.
        
               | bluefirebrand wrote:
               | > occasionally makes mistakes
               | 
               | Well. Maybe we have to agree to disagree but I think it
               | makes mistakes far more frequently than I do
               | 
                | Even if it makes mistakes exactly as often as I do,
                | making 100x as many mistakes in the same amount of
                | time seems like it would be absolutely impossible to
                | keep up with.
        
             | d0100 wrote:
              | I love programming; I just don't like CRUDing or
              | API'ing...
              | 
              | I also love programming behaviours and interactions,
              | just not creating endless C# classes or looking up how
              | to implement 3D math.
              | 
              | After a long day at the CRUD factory, being able to vibe
              | code as a hobby is fun. Not super productive, but it's
              | better than the alternative (scrolling reels or playing
              | games).
        
             | tptacek wrote:
             | I love coding, do it for fun outside of my job, and find
             | coding with an LLM very enjoyable.
        
               | icedchai wrote:
               | I've been experimenting with LLM coding for the past few
               | months on some personal projects. I find it makes coding
               | those projects more enjoyable since it eliminates much of
               | the tedium that was causing me to delay the project in
               | the first place.
        
               | timr wrote:
                | Exactly the same for me... now whenever I hit
                | something like _"oh god, I want to change the purpose
                | of this function/variable, but I need to go through
                | 500 files, and see where it's used, then make local
                | changes, then re-test everything..."_, I can just tell
                | the bot to do it.
               | 
               | I know a lot of folks would say that's what search &
               | replace is for, but it's far easier to ask the bot to do
               | it, and then check the work.
        
               | cesarb wrote:
               | > "oh god, I want to change the name of this
               | function/variable, but I need to go through 500 files,
               | and see where it's used, then make local changes, then
               | re-test everything..."
               | 
               | Forgive me for being dense, but isn't it just clicking
               | the "rename" button on your IDE, and letting it propagate
               | the change to all definitions and uses? This already
               | existed and worked fine well before LLMs were invented.
        
               | tptacek wrote:
               | Yes, that particular example modern editors do just fine.
               | Now imagine having that for almost any rote
               | transformation you wanted regardless of complexity (so
               | long as the change was rote and describable).
        
               | timr wrote:
               | Yeah, sorry...I re-read the comment and realized I wasn't
               | being clear. It's bigger than just search/replace.
               | Already updated what I wrote.
               | 
               | The far more common situation is that I'm refactoring
               | something, and I realize that I want to make some change
               | to the semantics or signature of a method (say, the
                | return value), and now I can't just use search w/o
                | _also_ validating the context of every change. That's
                | annoying, and today's bots do a great job of just
                | handling it.
               | 
               | Another one, I just did a second ago: "I think this
               | method X is now redundant, but there's a minor difference
               | between it, and method Y. Can I remove it?"
               | 
               | Bot went out, did the obvious scan for all references to
               | X, but then _evaluated each call context to see if I
               | could use Y instead_.
               | 
                | (But even in the case of search & replace, I've had my
                | butt saved a few times by the agent when it caught
                | something I wasn't considering...)
        
               | tptacek wrote:
               | I really like working with LLMs but one thing I've
               | noticed is that the obvious transformation of "extract
               | this functionality into a helper function and then apply
               | that throughout the codebase" is one I really actually
               | enjoy doing myself; replacing 15 lines of boilerplate-y
               | code in a couple dozen places with a single helper call
               | is _really_ satisfying; it's like my ASMR.
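                | 
                | (A hypothetical Java sketch of that kind of
                | extraction, with invented names, purely illustrative:
                | 
                |   import java.util.HashMap;
                |   import java.util.Map;
                | 
                |   class RequestHelpers {
                |       // These lines used to be pasted at a couple
                |       // dozen call sites, varying only in the token
                |       // argument.
                |       static Map<String, String> jsonHeaders(
                |               String token) {
                |           Map<String, String> h = new HashMap<>();
                |           h.put("Authorization", "Bearer " + token);
                |           h.put("Accept", "application/json");
                |           return h;
                |       }
                |   }
                | 
                | ...and each call site collapses to a single
                | jsonHeaders(token) call.)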
        
               | timr wrote:
                | Hah, well, to each their own. That's exactly the kind
                | of thing that makes me want to go outside and take a
                | walk.
                | 
                | Regardless of what your definition of horrible and
                | boring happens to be, just being able to tell the bot
                | to do a horrible, boring thing, and having it done
                | with junior-level intelligence, is so
                | experience-enhancing that it makes coding more fun.
        
               | tptacek wrote:
               | I find elimination of inertia and preservation of
               | momentum to be the biggest wins; it's just that my
               | momentum isn't depleted by extracting something out into
               | a helper.
               | 
               | People should try this kind of coding a couple times just
               | because it's an interesting exercise in figuring out what
               | parts of coding are important to you.
        
           | icedchai wrote:
           | Same. After doing this for decades, so much programming work
           | is tedious. Maybe 5% to 20% of the work is interesting. If I
           | can get a good chunk of that other 80%+ built out quickly
           | with a reasonable level of quality, then we're good.
        
           | prmph wrote:
           | After like the 20th time explaining the same (simple) problem
           | to the AI that it is unable to fix, you just might change
           | your mind [1]. At that point you just have to jump in and get
           | dirty.
           | 
            | Do this a few times and you start to realize it is kind of
            | worse than just being in the driver's seat for the coding
            | right from the start. For one thing, when you jump in, you
            | are working with code that is probably architected quite
            | differently from the way you normally do it, and you have
            | not developed the deep mental model that is needed to work
            | with the code effectively.
           | 
            | Not to say the LLMs are not useful, especially in agent mode.
            | But the temptation is always to trust and task them with more
            | than they can handle. Maybe we need an agent that limits the
            | scope of what you can ask it to do, to keep you involved at
            | the necessary level.
           | 
            | People keep thinking we are at the level where we can forget
            | about the nitty-gritty of the code and move up the
            | abstraction ladder, when this is nothing close to the truth.
           | 
           | [1] Source: me last week trying really hard to work like you
           | are talking about with Claude Code.
        
             | timr wrote:
             | > After like the 20th time explaining the same (simple)
             | problem to the AI that it is unable to fix, you just might
             | change your mind [1]. At that point you just have to jump
             | in and get dirty.
             | 
             | You're assuming that I haven't. Yes, sometimes you have to
             | do it yourself, and the people who are claiming that you
             | can replace experienced engineers with these are wrong (at
             | least for now, and for non-trivial problems).
             | 
              | > Do this a few times and you start to realize it is kind
              | of worse than just being in the driver's seat for the
              | coding right from the start. For one thing, when you jump
              | in, you are working with code that is probably architected
              | quite differently from the way you normally do it, and you
              | have not developed the deep mental model that is needed to
              | work with the code effectively.
             | 
             | Disagree. There's not a single piece of code I've written
             | using these that I haven't carefully curated myself.
             | Usually the result (after rounds of prompting) is smaller,
             | significantly better, and closer to my original intended
             | design than what I got out of the machine on first prompt.
             | 
             | I still find them to be a significant net enhancement to my
             | productivity. For me, it's very much like working with a
             | tireless junior engineer who is available at all hours,
             | willing to work through piles of thankless drudgery without
             | complaint, and also codes about 100x faster than I do.
             | 
             | But again, I know what I'm doing. For an inexperienced
             | coder, I'm more inclined to agree with your comment. The
              | first drafts that these things emit are often pretty bad.
        
         | doctoboggan wrote:
         | > and then have to try to review
         | 
         | I think (at least by the original definition[0]) this is not
         | vibe coding. You aren't supposed to be reviewing the code, just
         | execute and pray.
         | 
         | [0]: https://xcancel.com/karpathy/status/1886192184808149383
        
         | potatolicious wrote:
         | Yeah, I will say now that I've played with the AI coding tools
         | more, it seems like there are two distinct use cases:
         | 
         | 1 - Using coding tools in a context/language/framework you're
         | already familiar with.
         | 
         | This one I have been having a lot of fun with. I am in a good
         | position to review the AI-generated code, and also examine its
         | implementation plan to see if it's reasonable. I am also able
         | to decompose tasks in a way that the AI is better at handling
         | vs. giving it vague instructions that it then does poorly on.
         | 
         | I feel more in control, and it feels like the AI is stripping
         | away drudgery. For example, for a side project I've been using
         | Claude Code with an iOS app, a domain I've spent many years in.
         | It's a treat - it's able to compose a lot of boilerplate and do
         | light integrations that I can easily write myself, but find
         | annoying.
         | 
         | 2 - Using coding tools in a context/language/framework you
         | don't actually know.
         | 
         | I know next to nothing about web frontend frameworks, but for
         | various side projects wanted to stand up some simple web
         | frontends, and this is where AI code tools have been a
         | frustration.
         | 
         | I don't know what exactly I want from the AI, because I don't
         | know these frameworks. I am poorly equipped to review the code
         | that it writes. When it fails (and it fails a lot) I have
         | trouble diagnosing the underlying issues and fixing it myself -
         | so I have to re-prompt the LLM with symptoms, leading to
         | frustrating loops that feel like two cave-dwellers trying to
         | figure out a crashed spaceship.
         | 
          | I've been able to stand up a lot of stuff that I otherwise
          | would never have been able to, but I'm 99% sure the code is
          | utter shit, and I'm not in a position to really quantify or
          | understand the shit in any way.
         | 
         | I suppose if I were properly "vibe coding" I shouldn't care
         | about the fact that the AI produced a katamari ball of code
         | held together by bubble gum. But I _do_ care.
         | 
         | Anyway, for use case #1 I'm a big fan of these tools, but it's
         | really not the "get out of learning your shit" card that it's
         | sometimes hyped up to be.
        
           | saratogacx wrote:
            | For case 2, I've had a lot of luck starting by asking the
            | LLM "I have experience in X, Y, and Z technologies; help me
            | translate this project into those terms, and list anything
            | this code does that doesn't align with typical use of the
            | technologies chosen". This has given me a great "intro"
            | that moves me closer to being able to understand.
            | 
            | Once I've done that and asked a few follow-up questions, I
            | feel much better diving into the generated code.
        
         | rowanseymour wrote:
          | This was my experience until recently... now I'm quite
          | enjoying assigning small PRs to Copilot and working through
          | them via the GitHub PR interface. It's basically like managing
          | a junior programmer, but cheaper and faster. Yes, that's not as
         | much fun as writing code but there isn't time for me to write
         | all the code myself.
        
           | cloverich wrote:
           | Can you elaborate on the "assign PR's" bit?
           | 
           | I use Cursor / ChatGPT extensively and am ready to dip into
           | more of an issue / PR flow but not sure what people are doing
           | here exactly. Specifically for side projects, I tend to think
           | through high level features, then break it down into sub-
           | items much like a PM. But I can easily take it a step further
           | and give each sub issue technical direction, e.g. "Allow font
           | customization: Refactor tailwind font configuration to use
           | CSS variables. Expose those CSS variables via settings
           | module, and add a section to the Preferences UI to let the
           | user pick fonts for Y categories via dropdown; default to X Y
           | Z font for A B C types of text".
           | 
           | Usually I spend a few minutes discussing w/ ChatGPT first,
           | e.g. "What are some typical idioms for font configuration in
           | a typical web / desktop application". Once I get that idea
           | solidified I'd normally start coding, but could just as
           | easily hand this part off for simple-ish stuff and start
            | ironing out the next feature. In the time I'd usually have
            | planned the next 1-2 months of side project work (which
            | happens, say, in 90 minute increments 2x a week), the Agent
            | could knock out maybe half of them. For a project I'm
            | familiar with, I expect I can comfortably review and comment
           | on a PR with much less mental energy than it would take to
           | re-open my code editor for my side project, after an entire
           | day of coding for work + caring for my kids. Personally I'm
           | pretty excited about this.
        
             | rowanseymour wrote:
             | I have not had great experiences interacting directly with
             | LLMs except when asking for a snippet of code that is
              | generic and commonly done. Now with GitHub Copilot (you
              | need Pro Plus, I think) I'm creating an issue, assigning it
              | to Copilot, and then having a back and forth on the PR with
              | Copilot until it's right, exactly as I would with a junior
              | dev. Honestly, it's the first time I've felt like AI could
              | make a noticeable difference to my productivity.
        
             | steveklabnik wrote:
             | I'm not your parent, but Claude at least has the ability to
             | integrate with GitHub such that you can say "@claude please
             | try to fix this bug" on an issue and it'll just go do it.
        
         | gs17 wrote:
         | > My main issue with vibe coding etc is I simply don't enjoy
         | it.
         | 
          | I _almost_ enjoy it. It's kind of nice getting to feel like
          | management for a second. But the moment it hits a bug it can't
          | fix and you have to figure out its horrible mess of code, any
          | enjoyment is gone. It's really nice for "dumb" changes like
          | renumbering things or very basic refactors.
        
           | tptacek wrote:
           | When the agent spins out, why don't you just take the wheel
           | and land the feature yourself? That's what I do. I'm having
           | trouble integrating these two skeptical positions of "LLMs
           | suck all the joy out of actually typing code into an editor"
           | and "LLMs are bad because they sometimes force you to type
           | code into an editor".
        
         | tobr wrote:
         | I tried Cursor again recently. Starting with an empty folder,
         | asking it to use very popular technologies that it surely must
         | know a lot about (Typescript, Vite, Vue, and Tailwind). Should
         | be a home run.
         | 
         | It went south immediately. It was confused about the
         | differences between Tailwind 3 and 4, leading to a broken
         | setup. It wasn't able to diagnose the problem but just got more
         | confused even with patient help from me in guiding it. Worse,
         | it was unable to apply basic file diffs or deletes reliably. In
         | trying to diagnose whether this is a known issue with Cursor,
         | it decided to search for bug reports - great idea, except it
         | tried to search _the codebase_ for it, which, I remind you,
         | only contained code that it had written itself over the past
         | half hour or so.
         | 
         | What am I doing wrong? You read about people hyping up this
         | technology - are they even using it?
         | 
         | EDIT: I want to add that I did not go into this
         | antagonistically. On the contrary, I was excited to have a use
         | case that I thought must be a really good fit.
        
           | windows2020 wrote:
           | My recent experience has been similar.
           | 
           | I'm seeing that the people hyping this up aren't programmers.
           | They believe the reason they can't create software is they
           | don't know the syntax. They whip up a clearly malfunctioning
            | and incomplete app with these new tools and are amazed at
            | what they've created. The deficiencies will sort themselves
           | out soon, they believe. And then programmers won't be needed
           | at all.
        
             | norir wrote:
             | Most people do not have the talent and/or discipline to
             | become good programmers and resent those who do. This alone
             | explains a lot of the current argument.
        
           | gs17 wrote:
           | > It was confused about the differences between Tailwind 3
           | and 4
           | 
            | I have the same issue with Svelte 4 vs 5. Adding some notes
            | to the prompt used for that project helps, sort of.
        
             | tobr wrote:
             | It didn't seem like it ever referred to documentation? So,
             | obviously if it's only going to draw on its "instinctual"
              | knowledge of Tailwind, it's more likely to fall back on a
             | version that's been around for longer, leading to
             | incompatibilities with the version that's actually
             | installed. A human doing the same task would probably have
             | the setup guide on the website at hand if they realized
             | they were feeling confused.
        
               | antifa wrote:
               | It would be nice if you could download some minified-for-
               | LLM doc file that gave the LLM the public interface of
               | the lib(s) you were using.
        
               | steveklabnik wrote:
               | https://context7.com/jj-vcs/jj?tokens=55180 is supposed
               | to be this, I have yet to try it though.
        
           | varjag wrote:
           | I had some success doing two front-end projects. One in 2023
           | using Mixtral 7b local model and one just this month with
           | Codex. I am an experienced programmer (35 years coding, 28
           | professionally). I hate Web design and I never cared to learn
           | JavaScript.
           | 
           | The first project was a simple touch based control panel that
           | communicates via REST/Websocket and runs a background visual
           | effect to prevent the screen burn-in. It took a couple of
           | days to complete. There were often simple coding errors but
           | trivial enough to fix.
           | 
           | The second is a 3D wireframe editor for distributed
           | industrial equipment site installations. I started by just
           | chatting with o3 and got the proverbial 80% within a day. It
           | includes orbital controls, manipulation and highlighting of
           | selected elements, property dialogs. Very soon it became too
           | unwieldy for the laggard OpenAI chat UI so I switched to
           | Codex to complete most of the remaining features.
           | 
           | My way with it is mostly:
           | 
            | - ask for no fancy frameworks: my projects are plain
            | JavaScript, which I don't really know; it makes no sense to
            | pile on React and TypeScript, which I know even less
           | 
           | - explain what I want by defining data structures I believe
           | are the best fit for internal representation
           | 
           | - change and test one thing at a time, implement a test for
           | it
           | 
           | - split modules/refactor when a subsystem gets over a few
           | hundred LOC, so that the reasoning can remain largely
           | localized and hierarchical
           | 
           | - make o3 write an llm-friendly general design document and
           | description of each module. Codex uses it to check the
           | assumptions.
           | 
            | As mentioned elsewhere, the code is mediocre at best, and it
            | feels a bit like seeing a C compiler's output vs. my
            | manually written assembly back in the day. It works tho, and
            | it doesn't look to be terribly inefficient.
        
           | steveklabnik wrote:
           | Tailwind 4 has been causing Claude a lot of problems for me,
           | especially when upgrading projects.
           | 
           | I managed to get it to do one just now, but it struggled
           | pretty hard, and still introduced some mistakes I had to fix.
        
           | cube2222 wrote:
           | Just trying to help explain the issues you've been hitting,
           | not to negate your experience.
           | 
           | First, you might've been using a model like Sonnet 3.7, whose
           | knowledge cutoff doesn't include Tailwind 4.0. The model
           | should know a lot about the tech stack you mentioned, but it
           | might not know the latest major revisions if they were very
           | recent. If that is the case (you used an older model), then
           | you should have better luck with a model like Sonnet 4 / Opus
           | 4 (or by providing the relevant updated docs in the chat).
           | 
            | Second, Cursor is arguably not the top-tier hotness anymore.
            | Since it's flat-rate subscription based, its default mode
            | has to be pretty thrifty with the tokens it uses. I've heard
            | (I don't use Cursor) that Cursor's Max Mode[0] improves on
            | that (there you pay based on tokens used), but I'd recommend
            | just using something like Claude Code[1], ideally with its
            | VS Code or IntelliJ integration.
           | 
            | But in general, new major versions of SDKs or libraries will
            | cause you a worse experience. Stable software fares much
            | better.
           | 
           | Overall, I find AI extremely useful, but it's hard to know
           | which tools and even ways of using these tools are the
           | current state-of-the-art without being immersed into the
           | ecosystem. And those are changing pretty frequently. There's
           | also a ton of over-the-top overhyped marketing of course.
           | 
           | [0]: https://docs.cursor.com/context/max-mode
           | 
           | [1]: https://www.anthropic.com/claude-code
        
           | tomtomistaken wrote:
           | > What am I doing wrong?
           | 
           | Wrong tool..
        
         | 9d wrote:
          | Considering _the actual Vatican_ literally linked AI to _the
          | apocalypse_, and did so in _the most official capacity_ [1],
          | I don't think avoiding AI has to be Luddism.
         | 
         | [1] Antiqua et Nova p. 105, cf. Rev. 13:15
        
           | 9d wrote:
           | Full link and relevant quote:
           | 
           | https://www.vatican.va/roman_curia/congregations/cfaith/docu.
           | ..
           | 
           | > Moreover, AI may prove even more seductive than traditional
           | idols for, unlike idols that "have mouths but do not speak;
           | eyes, but do not see; ears, but do not hear" (Ps. 115:5-6),
           | AI can "speak," or at least gives the illusion of doing so
           | (cf. Rev. 13:15).
           | 
           | It quotes Rev. 13:15 which says (RSVCE):
           | 
           | > and it was allowed to give breath to the image of the beast
           | so that the image of the beast should even speak, and to
           | cause those who would not worship the image of the beast to
           | be slain.
        
             | ultimafan wrote:
             | That was a very interesting read, thanks for linking it!
             | 
             | I think the unfortunate reality of human innovation is that
              | too many people consider technological progress always
              | good for progress's sake. Too many people create new
             | tools, tech, etc. without really stopping to take a moment
             | and think or have a discussion on what the absolute worst
             | case applications of their creation will be and how
             | difficult it'd be to curtail that kind of behavior. Instead
             | any potential (before creation) and actual (when it's
             | released) human suffering is hand waved away as growing
             | pains necessary for science to progress. Like those
             | websites that search for people's online profiles based on
             | image inputs sold by their creators as being used to find
             | long lost friends or relatives when everyone really knows
             | it's going to be swamped by people using it to doxx or
             | stalk their victims, or AI photo generation models for
             | "personal use" being used to deep fake nudes to embarrass
             | and put down others. In many such cases the creators sleep
             | easy at night with the justification that it's not THEIR
             | fault people are misusing their platforms, they provided a
             | neutral tool and are absolved of all responsibility. All
             | the while they are making money or raking in clout fed by
             | the real pain of real people.
             | 
             | If everyone took the time to weigh the impact of what
             | they're doing even half as diligently as that above article
             | (doesn't even have to be from a religious perspective) the
             | world would be a lot brighter for it.
        
               | 9d wrote:
               | > Too many people create new tools, tech, etc. without
               | really stopping to take a moment and think or have a
               | discussion on what the absolute worst case applications
               | of their creation will be
               | 
               | "Your scientists were so preoccupied with whether or not
               | they could, they didn't stop to think if they should."
               | -Jeffrey L. Goldblum when ILM showed him an early
               | screening of Jurassic Park.
               | 
               | > In many such cases the creators sleep easy at night
               | with the justification that it's not THEIR fault people
               | are misusing their platforms, they provided a neutral
               | tool and are absolved of all responsibility.
               | 
                | The age-old question of gun control.
        
               | ultimafan wrote:
               | Guns (and really most forms of progress in warfare and
               | violence) undoubtedly fall under a similar conundrum.
               | 
                | Funny to note that at least one inventor who contributed
                | greatly to modern warfare (the creator of the Gatling
                | gun) did seem to reflect on his future impact, but
                | figured it'd go in the opposite direction: that a weapon
                | that could replace a hundred soldiers with one would make
                | wars smaller and less devastating, not more!
        
           | 9d wrote:
            | I emphasize that it's _the Vatican_ because they are the most
            | theologically careful of all. This isn't some church with a
            | superstitious pastor who jumps to conclusions about the
            | rapture at the drop of a hat. This is the Church which is
            | hesitant to say _literally anything_ about the book of
            | Revelation _at all_, which is run by tired men who just want
            | to keep the status quo so they can hopefully hit retirement
            | without any trouble.
        
         | 9d wrote:
         | > It doesn't give me any of the same kind of intellectual
         | satisfaction that I get out of actually writing code.
         | 
         | Writing code is a really fun creative process:
         | 
         | 1. Conceive an exciting and useful idea
         | 
         | 2. Comprehend the idea fully from its top to its bottom
         | 
         | 3. Translate the idea into specific instructions utilizing
         | known mechanics
         | 
         | 4. Find the beautiful middleground between instruction and
         | abstraction
         | 
         | 5. Write lots and lots of code!
         | 
         | 6. Find where your conception was flawed and fix it as
         | necessary.
         | 
         | 7. Repeat steps 2-6 until the thing works just as you dreamed
         | or you give up.
         | 
         | It's maybe the most fun and exciting mixture of art and
         | technology ever.
        
           | 9d wrote:
           | I forgot to say the second part:
           | 
           | Using AI is the same as code-review or being a PM:
           | 
           | 1. Have an ideal abstraction
           | 
           | 2. Reverse engineer an actual abstraction from code
           | 
           | 3. Compare the two and see if they match up
           | 
           | 4. If they don't, ask the author to change or fix it until it
           | does
           | 
           | 5. Repeat steps 2-4 until it does
           | 
            | This is incredibly _not_ fun, because it's _not_ a creative
           | process.
           | 
           | You're essentially just an accountant or calculator at this
           | point.
        
         | xg15 wrote:
         | > _that I don 't entirely understand_
         | 
         | That's the bigger issue in the whole LLM hype that irks me. The
         | tacit assumption that _actually understanding things_ is now
          | obsolete, as long as the LLM delivers results. And if it
          | doesn't, we can always do yet another finetuning or try yet
          | another magic prompt incantation to try and get it back on
          | track. And
         | that this is somehow progress.
         | 
         | It feels like going back to pre-enlightenment times and
         | collecting half-rationalized magic spells instead of having a
          | solid theoretical framework that lets you reason about your
         | systems.
        
           | AnimalMuppet wrote:
           | Well... I'm torn here.
           | 
           | There is a magic in understanding.
           | 
           | There is a different magic in being able to use something
           | that you don't understand. Libraries are an instance of this.
           | (For that matter, so is driving a car.)
           | 
           | The problem with LLMs is that you don't understand, and the
           | stuff that it gives you that you don't understand isn't
           | solid. (Yeah, not all libraries are solid, either. LLMs give
           | you stuff that is less solid than that.) So LLMs give you a
           | taste of the magic, but not much of the substance.
        
         | _aavaa_ wrote:
         | > Luddite
         | 
         | The luddites were not against progress or the technology
         | itself. They were opposed to how it was used, for whose
         | benefit, and for whose loss [0].
         | 
          | The AI-Luddite position isn't anti-AI; it's (among other
          | things) against the mass copyright theft from creators used
          | to train something with the explicit goal of putting them out
          | of a job, without compensation. All while producing an
          | objectively inferior product and passing it off as a higher
          | quality one.
         | 
         | [0]: https://www.hachettebookgroup.com/titles/brian-
         | merchant/bloo...
        
       | jplusequalt wrote:
       | Wholeheartedly agree. I can't help but think that proponents of
       | LLMs are not seriously considering the impact it will have on our
        | ability to communicate with each other, or to reason of our own
        | accord without the assistance of an LLM.
       | 
       | It confounds me how these people would trust the same companies
       | who fueled the decay of social discourse via the internet with
       | the creation of AI models which aim to encroach on every aspect
       | of our lives.
        
         | soulofmischief wrote:
         | Some of us realize this technology was inevitable and are more
         | focused on figuring out how society evolves from here instead
         | of complaining and trying to legislate away math and prevent
         | honest people from using these tools while criminals freely
         | make use of them.
        
           | jplusequalt wrote:
           | >Some of us realize this technology was inevitable
           | 
           | How was any of this inevitable? Point me to which law of
           | physics demanded we reach this state of the universe. These
           | companies actively choose to train these models, and by
           | framing their development as "inevitable" you are helping
            | absolve them of any of the negative shit they have caused or
            | will cause.
           | 
           | >figuring out how society evolves from here instead of
           | complaining and trying to legislate away math
           | 
           | Could you not apply this exact logic to the creation of
           | nuclear weaponry--perhaps the greatest example of tragedy of
           | the commons?
           | 
           | >prevent honest people from using these tools while criminals
           | freely make use of them
           | 
           | What is your argument here? Should we suggest that everyone
           | learn how to money launder to even the playing field against
           | criminals?
        
             | soulofmischief wrote:
             | > Point me to which law of physics demanded we reach this
             | state of the universe
             | 
             |  _Gestures vaguely around at everything_
             | 
             | Intelligence is intelligence, and we are beginning to
             | really get down to the fundamentals of self-organization
             | and how order naturally emerges from chaos.
             | 
             | > Could you not apply this exact logic to the creation of
             | nuclear weaponry--perhaps the greatest example of tragedy
             | of the commons?
             | 
              | Yes, I can. Access to information is one thing: it must be
              | carefully handled, but information wants to be free, and
              | there should be no law determining what one person can say
              | to another, barring NDAs and government classification of
              | national secrets (which doesn't include math and physics).
              | But we absolutely have international treaties to limit
              | nuclear proliferation, and we also have countries who do
              | not participate in these treaties, or violate them, which
              | illustrates my point that criminals will do whatever they
              | want.
             | 
             | > Should we suggest that everyone learn how to money
             | launder to even the playing field against criminals?
             | 
              | I have no interest in entertaining your straw men. You're
              | intelligent enough to understand context.
        
           | collingreen wrote:
           | If only there were more nuances and options between those two
           | extremes! Oh well, back to the anti math legislation pits I
           | guess.
        
             | soulofmischief wrote:
             | There are many nuances to this argument, but I am not
             | trying to write a novel in a hacker news comment. Certain
             | broad strokes absolutely apply, and when you get down to
             | brass tacks it's about respecting personal freedom.
        
           | dowager_dan99 wrote:
           | This is a really negative and insulting comment towards
           | people who are struggling with a very real, very emotional
           | response to AI, and super-concerned about both the real and
           | potential negatives that the rabid boosters won't even
           | acknowledge. You don't have to "play the game" to make an
           | impact, it's valid to try and challenge the math and change
           | the rules too.
        
             | soulofmischief wrote:
             | > This is a really negative and insulting comment towards
             | people who are struggling with a very real, very emotional
             | response to AI
             | 
             | I disagree that my comment was negative at all. Many of
             | those same people (not all) spend a lot of time making
             | negative comments towards my work in AI, and tossing around
             | authoritarian ideas of restriction in domains they
             | understand like art and literature, while failing to also
             | properly engage with the real issues such as intelligent
             | mass surveillance and increased access to harmful
             | information. They would sooner take these new freedom
             | weapons out of the hands of the people while companies like
              | Palantir and NSO Group continue to use them at scale.
             | 
             | > super-concerned about both the real and potential
             | negatives that the rabid boosters won't even acknowledge
             | 
             | So am I, the difference is I am having a rational and not
             | an emotional response, and I have spent a lot of time
             | deeply understanding machine learning for the last decade
             | in order to be able to have a measured, informed response.
             | 
             | > You don't have to "play the game" to make an impact, it's
             | valid to try and challenge the math and change the rules
             | too
             | 
             | I firmly believe you cannot ethically outlaw math, and this
             | is part of why I have trouble empathizing with those who
             | feel otherwise. People are so quick to support
             | authoritarian power structures the moment it supposedly
             | benefits them or their world view. Meanwhile, the informed
             | are doing what they can to prevent this stuff from being
             | used to surveil and classify humanity, and to find a
             | balance that allows humans to coexist with artificial
             | intelligence.
             | 
             | We are not falling prey to reactionary politics and
             | disinformation, and we are not willing to needlessly expand
             | government overreach and legislate away critical individual
             | freedom in order to achieve our goals.
        
               | spencerflem wrote:
                | It's not outlawing math, it's outlawing what companies
                | can sell as a product.
                | 
                | That's like saying that you can't outlaw selling bombs in
                | a store because it's "chemistry".
                | 
                | Or even for usage: can we not outlaw shooting someone
                | with a gun because it is "projectile physics"?
                | 
                | I'm glad you do oppose Palantir - we're on the same side
                | and I support what you're doing! - but I also think
                | you're leaving the most effective solution on the table
                | by ignoring regulatory options.
        
               | soulofmischief wrote:
               | We can definitely regulate people's and especially
               | organizations' actions. But a lot of the emotional
               | responses to AI that I encounter are having a different
               | conversation, and many just blindly hate "AI" without
               | even understanding what it is, and want to infringe on
               | the freedoms of individuals to use this groundbreaking
               | technology. They're like the antivaxxers of the digital
               | world, and I encountered many of the same people whenever
               | I worked in the decentralized web space, using the same
               | vague arguments about electricity usage and such.
        
               | spencerflem wrote:
                | I feel like it's less antivaxx, and more the anti
                | nuclear style movement. The antivaxxers are Just Wrong.
                | 
                | But for nuclear: there are certainly good uses for
                | nuclear power but it's scary! and powers evil world
                | ending bombs! and if it goes wrong people end up secretly
                | mutated and irradiated and it's all so awful and we
                | should shut it down now !!
               | 
               | And to be honest I don't know my own feelings on nuclear
               | power or "good" AI either, but I do get it when people
               | want to Shut it All Down Right Now !! Even if there is a
               | legitimate case for being genuinely useful to real
               | people.
        
           | bgwalter wrote:
           | DDT was a very successful insecticide that was outlawed due
           | to its adverse effects on humans.
        
             | absurdo wrote:
             | It didn't have a trillion dollar marketing campaign behind
             | it.
        
             | soulofmischief wrote:
             | I shouldn't have to tell you that producing, distributing
             | and using a toxic chemical that negatively affects the
             | earth and its biosphere are much, much different than
             | allowing people to train and use models for personal use.
             | This is a massive strawman and doesn't even deserve as much
             | engagement as I've given it here.
        
           | harimau777 wrote:
           | Have they come up with anything? So far I haven't seen any
           | solutions presented that are both politically viable and
           | don't result in people being even more under the thumb of
           | late stage capitalism.
        
             | soulofmischief wrote:
             | This is one of the most complicated issues humanity has
             | ever dealt with. Don't hold your breath, it's gonna be a
             | while. Society at large doesn't even have a healthy
             | relationship with the internet and mobile phones, these
             | advancements in artificial intelligence came at both a good
             | and awful time.
        
         | Workaccount2 wrote:
          | For me it threatens to be like spell check. Back 20 years ago,
          | when I was still in school and still writing many assignments
          | by hand, my spelling was very good.
         | 
         | Nowadays it's been a long time since my brain totally checked
         | out on spelling. Everything I write in every case has spell
         | check, so why waste neurons on spelling?
         | 
         | I fear the same will happen on a much broader level with AI.
        
           | kiitos wrote:
           | What? Who is spending any brain cycles on spelling? When you
           | write a word, you just write the word, the spelling is...
           | intrinsic? automatic? certainly not something that you have
           | to, like, actively think about?
        
             | username223 wrote:
             | ... until spellcheck gets "AI," and starts turning
             | correctly-spelled words into different words that it thinks
             | are more likely. (Don't get me started on "its" vs. "it's,"
             | which autocorrect frequently randomly incorrects.)
        
             | steveklabnik wrote:
             | I both agree and disagree, I don't regularly think about
             | spelling, but there are certain words I _know_ my brain
             | always gets wrong, so when I run into one of those, things
             | come crashing to a halt for a second while I try to
              | remember if I'm still spelling them wrong or if I've
             | finally trained myself to do it correctly.
        
       | throwawaybob420 wrote:
       | It's not angst to see the people who run the companies we work
       | for "encourage" us to use Claude to write our code knowing full
       | well it's their attempt to see if they really can fire us without
       | a hit in "productivity".
       | 
       | It's not angst to see students throughout the entire spectrum end
       | up using ChatGPT to write their papers, summarize 3 paragraphs,
       | and use it to bypass any learning.
       | 
        | It's not angst to see people ask a question to an LLM and take
        | what it says as gospel.
       | 
       | It's not angst to understand the environmental impact of all this
       | stupid fucking shit.
       | 
       | It's not angst to see the danger in generative AI not only just
       | creating slop, but further blurring the lines of real and fake.
       | 
       | It's not angst to see the vast amount of non-consensual porn
       | being generated of people without their knowledge.
       | 
       | Feel like I'm going fucking crazy here, just day after day of
       | people bowing down at the altar and legit not giving a single
       | fuck about what happens after rofl
        
         | bluefirebrand wrote:
         | Hey for what it's worth, you aren't alone
         | 
         | This is a really wild and unpredictable time, and it's ok to
         | see the problems looming and feel unsettled at how easily
         | people are ignoring the potential oncoming train
         | 
         | I would suggest taking some time for yourself to distance
         | yourself from this as much as you can for your own mental
         | health
         | 
         | Ride this out as best you can until things settle down a bit.
         | You aren't alone
        
       | timr wrote:
       | > On the money side? I don't see how the math and the capex work.
       | And all the time, I think about the carbon that's poisoning the
       | planet my children have to live on.
       | 
       | The "math and capex" are inextricably intertwined with "the
       | carbon". If these tools have some value, then we can finally
       | invest in forms of energy (i.e. nuclear) that will solve the
       | underlying problem, and we'll all be better off. If the tools
       | have no net value at a market-clearing price for energy (as
       | purported), then it won't be a problem.
       | 
       | I mean, maybe the productive way to say this is that we should
       | more formally link the environmental cost of energy production to
        | the market cost of energy. But as phrased (and I _suspect_,
        | implied), it sounds like "people who use LLMs are just
        | _profligate consumers_ who don't care about the environment the
        | way that _I_ do," and that any societal advancement that
       | consumes energy (as most do) is subject to this kind of
       | generalized luddite criticism.
        
         | lyu07282 wrote:
         | > If these tools have some value, then we can finally invest in
         | forms of energy (i.e. nuclear) that will solve the underlying
         | problem
         | 
         | I'm confused what you are saying, do you suggest "the market"
         | will somehow do something to address climate change? By what
         | mechanism? And what do LLMs have to do with that?
         | 
         | The problem with LLMs is that they require exorbitant amounts
         | of energy and fresh water to operate, driving a global increase
         | in ecological destruction and carbon emissions. [
         | https://www.greenmemag.com/science-technology/googles-contro...
         | ]
         | 
         | That's not exactly a new thing, just making the problem worse.
         | What is now different with LLMs as opposed to for example
         | crypto mining?
        
           | timr wrote:
           | > I'm confused what you are saying, do you suggest "the
           | market" will somehow do something to address climate change?
           | By what mechanism? And what do LLMs have to do with that?
           | 
           | No, I'm suggesting that the market will take care of the
           | cost/benefit equation, and that the externalities are part of
           | the costs. We could always do a better job of making sure
           | that costs capture these externalities, but that's not the
           | same thing as what the author seems to be saying.
           | 
           | (Also I'm saying that we need to get on with nuclear already,
           | but that's a secondary point.)
           | 
           | > The problem with LLMs is that they require exorbitant
           | amounts of energy and fresh water to operate, driving a
           | global increase in ecological destruction and carbon
           | emissions.
           | 
           | They no more "require" this, than operating an electric car
           | "requires" the same thing. While there may be environmental
           | extremists who advocate for a wholesale elimination of cars,
           | most sane people would be _happy_ for the balance between
            | cost and benefit represented by electric cars. _Ergo,_ a
           | similar balance must exist for LLMs.
        
             | lyu07282 wrote:
             | > I'm suggesting that the market will take care of the
             | cost/benefit equation, and that the externalities are part
             | of the costs.
             | 
             | You believe that climate change is an externality that the
             | market is capable of factoring in the cost/benefit
             | equation. Then I don't understand why you disagreed with
             | the statement "the market will somehow do something to
             | address climate change". There is a more fundamental
             | disagreement here.
             | 
             | You said:
             | 
             | > If these tools [LLMs/ai] have some value, then we can
             | finally invest in forms of energy (i.e. nuclear) that will
             | solve the underlying problem
             | 
              | And again, why? By what mechanism? Let's say Microsoft
              | 10xes its profit through AI; then it will "finally invest
              | in forms of energy (i.e. nuclear) that will solve the
              | underlying problem". But why? Why would it? And why do you
              | say "we" if we're talking about the market?
        
       | schmichael wrote:
       | > I really don't think there's a coherent pro-genAI case to be
       | made in the education context.
       | 
       | I think it's simple: the reign of the essay is over. Educators
       | must find a new way to judge a student's understanding.
       | 
       | Presentations, artwork, in class writing, media, discussions and
       | debates, skits, even good old fashioned quizzes all still work
       | fine for getting students to demonstrate understanding.
       | 
       | As the son of two teachers I remember my parents spending hours
       | in the evenings grading essays. While writing is a critical
       | skill, and essays contain a good bit of information, I'm not sure
       | education wasn't overindexing on them already. They're easy to
       | assign and grade, but there's so much toil on both ends unrelated
       | to the core subject matter.
        
         | thadt wrote:
         | I posit that of the various uses of student writing, the most
         | important isn't communication or even assessment, but
         | synthesis. Writing forces you to grapple with a subject in a
         | way that clarifies your thinking. It's easy to think you
         | understand something until you have to explain or apply it.
         | 
          | Skipping that entirely, or using an LLM to do most of it for
          | you, skips something rather important.
        
           | schmichael wrote:
           | > Writing forces you
           | 
           | I agree entirely with you except for the word "forces."
           | Writing _can_ cause synthesis. It should. It should be graded
           | to encourage that...
           | 
           | ...but all of that is a whole lot of work for everyone
           | involved: student and teacher alike.
           | 
           | And that kind of synthesis is in no way unique to essays! All
           | of the other mediums I mention can make synthesis more
            | readily apparent than paragraphs of (often very low quality)
           | prose. A clever meme lampooning the "mere merchant" status of
           | the Medici family could demonstrate a level of understanding
           | that would take paragraphs of prose to convey.
        
         | ryandrake wrote:
          | I'd also say that the era of graded homework in general is
          | over, along with using "proof of toil" assignments as a
          | meaningful measurement of a student's progress/mastery.
        
       | bgwalter wrote:
       | I notice a couple of things in the pro-AI [1] posts: All start
        | writing in a lengthy style like Steve Yegge at his peak. All are
       | written by ex-programmers who are on the management/founder side
       | now. All of them cite programmer friends who claim that AI is
       | useful.
       | 
       | It is very strange that no real open source project uses "AI" in
       | any way. Perhaps these friends work on closed source and say what
       | their manager wants them to say? Or they no longer care? Or they
       | work in "AI" companies?
       | 
       | [1] He does mention return on investment doubts and waste of
       | energy, but claims that the agent nonsense works (without public
       | evidence).
        
         | bwfan123 wrote:
         | There is a large number of wannabe hands-on coders who have
         | moved on to become management - and they all either have coder-
         | envy or coder-hatred.
         | 
          | To them, gen-ai is a savior. Earlier, they felt out of the
          | game; now, they feel like they can compete. Earlier they were
          | wannabe coders. Now they are legit.
         | 
         | But, this will last only until they accept a chunk of code put
         | out by co-pilot and then spend the next 2 days wrangling with
         | it. At that point, it dawns on them what these tools can
         | actually do.
        
         | orangecat wrote:
         | I'm a programmer, not a manager. I don't have a blog. AI is
         | useful.
         | 
         |  _It is very strange that no real open source project uses "AI"
         | in any way._
         | 
         | How do you know? Given the strong opposition that lots of
         | people have I wouldn't expect its use to be actively
         | publicized. But yes, I would expect that plenty of open source
         | contributors are at the very least using Cursor-style tab
         | completion or having AIs generate boilerplate code.
         | 
         |  _Perhaps these friends work on closed source and say what
         | their manager wants them to say?_
         | 
         | "Everyone who disagrees with me is paid to lie" is a really
         | tiresome refrain.
        
         | rjsw wrote:
         | At least in my main open source project, use of AI is
         | prohibited due to potentially tainting the codebase with stuff
         | derived from other GPL projects.
        
         | zurfer wrote:
         | Using AI in real projects is not super simple but if you lean
         | into it, it can accelerate things.
         | 
        | Anecdotally, check this out:
        | https://github.com/antiwork/gumroad/graphs/contributors
        | 
        | (Devin is an AI agent.)
        
         | cesarb wrote:
         | > It is very strange that no real open source project uses "AI"
         | in any way.
         | 
         | Using genAI is particularly hard on open source projects due to
         | worries about licensing: if your project is under license X,
         | you don't want to risk including any code with a license
         | incompatible with X, or even under a license compatible with X
         | but without the correct attribution.
         | 
         | It's still not settled whether genAI can really "launder" the
         | license of the code in its training set, or whether legal
          | theories like "subconscious copying" would apply. In the latter
         | case, using genAI could be very risky.
        
       | thadt wrote:
       | On Learning:
       | 
       | My wife, a high school teacher, remarked to me the other day "you
       | know, it's sad that my new students aren't going to be able to do
       | any of the fun online exercises that I used to run."
       | 
       | She's all but entirely removed computers from her daily class
       | workflow. Almost to a student, "research" has become "type it
       | into Google and write down whatever the AI spits out at the top
       | of the page" - no matter how much she admonishes them not to do
       | it. We don't even need to address what genAI does to their
       | writing assignments. She says this is prevalent across the board,
       | both in middle and high school. If educators don't adapt rapidly,
       | this is going to hit us hard and fast.
        
       | nikolayasdf123 wrote:
       | > Go programming language is especially well-suited to LLM-driven
       | automation. It's small, has a large standard library, and a
       | culture that has strong shared idioms for doing almost anything
       | 
       | +1 to this. thank you `go fmt` for uniform code. (even culture of
       | uniform test style!). thank you culture of minimal dependencies.
       | and of course go standard library and static/runtime tooling.
       | thank you simple code that is easy to write for humans..
       | 
       | and as it turns out for AIs too.
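        | 
        | for example, the table-driven test idiom that nearly every Go
        | codebase (and so every model's training data) shares. a tiny
        | sketch, names made up:
        | 
        |     package mathutil
        |     
        |     import "testing"
        |     
        |     func Abs(x int) int {
        |         if x < 0 {
        |             return -x
        |         }
        |         return x
        |     }
        |     
        |     // one slice of cases, one loop: the shared idiom LLMs
        |     // have seen thousands of times.
        |     func TestAbs(t *testing.T) {
        |         cases := []struct {
        |             name     string
        |             in, want int
        |         }{
        |             {"positive", 2, 2},
        |             {"negative", -2, 2},
        |             {"zero", 0, 0},
        |         }
        |         for _, tc := range cases {
        |             t.Run(tc.name, func(t *testing.T) {
        |                 if got := Abs(tc.in); got != tc.want {
        |                     t.Errorf("Abs(%d) = %d, want %d", tc.in, got, tc.want)
        |                 }
        |             })
        |         }
        |     }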
        
         | icedchai wrote:
         | I have found LLMs (mainly using Claude) are, indeed, excellent
         | at spitting out Go boilerplate.
        
         | zenlikethat wrote:
         | I found that bit slightly ironic because it always seems to
         | produce slightly cringy Go code for me that might get the job
         | done but skips over some of the usual design philosophies like
         | use of interfaces, channels, and context. But for many parts,
         | yeah, I've been very satisfied with Go code gen.
        
       | greybox wrote:
       | This is probably the best opinion piece I've read so far on GenAI
        
         | flufluflufluffy wrote:
         | Yep, basically sums up all of my thoughts about ai perfectly,
         | especially the environmental impact.
        
       | greybox wrote:
       | > horrifying survey of genAI's impact on secondary and tertiary
       | education.
       | 
       | I agree with this. It's probably terrible for structured
       | education for our children.
       | 
        | The one and only caveat: Self-Driven language learning.
        | 
        | The one and only actual use (outside of generating funny memes)
        | I've had from any LLM so far is language learning. That I would
        | pay for. Not $30/pcm mind you . . . but something. I ask the
        | model to break down a target-language sentence for me, explaining
        | each and every grammar point, and it does so very well, sometimes
        | even going on to explain the cultural relevance of certain
        | phrases. This is great.
       | 
       | I've not found any other use for it yet, though. As a game
       | engine programmer (C++), the code I write nowadays is quite
       | deliberate and relatively little compared to a web developer's
       | (I used to be one; I'm not pooping on web devs). So if we're
       | talking about the time/cost of having me as a developer work on
       | the game engine, I'm not saving any time or money by first
       | asking Claude to type what I was going to type anyway. And it's
       | not advanced enough yet to hold the context of our entire
       | codebase spanning multiple components.
       | 
       | Edit: Migaku [https://migaku.com/] is a great language-learning
       | application that uses this.
       | 
       | Like OP, I'm not sure it's worth all that CO2 we're pumping into
       | our atmosphere.
        
         | Alex-Programs wrote:
         | AI progress has also made high quality language translation a
         | lot cheaper. When I started https://nuenki.app last year, the
         | options were exorbitantly priced DeepL for decent quality low
         | latency translation or Sonnet for slightly cheaper, much
         | slower, but higher quality translation.
         | 
         | Now, just a year later, DeepL is beaten by open models served
         | by https://groq.com for most languages, and Claude 4 / GPT-4.1
         | / my hybrid LLM translator (https://nuenki.app/translator)
         | produce practically perfect translations.
         | 
         | LLMs are also better at critiquing translations than producing
         | them, but pre-thinking doesn't help at all, which is just
         | fascinating. Anyway, it's a really cool topic that I'll happily
         | talk at length about! They've made so much possible. There's a
         | blog on the website, if anyone's curious.
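         | 
         | A minimal sketch of what that critique pass can look like, in
         | Go (callLLM is a hypothetical stand-in for whatever chat API
         | you use; the prompts are illustrative, not what Nuenki
         | actually runs):
         | 
         |     package sketch
         | 
         |     // Two passes: translate, then have a second call critique
         |     // the draft and correct it if needed.
         |     func translateWithCritique(callLLM func(prompt string) string,
         |         text, lang string) string {
         |         draft := callLLM("Translate into " + lang + ":\n" + text)
         |         verdict := callLLM("Critique this " + lang + " translation. " +
         |             "Reply with exactly OK if it is faithful and natural; " +
         |             "otherwise reply with only a corrected translation.\n" +
         |             "Source: " + text + "\nTranslation: " + draft)
         |         if verdict == "OK" {
         |             return draft
         |         }
         |         return verdict // the corrected translation
         |     }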
        
       | Havoc wrote:
       | > I think about the carbon that's poisoning the planet my
       | children have to live on.
       | 
       | Tbh I think we're going to need a big breakthrough to fix that
       | anyway. Like fusion etc.
       | 
       | A bit less proompting isn't going to save the day.
       | 
       | That's not to say one shouldn't be mindful. I just think it's no
       | longer enough.
        
       | jillesvangurp wrote:
       | I think the concerns about climate and CO2 emissions are valid
       | but not a show stopper. The big picture here is that we are
       | living through two amazing revolutions at the same time:
       | 
       | 1) The emergence of LLMs and AIs that have turned the Turing
       | test from science fiction into an irrelevance. AI is improving
       | at an absolutely mind-boggling rate.
       | 
       | 2) The transition from a fossil-fuel-powered world to one that
       | will be net zero in a few decades. The pace in the last five
       | years has been amazing. China is rolling out amounts of solar
       | and batteries that were unthinkable in even the most optimistic
       | predictions a few years ago. The rest of the world is struggling
       | to keep up, and that's causing some issues, with some countries
       | running backward (mainly the US).
       | 
       | It's true that a lot of AI is powered by a mix of old coal
       | plants, cheap Texan gas, and a few other things that aren't
       | sustainable (or cheap, if you consider the cleanup cost).
       | However, I live in
       | the EU and we just got cut off from cheap Russian gas, are now
       | running on imported expensive gas (e.g. from Texas) and have some
       | pet peeves about data sovereignty that are causing companies like
       | OpenAI, Meta, and Google to have to use local data centers for
       | serving their European users. Which means that stuff is being
       | powered with electricity that is locally supplied with a mix of
       | old dirty legacy infrastructure and new more or less clean
       | infrastructure. That mix is shifting rapidly towards renewables.
       | 
       | The thing is that old dirty infrastructure has been on a downward
       | trajectory for years. There are not a lot of new gas plants being
       | built (LNG is not cheap) and coal plants are going extinct in a
       | hurry because they are dirty and expensive to operate. And the
       | few gas plants that are still being built are in stand by mode
       | much of the time and losing money. Because renewables are
       | cheaper. Power is expensive here but relatively clean. The way to
       | get prices down is not to import more LNG and burn it but to do
       | the opposite.
       | 
       | What I like about things that increase demand for electricity is
       | that they drive investment in clean energy and actually
       | accelerate the transition. The big picture here is that the
       | transition to net zero is going to vastly increase demands on
       | power grids. If you add up everything needed for industry,
       | transport, domestic and industrial heating, aviation, etc. it's a
       | lot. But the payoffs are also huge. People think of this as cost.
       | That's short term thinking. The big picture here is long term.
       | And the payoff is net zero and cheap power making energy
       | intensive things both affordable and sustainable. We're not there
       | yet but we're on a path towards that.
       | 
       | For AI that means, yes, we need terawatts of power, and some of
       | the uses of AI seem frivolous and not that useful. But the big
       | picture is that this is changing a lot of things as well. I see
       | power needs as a challenge rather than a problem or reason to sit
       | on our hands. It would be nice if that power was cheap. It so
       | happens that currently the cheapest way to generate power happens
       | to be through renewables. I don't think dirty power is long term
       | smart, profitable, or necessary. And we could definitely do more
       | to speed up its demise. But at the same time, this increased
       | pressure on our grids is driving the very changes we need to make
       | that happen.
        
       | swyx wrote:
       | > Just to be clear, I note an absence of concern for cost and
       | carbon in these conversations. Which is unacceptable. But let's
       | move on.
       | 
       | hold on, it's very simple. Here's a one-liner even degrowthers
       | would love: extra humans cost a lot more in money and carbon
       | than it costs to have an LLM spin up and down to do this work
       | that would otherwise not get done.
        
       | spacephysics wrote:
       | I disagree with genAI not having an education use case.
       | 
       | I think a useful LLM for education would be one with heavy
       | guardrails, which is "forced" to provide step-by-step back and
       | forth tutoring instead of just giving out answers.
       | 
       | Right now hallucinations would be problematic, but assuming it's
       | in a domain like math (and maybe combined with something like
       | Wolfram to verify outputs), I could see this theoretical tool
       | being very helpful for learning mathematics, or even other
       | sciences.
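       | 
       | A rough sketch of the shape I have in mind, in Go (askLLM and
       | checkHint are hypothetical stand-ins; the system prompt is only
       | illustrative):
       | 
       |     package sketch
       | 
       |     // The guardrail lives in the system prompt: never hand over
       |     // the final answer, only the next step.
       |     const tutorPrompt = "You are a math tutor. Never state the " +
       |         "final answer. Ask one guiding question per turn. If the " +
       |         "student is stuck, reveal only the next single step."
       | 
       |     // tutorTurn asks the model for a hint, then verifies the math
       |     // in the hint with an external checker (e.g. a CAS such as
       |     // Wolfram) before showing it to the student.
       |     func tutorTurn(askLLM func(system, user string) string,
       |         checkHint func(hint string) bool, studentMsg string) string {
       |         hint := askLLM(tutorPrompt, studentMsg)
       |         if !checkHint(hint) {
       |             hint = askLLM(tutorPrompt, studentMsg+
       |                 "\n(Your previous hint contained a math error; try again.)")
       |         }
       |         return hint
       |     }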
       | 
       | For more open-ended subjects like English, history, etc., it may
       | be less useful.
       | 
       | Perhaps only as a demonstration: maybe an LLM is prompted to
       | pretend to be a peasant from medieval Europe, and with
       | text-to-speech we could have students as a group interact with
       | and ask questions of the LLM. In this case, maybe the LLM is
       | trained only on historical text from specific time periods, with
       | settings to be more deterministic and reduce hallucinations.
        
       | prmph wrote:
       | I finally tried Claude Code for most of last week on a toy
       | Typescript project of moderate complexity. It's supposedly the
       | pinnacle of agentic coding assistants, and I tend to agree,
       | finding it far ahead of Copilot et al. Seeing it working was like
       | a bit of magic, and it was very addictive. It successfully
       | distracted me from my main projects that I code mostly by hand.
       | 
       | That said, and it's kind of hard to express this well, not only
       | is the actual productivity still far from what the hype suggests,
       | but I regard agentic coding as being like a bad addictive drug
       | right now. The promise of magic from the agent always seems just
       | around the corner: just one more prompt to finally fix the rough
       | edges of what it has spat out, just one more helpful hint to put
       | it on the right path/approach, just one more reminder for it to
       | actually apply everything in CLAUDE.md each time...
       | 
       | Believe it or not, I spent several days with it, crafting very
       | clear and specific prompts, prodding with all kinds of hints,
       | even supplying it with legacy code that mostly works (although
       | written in C#), and at the end it had written a lot of code that
       | almost works, except a lot of simple things just wouldn't work,
       | no matter how much time I spent with it.
       | 
       | In the end, after a couple of hours of writing the code myself, I
       | had a high-quality type design and basic logic, and a clear path
       | to implementing all the basic features.
       | 
       | So, I don't know, for now even Claude seems mostly useful only as
       | a sporadic helper within small contexts (drafting specific
       | functions, code review of moderate amounts of code, relatively
       | simple refactoring, etc). I believe knowing when AI would help vs
       | slow you down is becoming key.
       | 
       | For this tech to improve, maybe a genetic/evolutionary approach
       | would be needed. Given a task, the agent should launch several
       | models to work on the problem, with each model also launching
       | several randomized approaches to working on the problem. Then the
       | agent should evaluate all the responses and pick the "best" one
       | to return.
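       | 
       | A rough sketch of that fan-out-and-select loop, in Go (generate
       | and score are hypothetical stand-ins for "ask a model with one
       | randomized approach" and "evaluate the response", e.g. by
       | running the tests):
       | 
       |     package sketch
       | 
       |     // bestOfN runs n generation attempts concurrently, scores
       |     // each result, and returns the highest-scoring one.
       |     func bestOfN(n int, generate func(seed int) string,
       |         score func(out string) int) string {
       |         type result struct {
       |             out   string
       |             score int
       |         }
       |         results := make(chan result, n)
       |         for i := 0; i < n; i++ {
       |             go func(seed int) {
       |                 // Different seed = different model or temperature.
       |                 out := generate(seed)
       |                 results <- result{out, score(out)}
       |             }(i)
       |         }
       |         best := <-results
       |         for i := 1; i < n; i++ {
       |             if r := <-results; r.score > best.score {
       |                 best = r
       |             }
       |         }
       |         return best.out
       |     }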
        
       | keybored wrote:
       | > My input stream is full of it: Fear and loathing and
       | cheerleading and prognosticating on what generative AI means and
       | whether it's Good or Bad and what we should be doing. All the
       | channels: Blogs and peer-reviewed papers and social-media posts
       | and business-news stories. So there's lots of AI angst out there,
       | but this is mine. I think the following is a bit unique because
       | it focuses on cost, working backward from there. As for the genAI
       | tech itself, I guess I'm a moderate; there is a there there, it's
       | not all slop.
       | 
       | Let's see.
       | 
       | > But, while I have a lot of sympathy for the contras and am
       | sickened by some of the promoters, at the moment I'm mostly in
       | tune with Thomas Ptacek's My AI Skeptic Friends Are All Nuts.
       | It's long and (fortunately) well-written and I (mostly) find it
       | hard to disagree with.
       | 
       | So the Moderate is a Believer. But it's offset by being concerned
       | about The Climate and The Education and The Investments.
       | 
       | You can try to write a self-aware/moment-aware intro. It's the
       | same fodder for the front page.
        
       ___________________________________________________________________
       (page generated 2025-06-09 23:02 UTC)