[HN Gopher] How University Students Use Claude
       ___________________________________________________________________
        
       How University Students Use Claude
        
       Author : pseudolus
       Score  : 164 points
       Date   : 2025-04-09 15:41 UTC (7 hours ago)
        
 (HTM) web link (www.anthropic.com)
 (TXT) w3m dump (www.anthropic.com)
        
       | stv_123 wrote:
       | Interesting article, but I think it downplays the incidence of
       | students using Claude as an alternative to building foundational
       | skills. I could easily see conversations that they outline as
       | "Collaborative" primarily being a user walking Claude through
       | multi-part problems or asking it to produce justifications for
       | answers that students add to assignments.
        
         | yieldcrv wrote:
         | > I think it downplays the incidence of students using Claude
         | as an alternative to building foundational skills
         | 
         | I think people will get more utility out of education programs
         | that allow them to be productive with AI, at the expense of
         | foundational knowledge
         | 
          | Universities have a different purpose and have been tone-deaf
          | for the last century about why their students actually attend:
          | the corporate sector decided university degrees were
          | necessary, despite 90% of the cross-disciplinary learning
          | being irrelevant.
          | 
          | It's not the university's problem, and they will outlive this
          | meme of catering to the middle class's upward mobility.
         | They existed before and will exist after.
         | 
          | The university may never be the place for a human to hone the
          | skill of being augmented with AI, but a trade school or
          | bootcamp or other structured learning environment will be, for
          | those not self-starting enough to sit through YouTube videos
          | and trawl Discord servers.
        
           | fallinditch wrote:
           | Yes, AI tools have shifted the education paradigm and
           | cognition requirements. This is a 'threat' to universities,
           | but I would also argue that it's an opportunity for
           | universities to reinvent the experience of further education.
        
             | ryandrake wrote:
              | Yeah, the solution here is to embrace the reality that
              | these tools exist and will be used regardless of what the
              | university wants, and to use that as an opportunity to
              | level _up_ the education and experience.
             | 
             | The clueless educational institutions will simply try to
             | fight it, like they tried to fight copy/pasting from Google
             | and like they probably fought calculators.
        
         | mppm wrote:
         | > Interesting article, but I think it downplays the incidence
         | of students using Claude as an alternative to building
         | foundational skills.
         | 
         | No shit. This is anecdotal evidence, but I was recently
         | teaching a university CS class as a guest lecturer (at a
         | somewhat below-average university), and almost all the students
         | were basically copy-pasting task descriptions and error
         | messages into ChatGPT in lieu of actually programming. No one
         | seemed to even read the output, let alone be able to explain
         | it. "Foundational skills" were near zero, as a result.
         | 
         | Anyway, I strongly suspect that this report is based on careful
         | whitewashing and would reveal 75% cheating if examined more
         | closely. But maybe there is a bit of sampling bias at play as
         | well -- maybe the laziest students just never bother with
         | anything but ChatGPT and Google Colab, while students using
         | Claude have a little more motivation to learn something.
        
           | colonial wrote:
           | CS/CE undergrad here who entered university right when
           | ChatGPT hit. Things are _bad_ at my large state school.
           | 
           | People who spent the past two years offloading their entry-
           | level work onto LLMs are now taking 400-level systems
           | programming courses and running face-first into a capability
           | wall. I try my best to help, but there's only so much I can
           | do when basic concepts like structs and pointer manipulation
           | get blank stares.
           | 
           | > "Oh, the foo field in that struct should be signed instead
           | of unsigned."
           | 
           | < "Struct?"
           | 
           | > "Yeah, the type definition of Bar? It's right there."
           | 
           | < "Man, I had ChatGPT write this code."
           | 
           | > "..."
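            | 
            | To make the struct confusion concrete - a minimal sketch
            | using the hypothetical names from the exchange above. If
            | Bar's foo is unsigned, arithmetic that should go negative
            | silently wraps instead:
            | 
            |     #include <stdio.h>
            | 
            |     struct Bar {
            |         unsigned int foo;  /* should be signed: int foo; */
            |     };
            | 
            |     int main(void) {
            |         struct Bar b = { .foo = 2 };
            |         b.foo -= 5;             /* intended -3; wraps to    */
            |         printf("%u\n", b.foo);  /* 4294967293 (32-bit uint) */
            |         return 0;
            |     }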
        
             | jjmarr wrote:
             | Put the systems level programming in year 1, honestly.
             | Either you know the material going in, or you fail out.
        
         | tmpz22 wrote:
         | Direct quote I heard from an undergrad taking statistics:
         | 
         | "Snapchat AI couldn't get it right so I skipped the assignment"
        
           | dvngnt_ wrote:
            | Back in my day we used Snap to send spicy photos; now
            | they're using AI to cheat on homework. I'm not sure which is
            | worse.
        
           | moffkalast wrote:
           | Well if statistics can't understand itself, then what hope do
           | the rest of us have?
        
       | ilrwbwrkhv wrote:
       | AI bubble seems close to collapsing. God knows how many billions
       | have been invested and we still don't have an actual use case for
       | AI which is good for humanity.
        
         | boredemployee wrote:
         | I think I understand what you're trying to say.
         | 
         | We certainly improve productivity, but that is not necessarily
         | good for humanity. Could be even worse.
         | 
          | e.g.: my company already expects less time for some tasks,
          | given that they _know_ I'll probably use some AI to do them.
          | Which means I can humanly handle more context in a given week
          | if the metric is "labour", but you end up with your brain
          | completely melted.
        
           | bluefirebrand wrote:
           | > We certainly improve productivity
           | 
           | I think this is really still up for debate
           | 
           | We produce more output certainly but if it's overall lower
           | quality than previous output is that really "improved
           | productivity"?
           | 
           | There has to be a tipping point somewhere, where faster
           | output of low quality work is actually decreasing
           | productivity due to the efforts now required to keep the
           | tower of garbage from toppling
        
             | fourseventy wrote:
             | It's not up for debate. Ask any programmer if LLMs improve
             | productivity and the answer is 100% yes.
        
               | AlexandrB wrote:
               | Meanwhile in this article/thread you have a bunch of
               | programmers complaining that LLMs don't improve overall
               | productivity:
               | https://news.ycombinator.com/item?id=43633288
        
               | bluefirebrand wrote:
                | I am a programmer and my opinion is that all of the AI
                | tooling my company is making me use gets in the way about
                | as often as it helps. It's probably a net negative
                | overall, because any code it produces takes me longer to
                | review and verify for correctness than it would take to
                | just write it myself.
               | 
               | Does my opinion count?
        
           | DickingAround wrote:
           | I think the core of the 'improved productivity' question will
           | be ultimately impossible to answer. We would want to know if
           | productivity was improved over the lifetime of a society;
           | perhaps hundreds of years. We will have no clear A/B test
           | from which to draw causal relationships.
        
             | AlexandrB wrote:
             | This is exactly right. It also depends on how all the AGI
             | promises shake out. If AGI really does emerge soon, it
             | might not matter anymore whether students have any
             | foundational knowledge. On the other hand, if you still
             | need people to know stuff in the future, we might be
             | creating a generation of citizens incapable of doing the
             | job. That could be catastrophic in the long term.
        
         | amiantos wrote:
          | Your comment appears to be composed almost entirely of vague
          | and ambiguous statements.
         | 
         | "AI bubble seems close to collapsing" in response to an article
         | about AI being used as a study aid. Does not seem relevant to
         | the actual content of the post at all, and you do not provide
         | any proof or explanation for this statement.
         | 
         | "God knows how many billions have been invested", I am pretty
         | sure it's actually not that difficult to figure out how much
         | investor money has been poured into AI, and this still seems
         | totally irrelevant to a blog post about AI being used as a
         | study aid. Humans 'pour' billions of dollars into all sorts of
         | things, some of which don't work out. What's the suggestion
         | here, that all the money was wasted? Do you have evidence of
         | that?
         | 
         | "We still don't have an actual use case for AI which is good
         | for humanity"... What? We have a lot of use cases for AI, some
         | of which are good for humanity. Like, perhaps, as a study aid.
         | 
         | Are you just typing random sentences into the HN comment box
         | every time you are triggered by the mention of AI? Your post is
         | nonsense.
        
         | papichulo2023 wrote:
          | It is helping me do projects that would otherwise take me
          | hours in just a few minutes, soooo, shrug.
        
           | user432678 wrote:
            | What kind of projects are those? I am genuinely curious. I
            | was excited by AI, Claude specifically, since I am an avid
            | procrastinator and would love to finish the tens of projects
            | I have in mind. Most of those projects are games with
            | specific constraints. I got disenchanted pretty quickly when
            | I started actually using AI to help with different parts of
            | the game programming. The majority of the problems I had
            | were related to a poor understanding of the generated code.
            | I mean yes, I read the code and fixed minor issues, but it
            | always feels like I haven't really internalised those parts
            | of the game, which slows me down quite significantly in the
            | long run when I need to plan major changes. Probably a skill
            | issue, but for now the only thing AI is helpful with is
            | populating Jira descriptions for my "big picture
            | refactoring" work. That's basically it.
        
             | noman-land wrote:
              | I was able to use llama.cpp and whisper.cpp to help me
              | build a transcription site for my favorite podcast[0]. I'm
              | a total Python noob and hadn't really used SQLite before,
              | or really used AI before, but using these tools,
              | _completely offline_, llama.cpp helped me write a bunch of
              | Python and SQL to get the job done. It was incredibly fun
              | and rewarding, and most importantly, it got rid of the
              | dread of not knowing.
             | 
             | 0 - https://transcript.fish
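              | 
              | The glue code really is small. I used Python, but the
              | shape is the same anywhere - here's a hypothetical sketch
              | of the storage step against SQLite's C API, assuming a
              | simple segments(episode, text) table:
              | 
              |     #include <stddef.h>
              |     #include <sqlite3.h>
              | 
              |     /* Insert one transcript segment (e.g. a line of
              |        whisper.cpp output) into the database. */
              |     int store_segment(sqlite3 *db, int episode,
              |                       const char *text) {
              |         sqlite3_stmt *stmt;
              |         const char *sql = "INSERT INTO segments "
              |             "(episode, text) VALUES (?, ?);";
              |         if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL)
              |                 != SQLITE_OK)
              |             return -1;
              |         sqlite3_bind_int(stmt, 1, episode);
              |         sqlite3_bind_text(stmt, 2, text, -1,
              |                           SQLITE_TRANSIENT);
              |         int rc = (sqlite3_step(stmt) == SQLITE_DONE)
              |                      ? 0 : -1;
              |         sqlite3_finalize(stmt);
              |         return rc;
              |     }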
        
       | SamBam wrote:
       | I feel like Anthropic has an incentive to minimize how much
       | students use LLMs to write their papers for them.
       | 
       | In the article, I guess this would be buried in
       | 
       | > Students also frequently used Claude to provide technical
       | explanations or solutions for academic assignments
       | (33.5%)--working with AI to debug and fix errors in coding
       | assignments, implement programming algorithms and data
       | structures, and explain or solve mathematical problems.
       | 
       | "Write my essay" would be considered a "solution for academic
       | assignment," but by only referring to it obliquely in that
       | paragraph they don't really tell us the prevalence of it.
       | 
       | (I also wonder if students are smart, and may keep outright usage
       | of LLMs to complete assignments on a separate, non-university
       | account, not trusting that Anthropic will keep their
       | conversations private from the university if asked.)
        
         | radioactivist wrote:
         | Most of their categories have straightforward interpretations
         | in terms of students using the tool to cheat. They don't seem
         | to want to/care to analyze that further and determine which are
         | really cheating and which are more productive uses.
         | 
          | I think that's a bit telling about their motivations (esp.
          | given their recent large institutional deals with universities).
        
           | SamBam wrote:
           | Indeed. I called out the second-top category, but you could
           | look at the top category as well:
           | 
           | > We found that students primarily use Claude to create and
           | improve educational content across disciplines (39.3% of
           | conversations). This often entailed designing practice
           | questions, editing essays, or summarizing academic material.
           | 
           | Sure, throwing a paragraph of an essay at Claude and asking
           | it to turn it into a 3-page essay could have been categorized
           | as "editing" the essay.
           | 
           | And it seems pretty naked the way they lump "editing an
           | essay" in with "designing practice questions," which are
           | clearly very different uses, even in the most generous
           | interpretation.
           | 
           | I'm not saying that the vast majority of students _do_ use AI
            | to cheat, but I do want to say that, if they _did_, you
           | could probably write this exact same article and tell no
           | lies, and simply sweep all the cheating under titles like
           | "create and improve educational content."
        
         | vunderba wrote:
          | Exactly. There's a big difference between a student having a
          | back-and-forth dialogue with Claude around _"the extent to
          | which feudalism was one of the causes of the French
          | Revolution"_, versus another student using their smartphone
          | to take a snapshot of the actual homework assignment, pasting
          | it into Claude and calling it a day.
        
           | PeterStuer wrote:
           | From what I could observe, the latter is endemic amongst high
           | school students. And don't kid yourself. For many it is just
           | a step up from copy/pasting the first Google result.
           | 
           | They never could be arsed to learn how to input their
           | assignments into Wolfram Alpha. It was always the ux/ui
           | effort that held them back.
        
       | j2kun wrote:
       | They use an LLM to summarize the chats, which IMO makes the
       | results as fundamentally unreliable as LLMs are. Maybe for an
       | aggregate statistical analysis (for the purpose of...vibe-based
       | product direction?) this is good enough, but if you were to use
       | this to try to inform impactful policies, caveat emptor.
        
         | j2kun wrote:
         | For example, it's fashionable in math education these days to
         | ask students to generate problems as a different mode of
         | probing understanding of a topic. And from the article: "We
         | found that students primarily use Claude to create and improve
         | educational content across disciplines (39.3% of
         | conversations). This often entailed designing practice
         | questions, ..." That last part smells fishy, and even if you
         | saw a prompt like "design a practice question..." you wouldn't
         | be able to know if they were cheating, given the context
         | mentioned above.
        
       | dtnewman wrote:
       | > A common question is: "how much are students using AI to
       | cheat?" That's hard to answer, especially as we don't know the
       | specific educational context where each of Claude's responses is
       | being used.
       | 
       | I built a popular product that helps teachers with this problem.
       | 
       | Yes, it's "hard to answer", but let's be honest... it's a very
       | very widespread problem. I've talked to hundreds of teachers
       | about this and it's a ubiquitous issue. For many students, it's
       | literally "let me paste the assignment into ChatGPT and see what
       | it spits out, change a few words and submit that".
       | 
       | I think the issue is that it's so tempting to lean on AI. I
       | remember long nights struggling to implement complex data
       | structures in CS classes. I'd work on something for an hour
       | before I'd have an epiphany and figure out what was wrong. But
       | that struggling was ultimately necessary to really learn the
       | concepts. With AI, I can simply copy/paste my code and say "hey,
       | what's wrong with this code?" and it'll often spot it (nevermind
       | the fact that I can just ask ChatGPT "create a b-tree in C" and
       | it'll do it). That's amazing in a sense, but also hurts the
       | learning process.
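        | 
        | (To be fair, the b-tree really is a one-prompt job now. A
        | minimal CLRS-style sketch of just the search half, as an
        | illustration of the kind of code we used to sweat over - the
        | names and the toy minimum degree are mine:)
        | 
        |     #include <stddef.h>
        | 
        |     #define T 2  /* minimum degree t = 2 (CLRS) */
        | 
        |     typedef struct BTreeNode {
        |         int keys[2 * T - 1];
        |         struct BTreeNode *children[2 * T];
        |         int nkeys;  /* number of keys currently in use */
        |         int leaf;   /* nonzero if node has no children */
        |     } BTreeNode;
        | 
        |     /* Return the node containing key k, or NULL if absent. */
        |     BTreeNode *btree_search(BTreeNode *node, int k) {
        |         int i = 0;
        |         while (i < node->nkeys && k > node->keys[i])
        |             i++;
        |         if (i < node->nkeys && node->keys[i] == k)
        |             return node;
        |         return node->leaf ? NULL
        |                           : btree_search(node->children[i], k);
        |     }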
        
         | stv_123 wrote:
          | Yeah, the concept of "productive struggle" is important to the
          | education process, and having a way to short-circuit it seems
          | likely to lead to worse learning outcomes.
        
           | umpalumpaaa wrote:
           | I am not sure all humans work the same way though. Some get
           | very very nervous when they begin to struggle. So nervous
           | that they just stop functioning.
           | 
            | I felt that during my time in university. I absolutely loved
            | reading and working through dense math textbooks, but the
            | moment there was a time constraint, the struggle turned into
            | chaos.
        
             | AlexandrB wrote:
             | > Some get very very nervous when they begin to struggle.
             | So nervous that they just stop functioning.
             | 
             | I sympathize, but it's impossible to remove all struggle
             | from life. It's better in the long run to work through this
             | than try to avoid it.
        
         | yapyap wrote:
         | > I think the issue is that it's so tempting to lean on AI. I
         | remember long nights struggling to implement complex data
         | structures in CS classes. I'd work on something for an hour
         | before I'd have an epiphany and figure out what was wrong. But
         | that struggling was ultimately necessary to really learn the
         | concepts. With AI, I can simply copy/paste my code and say
         | "hey, what's wrong with this code?" and it'll often spot it
         | (nevermind the fact that I can just ask ChatGPT "create a
         | b-tree in C" and it'll do it). That's amazing in a sense, but
         | also hurts the learning process.
         | 
          | In the end, the willingness to struggle will set apart the
          | truly great software engineer from the AI-crutched one. Of
          | course, most of the time this won't be rewarded: when a
          | company looks at two people and sees "passable" code from
          | both, but one is way more "productive" (the AI-crutched
          | engineer), they'll initially appreciate that one more.
          | 
          | But in the long run the AI-crutched won't be able to explain
          | the choices made when creating the software. We will see a
          | retreat from this type of coding when the first few companies'
          | security falls apart like a house of cards due to AI reliance.
         | 
         | It's basically the "instant gratification vs delayed
         | gratification" argument but wrapped in the software dev box.
        
           | JohnMakin wrote:
           | I don't wholly disagree with this post, but I'd like to add a
           | caveat, observing my own workflow with these tools.
           | 
           | I guess I'd qualify to you as someone "AI crutched" but I
           | mostly use it for research and bouncing ideas (or code
           | complete, which I've mentioned before - this is a great use
           | of the tool and I wouldn't consider it a crutch, personally).
           | 
           | For instance, "parse this massive log output, and highlight
           | anything interesting you see or any areas that may be a
           | problem, and give me your theories."
           | 
            | Lots of times it's wrong. Sometimes it's right. Sometimes,
            | its response gives me an idea that leads in another
            | direction.
           | It's essentially how I was using google + stack overflow ten
           | years ago - see your list of answers, use your intuition,
           | knowledge, and expertise to find the one most applicable to
           | you, continue.
           | 
           | This "crutch" is essentially the same one I've always used,
           | just in different form. I find it pretty good at doing code
           | review for myself before I submit something more formal, to
           | catch any embarrassing or glaringly obvious bugs or incorrect
           | test cases. I would be wary of the dev that refused to use
           | tools out of some principled stand like this, just as I'd be
           | wary of a dev that overly relied on them. There is a balance.
           | 
           | Now, if all you know are these tools and the workflow you
           | described, yea, that's probably detrimental to growth.
        
         | vunderba wrote:
         | I've been calling this out since the rise of ChatGPT:
         | 
         | "The real danger lies in their seductive nature - over how
         | tempting it becomes to immediately reach for the LLM to provide
         | an answer, rather than taking a few moments to quietly ponder
         | the problem on your own. By reaching for it to solve any
         | problem at nearly an instinctual level you are completely
         | failing to cultivate an intrinsically valuable skill - that of
         | critical reasoning."
        
           | nonethewiser wrote:
           | Somewhat agree.
           | 
            | I agree in principle - the process of problem solving is the
           | important part.
           | 
           | However I think LLMs make you do more of this because of what
           | you can offload to the LLM. You can offload the simpler
           | things. But for the complex questions that cut across
           | multiple domains and have a lot of ambiguity? You're still
           | going to have to sit down and think about it. Maybe once
           | you've broken it into sufficiently smaller problems you can
           | use the LLM.
           | 
            | If we're worried about abstract problem-solving skills,
            | those don't really go away with better tools. They go away
            | when we aren't the ones using the tools.
        
             | Peritract wrote:
             | You can offload the simpler things, but struggling with the
             | simpler things is how you build the skills to handle the
             | more complex ones that you can't hand off.
             | 
             | If the simpler thing in question is a task you've already
             | mastered, then you're not losing much by asking an LLM to
             | help you with it. If it's not trivial to you though, then
             | you're missing an opportunity to learn.
        
               | jplusequalt wrote:
               | Couldn't have said it better myself.
               | 
               | The biology of the human brain will not change as a
               | result of these LLMs. We are imperfect and will tend to
                | take the easiest route in most cases. Having an "all-
                | powerful" tool onto which we can offload the important
                | work of figuring out tough problems seems like it will
                | lead to a society less capable of solving complex
                | problems.
        
               | nonethewiser wrote:
                | If you haven't mastered it yet, then it's not a simple
                | thing.
                | 
                | Grandma will not be able to implement a simple add
                | function in Python by asking ChatGPT and copy-pasting.
        
         | bko wrote:
          | When modern search became more available, a lot of people said
          | there's no point in rote memorization, as you can just do a
          | Google search. That's more or less accepted today.
         | 
         | Whenever we have a new technology there's a response "why do I
         | need to learn X if I can always do Y", and more or less, it has
         | proven true, although not immediately.
         | 
          | For instance, I'm not too concerned about my child's ability
          | to write very legibly (most writing is done on computers),
          | spell very well (spell check keeps us professional), or read a
          | map to get around (GPS), etc.
          | 
          | Not that these aren't noble things or worth doing, but they
          | won't impact your life too much if you're not interested in
          | penmanship, spelling, or cartography.
         | 
         | I believe LLMs are different (I am still stuck in the moral
         | panic phase), but I think my children will have a different
         | perspective (similar to how I feel about memorizing poetry and
          | languages without garbage collection). So how do I answer my
          | child when he asks "Why should I learn to do X if I can just
          | ask an LLM and it will do it better than me?"
        
           | nyeah wrote:
           | Spell check isn't really adequate. You get a page full of
           | correctly spelled words, but they're the wrong words.
        
             | walthamstow wrote:
             | Try being British, often they're not correctly spelt words
             | at all.
        
           | delusional wrote:
           | "More or less" is doing a lot of work there. School, at least
           | where I am, still spends the first year getting children to
          | memorize the order of the numbers from 1-20 and whether
          | there's an even or odd number of things in a picture.
           | 
           | Do you google if 5 is less than 6 or do you just memorize
           | that?
           | 
           | If you believe that creativity is not based on a foundation
           | of memorization and experience (which is just memorization)
           | you need to reflect on the connection between those.
        
           | noitpmeder wrote:
           | This is an insane take.
           | 
           | The issue is that, when presented with a situation that
           | requires writing legibly, spelling well, or reading a map,
           | WITHOUT their AI assistants, they will fall apart.
           | 
           | The AI becomes their brain, such that they cannot function
           | without it.
           | 
           | I'd never want to work with someone who is this reliant on
           | technology.
        
             | Vvector wrote:
             | Do you have the skills and knowledge to survive like a
             | pioneer from 200 years ago?
             | 
             | Technology is rapidly changing humanity. Maybe for the
             | worse.
        
               | tux1968 wrote:
               | Indeed. More people need to grow their own vegetables. AI
               | may undermine our ability for high level abstract
               | thought, but industrial agriculture already represents an
               | existential threat, should it be interrupted for any
               | reason.
        
               | kibwen wrote:
               | Knowledge itself is the least concern here. Human society
               | is extremely good at transmitting information. More
               | difficult to transmit are things like critical thinking
                | and problem-solving ability. Developing meta-cognitive
                | processes like the latter is the real utility of
                | education.
        
             | bko wrote:
              | Maybe 40 years ago there were programmers who would not
              | work with anyone who used IDEs or automated memory
              | management. When presented with a programming task that
              | requires those skills and they're WITHOUT their IDE or
              | whatever, they will fall apart.
              | 
              | Look, I agree with you. I'm just trying to articulate to
              | someone why they should learn X if they believe an LLM
              | could help them, and "an LLM won't always be around" isn't
              | a good argument because, let's be honest, it likely will
              | be. This is the same thing as "you won't walk around all
              | day with a calculator in your pocket so you need to learn
              | math".
        
               | Hasu wrote:
               | > This is the same thing as "you won't walk around all
               | day with a calculator in your pocket so you need to learn
               | math"
               | 
               | People who can't do simple addition and multiplication
               | without a calculator (12*30 or 23 + 49) are absolutely at
               | a disadvantage in many circumstances in real life and I
               | don't see how you could think this isn't true. You can't
               | work as a cashier without this skill. You can't play
               | board games. You can't calculate tips or figure out how
               | much you're about to spend at the grocery store. You
               | could pull out your phone and use a calculator in all
               | these situations, but people don't.
        
               | dwaltrip wrote:
               | You are also likely to be more vulnerable to financial
               | mishaps and scams.
        
               | gwervc wrote:
                | A lot of developers of my generation (30+) learned to
                | program in a plain code editor and compile their
                | projects on the command line. Remove the IDE and we can
                | still code.
                | 
                | On the other hand, my second-year master's students,
                | most of whom learned scripting only the year before,
                | can't even split a project into multiple files after
                | having it explained multiple times. Some have more
                | knowledge and ability than others, but a significant
                | fraction is just copy-pasting LLM output to solve
                | whatever is asked of them instead of trying to do it
                | themselves, or asking questions.
        
               | rurp wrote:
               | I think the risk isn't just that LLMs won't exist, but
               | that they will fail at certain tasks that need to get
               | done. Someone who is highly dependent on prompt
               | engineering and doesn't understand any of the underlying
               | concepts is going to have a bad time with problems they
               | can't prompt their way out of.
               | 
               | This is something I see with other tools. Some people get
               | highly dependent on things like advanced IDE features and
               | don't care to learn how they actually work. That works
               | fine most of the time but if they hit a subtle edge case
               | they are dead in the water until someone else bails them
               | out. In a complicated domain there are always edge cases
               | out there waiting to throw a wrench in things.
        
             | mbesto wrote:
             | Do you work with people who can multiply 12.3% * 144,005.23
             | rapidly without a calculator?
             | 
             | > The issue is that, when presented with a situation that
             | requires writing legibly, spelling well, or reading a map,
             | WITHOUT their AI assistants, they will fall apart.
             | 
              | The parent poster is positing that in 90% of cases they
              | WILL have their AI assistant, because it's in their
              | pocket, just like a calculator. It's not insane to think
              | that, and it's a fair point to ponder.
        
           | HDThoreaun wrote:
           | It's all about critical thinking. The answer to your kid is
           | that LLMs are a tool and until they run the entire economy
           | there will still need to be people with critical thinking
           | skills making decisions. Not every task at school helps hone
           | critical thinking but many of them do.
        
           | dingnuts wrote:
           | > That's more or less accepted today.
           | 
           | Bullshit! You cannot do second order reasoning with a set of
           | facts or concepts that you have to look up first.
           | 
           | Google Search made intuition and deep understanding and
           | encyclopedic knowledge MORE important, not less.
           | 
           | People will think you are a wizard if you read documentation
           | and bother to remember it, because they're still busy asking
           | Google or ChatGPT while you're happily coding without pausing
        
             | vonneumannstan wrote:
             | I am 100% certain people said the same thing about
             | arithmetic and calculators and now mental arithmetic skill
             | is nothing more than a curiosity.
        
               | dwaltrip wrote:
               | I encourage you to reconsider.
               | 
               | Mental math is essential for having strong numerical
               | fluency, for estimation, and for reasoning about many
               | systems. Those skills are incredibly useful for thinking
               | critically about the world.
        
               | joe5150 wrote:
               | Being able to do basic math in your head is valuable just
               | in terms of basic practicality (quickly calculating a tip
               | or splitting a bill, doubling a recipe, reasoning about a
               | budget...), but this is a poor analogy anyway because 3x2
               | is still 3x2 regardless of how you get there whereas
               | creative work produced by software is worthless.
        
             | joe5150 wrote:
             | > Google Search made intuition and deep understanding and
             | encyclopedic knowledge MORE important, not less.
             | 
             | Not to mention discernment and info literacy when you do
             | need to go to the web to search for things. AI content slop
             | has put everybody who built these skills on the back foot
             | again, of course.
        
           | kibwen wrote:
            | The irreducible answer to "why should I" is that not doing
            | so makes you ever more reliant on a teetering tower of
           | fragile and interdependent supply chains furnished by for-
           | profit companies who are all too eager to rake you over the
           | coals to fulfill basic cognitive functions.
           | 
           | Like, Socrates may have been against writing because he
           | thought it made your memory weak, but at least I, an
           | individual, am perfectly capable of manufacturing my own
           | writing implements with a modest amount of manual labor and
           | abundantly-available resources (carving into wood, burning
           | wood into charcoal to write on stone, etc.). But I ain't
           | perfectly capable of doing the same to manufacture an
           | integrated circuit, let alone a digital calculator, let alone
           | a GPU, let alone an LLM. Anyone who delegates their thought
           | to a corporation is permanently hitching their fundamental
           | ability to think to this wagon.
        
             | notyourwork wrote:
             | Although I agree, convincing children to learn using that
             | rationalization won't work.
        
               | bigstrat2003 wrote:
               | Yes it does. Plenty of children accept "you won't always
               | have (tool)" as a reason for learning.
        
               | jplusequalt wrote:
                | All adults were once children, and there are plenty of
                | adults who cannot read beyond a middle school reading
                | level or balance a simple equation. This was a problem
                | before we ever gave them GPTs. It stands to reason it
                | will only worsen in a future dominated by them.
        
               | freeone3000 wrote:
               | "You won't always have a calculator" became moderately
               | false to laughably false as I went from middle to high
               | school. Every task I will ever do for money will be done
               | on a computer.
               | 
               | I'm still garbage at arithmetic, especially mental math,
               | and it really hasn't inhibited my career in any way.
        
             | hackyhacky wrote:
             | > The irreducible answer to "why should I" is that it makes
             | you ever-more-increasingly reliant on a teetering tower of
             | fragile and interdependent supply chains furnished by for-
             | profit companies who are all too eager to rake you over the
             | coals to fulfill basic cognitive functions.
             | 
             | Yes, but that horse has long ago left the barn.
             | 
             | I don't know how to grow crops, build a house, tend
             | livestock, make clothes, weld metal, build a car, build a
             | toaster, design a transistor, make an ASIC, or write an OS.
             | I _do_ know how to write a web site. But if I cede that
             | skill to an automated process, then _that_ is the feather
              | that will break the camel's back?
             | 
             | The history of civilization is the history of
             | specialization. No one can re-build all the tools they rely
             | on from scratch. We either let other people specialize, or
             | we let machines specialize. LLMs are one more step in the
             | latter.
             | 
             | The Luddites were right: the machinery in cotton mills was
             | a direct threat to their livelihood, just as LLMs are now
             | to us. But society marches on, textile work has been
             | largely outsourced to machines, and the descendants of the
             | Luddites are doctors and lawyers (and coders). 50 years
              | from now, the career of a "coder" will evoke the same
             | historical quaintness as does "switchboard operator" or
             | "wainwright."
        
               | ryandrake wrote:
               | This reply brings to mind the well-known Heinlein quote:
               | A human being should be able to change a diaper, plan an
               | invasion, butcher a hog, conn a ship, design a building,
               | write a sonnet, balance accounts, build a wall, set a
               | bone, comfort the dying, take orders, give orders,
               | cooperate, act alone, solve equations, analyze a new
               | problem, pitch manure, program a computer, cook a tasty
               | meal, fight efficiently, die gallantly. Specialization is
               | for insects.
        
               | marcosdumay wrote:
               | The sheer amount of activities that he left out because
               | he couldn't even remember they existed would turn this
               | paragraph into a book.
        
               | DrillShopper wrote:
               | This is a fantastic and underrated quote, despite all of
               | the problems I have with Heinlein's fascism-glorifying
               | work.
        
               | crooked-v wrote:
                | That's a quote that sounds great until, say, the self-
                | built building by somebody who's neither engineer nor
                | architect at best turns out to have some intractable
                | design flaw, and at worst collapses and kills people.
               | 
               | It's also a quote from a character who's literally
               | immortal and so has all the time in the world to learn
               | things, which really undermines the premise.
        
               | Mawr wrote:
               | What an awful quote. Literally all progress we've made is
               | due to ever increasing specialization.
        
               | djhn wrote:
               | It's a quote from a character in Heinlein's fiction. A
               | human character with a lifespan of over a thousand years.
               | 
               | I too liked that quote and found it inspiring. Until I
               | read the book, that is.
        
               | theLiminator wrote:
               | I think removing pointless cognitive load makes sense,
               | but the point of an education is to learn how to
               | think/reason. Maybe if we get AGI there's no point
               | learning that either, but it is definitely not great if
               | we get a whole generation who skip learning how to
               | problem solve/think due to using LLMs.
               | 
               | IMO it's quite different than using a calculator or any
               | other tool. It can currently completely replace the human
               | in the loop, whereas with other tools they are generally
               | just a step in the process.
        
               | hackyhacky wrote:
               | > IMO it's quite different than using a calculator or any
               | other tool. It can currently completely replace the human
               | in the loop, whereas with other tools they are generally
               | just a step in the process.
               | 
               | The (as yet unproven) argument for the use of AIs is that
               | using AI to solve simpler problems allows us humans to
               | focus on the big picture, in the same way that letting a
               | calculator solve arithmetic gives us flexibility to
               | understand the math behind the arithmetic.
               | 
               | No one knows if that's true. We're running a grand
               | experiment: the next generation will either surpass us in
               | grand fashion using tools that we couldn't imagine, or
               | will collapse into a puddle of ignorant consumerism, a la
               | Wall-E.
        
               | jplusequalt wrote:
               | >the next generation will either surpass us in grand
               | fashion using tools that we couldn't imagine, or will
               | collapse into a puddle of ignorant consumerism, a la
               | Wall-E
               | 
               | Seeing how the world is based around consumerism, this
               | future seems more likely.
               | 
               | HOWEVER, we can still course correct. We need to
               | organize, and get the hell off social media and the
               | internet.
        
               | hackyhacky wrote:
               | > HOWEVER, we can still course correct. We need to
               | organize, and get the hell off social media and the
               | internet.
               | 
               | Given what I know of human nature, this seems improbable.
        
               | jplusequalt wrote:
               | I think it's possible. I think the greatest trick our
               | current societal structure ever managed to pull, is the
               | proliferation of the belief that any alternatives are
               | impossible. "Capitalist realism"
               | 
               | People who organize tend to be the people who are most
               | optimistic about change. This is for a reason.
        
               | harikb wrote:
                | It may be possible for you (I am assuming you are > 20,
                | a mature adult). But the context is around teens in the
                | prime of their learning. It is too hard to keep
                | ChatGPT/Claude away from them. Social media is too
                | addictive. Those TikTok/Reels/Shorts are addictive and
                | never-ending. We are doomed imho.
                | 
                | If education (schools) were to adopt a teaching AI (one
                | that will give them the solution, but at least asks a
                | bunch of questions first), maybe there is some hope.
        
               | jplusequalt wrote:
               | >We are doomed imho.
               | 
               | I encourage you to take action to prove to yourself that
               | real change is possible.
               | 
               | What you can do in your own life to enact change is hard
               | to say, given I know nothing about your situation. But
               | say you are a parent, you have control over how often
               | your children use their phones, whether they are on
               | social media, whether they are using ChatGPT to get
               | around doing their homework. How we raise the next
               | generation of children will play an important role in how
               | prepared they are to deal with the consequences of the
                | decisions we're currently making.
               | 
               | As a worker you can try to organize to form a union. At
               | the very least you can join an organization like the
               | Democratic Socialists of America. Your ability to
               | organize is your greatest strength.
        
               | palmotea wrote:
               | > The (as yet unproven) argument for the use of AIs is
               | that using AI to solve simpler problems allows us humans
               | to focus on the big picture, in the same way that letting
               | a calculator solve arithmetic gives us flexibility to
               | understand the math behind the arithmetic.
               | 
               | And I can tell you _from experience_ that  "letting a
               | calculator solve arithmetic" (or more accurately, being
               | dependent on a calculator to solve arithmetic) means you
               | cripple your ability to learn and understand more
               | advanced stuff. _At best_ your decision turned you into
               | the equivalent of a computer trying to run a 1GB binary
               | with 8MB of RAM and _a lot_ of paging.
               | 
               | > No one knows if that's true. We're running a grand
               | experiment: the next generation will either surpass us in
               | grand fashion using tools that we couldn't imagine, or
               | will collapse into a puddle of ignorant consumerism, a la
               | Wall-E.
               | 
               | It's the latter. Though I suspect the masses will be
                | shoved into the garbage disposal rather than be allowed
                | to
               | wallow in ignorant consumerism. Only the elite that owns
               | the means of production will be allowed to indulge.
        
               | whilenot-dev wrote:
                | I think the latest GenAI/LLM bubble shows that tech (this
                | _hype_ kind of tech) doesn't want us to learn, to think,
                | or to reason. It doesn't want to be seen as a mere tool
                | anymore; it wants to drive, under the appearance that it
                | can reason on its own. We're at a point where tech just
                | wants us to adapt to it.
        
               | tristor wrote:
               | > I don't know how to grow crops, build a house, tend
               | livestock, make clothes, weld metal, build a car, build a
               | toaster, design a transistor, make an ASIC, or write an
               | OS.
               | 
               | Why not? I mean that, quite literally.
               | 
               | I don't know how to make an ASIC, and if I tried to write
               | an OS I'd probably fail miserably many times along the
               | way but might be able to muddle through to something very
               | basic. The rest of that list is certainly within my
               | wheelhouse even though I've never done any of those
               | things professionally.
               | 
                | The peer commenter shared the Heinlein quote, but
                | there's really something to be said for a /society/
                | peopled by well-rounded individuals who are able to
                | competently turn themselves to many types of tasks.
               | Specialization can also be valuable, but specialization
               | in your career should not prevent you from gaining a
               | breadth of skills outside of the workplace.
               | 
               | I don't know how to do any of the things in your list
               | (including building a web site) as an /expert/, but it
               | should not be out of the realm of possibility or even
               | expectation that people should learn these things at the
               | level of a competent amateur. I have grown a garden, I
               | have worked on a farm for a brief time, I've helped build
               | houses (Habitat for Humanity), I've taken a hobbyist
               | welding class and made some garish metal sculptures, I've
               | built a race car and raced it, and I've never built a
               | toaster but I have repaired one (they're actually very
               | electrically and mechanically simple devices). Besides
               | the disposable income to build a race car, nothing on
               | that list stands out to me as unachievable by anyone who
               | chooses to do so.
        
               | hackyhacky wrote:
                | > The peer commenter shared the Heinlein quote, but
                | there's really something to be said for a /society/
                | peopled by well-rounded individuals who are able to
                | competently turn themselves to many types of tasks
                | 
                | Being a well-rounded individual is great, but that's an
                | orthogonal issue to the question of outsourcing our
               | skills to machinery. When you were growing crops, did you
               | till the land by hand or did you use a tractor? When you
               | were making clothes did you sew by hand or use a sewing
               | machine? Who made your sewing needles?
               | 
               | The (dubious) argument for AI is that using LLMs to write
               | code is the same as using modern construction equipment
               | to build a house: you get the same result for less
               | effort.
        
               | mechagodzilla wrote:
               | I've done all of those except tend livestock and build a
               | house, but I could probably figure those out with some
               | effort.
        
               | kenjackson wrote:
               | > I don't know how to grow crops, build a house, tend
               | livestock, make clothes, weld metal, build a car, build a
               | toaster, design a transistor, make an ASIC, or write an
               | OS. I do know how to write a web site. But if I cede that
               | skill to an automated process, then that is the feather
               | that will break the camel's back?
               | 
                | Reminds me of the Nate Bargatze set where he talks about
                | how, if he were a time traveler to the past, he wouldn't
                | be able to prove it to anyone. The skills most of us
                | have require this supply chain, and then we apply them
                | at the very end. I'm not sure anyone in 1920 cares about
                | my binary analysis skills.
        
               | c22 wrote:
               | Specialization is over-rated. I've done everything in
               | your list except _make an ASIC_ because learning how to
               | do those things was interesting and I prefer when things
               | are done my way.
               | 
               | I started way back in my 20s _just_ figuring out how to
                | write websites. I'm not sure where the camel's back
               | would have broken.
               | 
               | It has, of course, been convenient to be able to
               | "bootstrap" my self-reliance in these and other fields by
               | consuming goods produced by others, but there is no
               | mechanical reason that said goods should be provided by
               | specialists rather than advanced generalists beyond our
               | irrational social need for maximum acceleration.
        
               | jplusequalt wrote:
               | >50 years from new the career of a "coder" will evoke the
               | same historical quaintness as does "switchboard operator"
               | or "wainwright."
               | 
               | And what happens to those coders? For that matter--what
               | happens to all the other jobs at risk of being replaced
               | by AI? Where are all the high paying jobs these
               | disenfranchised laborers will flock to when their
               | previous careers are made obsolete?
               | 
               | We live in a highly specialized society that requires
               | people take out large loans to learn the skills necessary
                | for their careers. Take away their ability to provide
                | their labor, and you seriously threaten millions of
                | workers' chances of obtaining the same quality of life
                | they once had.
               | 
               | I seriously oppose such a future, and if that makes me a
               | Luddite, so be it.
        
               | hackyhacky wrote:
               | > And what happens to those coders? For that matter--what
               | happens to all the other jobs at risk of being replaced
               | by AI?
               | 
               | Some will manage to remain in their field, most won't.
               | 
               | > Where are all the high paying jobs these
               | disenfranchised laborers will flock to when their
               | previous careers are made obsolete?
               | 
               | They don't exist. Instead they'll take low-paying jobs
               | that can't (yet) be automated. Maybe they'll work in
               | factories [1].
               | 
               | > I seriously oppose such a future, and if that makes me
               | a Luddite, so be it.
               | 
               | Like I said, the Luddites were right, in the short term.
               | In the long term, we don't know. Maybe we'll live in a
               | post-scarcity Star Trek world where human labor has been
               | completely devalued, or maybe we'll revert to a feudal
               | society of property owners and indentured servants.
               | 
               | [1] https://www.newsweek.com/bessent-fired-federal-
               | workers-manuf...
        
               | jplusequalt wrote:
               | >They don't exist. Instead they'll take low-paying jobs
               | that can't (yet) be automated. Maybe they'll work in
               | factories
               | 
               | >or maybe we'll revert to a feudal society of property
               | owners and indentured servants.
               | 
               | We as the workers in society have the power to see that
               | this doesn't happen. We just need to organize. Unionize.
               | Boycott. Organize with people in your community to spread
               | worker solidarity.
        
               | DrillShopper wrote:
               | There is no industry that I have worked in that fights
               | against creating or joining unions tooth, claw, and nail
               | quite like software engineering.
        
               | jplusequalt wrote:
               | I think more and more workers are warming up to unions.
               | As wages in software continue to be suppressed, I think
               | we'll see an increase in unionization efforts for
               | software engineers.
        
               | dTal wrote:
               | I dunno, the "tool" that LLMs "replace" is _thinking
               | itself_. That seems qualitatively different than anything
               | that has come before. It's the "tool" that underlies
               | _all_ the others.
        
               | whilenot-dev wrote:
               | > I don't know how to grow crops, build a house, tend
               | livestock, make clothes, weld metal, build a car, build a
               | toaster, design a transistor, make an ASIC, or write an
               | OS. I do know how to write a web site. But if I cede that
               | skill to an automated process, then that is the feather
               | that will break the camel's back?
               | 
               | All the things you mention have a certain objective
               | quality that can be reduced to an approachable minimum. A
               | house could be a simple cabin, a tent, a cave; a piece of
               | cloth could just be a cape; metal can be screwed, glued
               | or cast; a transistor could be a relay or a wooden
               | mechanism etc. ...history tells us all that.
               | 
               | I think when there's a _Homo ludens_ that wants to play,
               | or when there's a _Homo economicus_ that wants us to
               | optimize, there might be one that separates the process
               | of _learning_ from _adaptation_ (_Homo
               | investigans_?)[0]. The process of learning something new
               | could be such a subjective property that keeps a yet
               | unknown natural threshold which can't be lowered (or
               | "reduced") any further. If I were to be overly
               | pessimistic, a hardcore luddite, I'd say that this
               | species is under attack, and there will be a generation
               | that lacks this aspect, but also won't miss it, because
               | this character could have never been experienced in the
               | first place.
               | 
               | [0]: https://en.wikipedia.org/wiki/Names_for_the_human_sp
               | ecies#Li...
        
               | harrall wrote:
               | I don't think specialization is a bad thing but the
               | friends I know that only know their subject seem to...
               | how do I put this... struggle at life and everything a
               | lot more.
               | 
               | And even at work, the coworkers that don't have a lot of
               | general knowledge seem to work a lot harder and get less
               | done because it takes them so much longer to figure
               | things out.
               | 
               | So I don't know... is avoiding the work of learning worth
               | it to struggle at life more?
        
             | bko wrote:
             | I don't know, most of the things I'm reliant on, from my
             | phone, ISP, automobile, etc are built on fragile
             | interdependent supply chains provided by for-profit
             | companies. If you're really worried about this, you should
             | learn survival skills, not the academic topics I'm talking
             | about.
             | 
             | So if you're not bothering to learn how to farm, dress some
             | wild game, etc., chances are this argument won't be
             | convincing for "why should I learn calculus".
        
             | Zambyte wrote:
             | For what it's worth, locally runnable language models are
             | becoming exceptionally capable these days, so if you assume
             | you will have some computer to do computing, it seems
             | reasonable to assume that it will enable you to do some
             | language model based things. I have a server with a single
             | GPU running language models that easily blow GPT 3.5 out of
             | the water. At that point, I am offloading reasoning tasks
             | to my computer in the same way that I offload memory tasks
             | to my computer through my note-taking habits.
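             | 
             | (A minimal sketch of what I mean, using the Hugging Face
             | transformers library; "gpt2" is only a placeholder for
             | whatever checkpoint you run locally:)
             | 
             |   # Load a local text-generation model and query it.
             |   from transformers import pipeline
             | 
             |   generate = pipeline("text-generation", model="gpt2")
             |   out = generate("Binary search works by",
             |                  max_new_tokens=40)
             |   print(out[0]["generated_text"])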
        
           | whatshisface wrote:
           | I don't think memorizing poetry fits your picture. Nobody
           | ever memorized poetry so that they could answer questions
           | about it.
        
             | bko wrote:
              | A large part was to preserve cultural knowledge, which is
              | kind of like answering questions about it: what wisdom or
              | knowledge does it entail? People do the same with
              | religious texts today.
              | 
              | The other part, I imagine, was largely entertainment and
              | social, and memory is a good skill to build.
        
             | KronisLV wrote:
             | It doesn't seem that different from having to write a book
             | report or something like that. Back in school, we also
             | needed to memorize poems and songs to recite them - I quite
             | hated it because my memory was never exactly great. Same as
             | having to remember the vocabulary in a foreign language
             | when learning it, though that might arguably be a bit more
             | directly useful.
        
           | AlexandrB wrote:
           | > For instance, I'm not too concerned about my child's
           | ability to write very legibly (most writing is done on
           | computers), spell very well (spell check keeps us
           | professional), reading a map to get around (GPS), etc
           | 
           | What I don't like are all the hidden variables in these
           | systems. Even GPS, for example, is making some assumptions
           | about what kind of roads you want to take and how to weigh
           | different paths. LLMs are worse in this regard because the
           | creators encode a set of moral and stylistic
           | assumptions/dictates into the model and everybody who uses it
           | is nudged into that paradigm. This is destructive to any kind
           | of original thought, especially in an environment where there
           | are only a handful of large companies providing the models
           | everyone uses.
        
           | Retric wrote:
           | The scope of what's useful to know changes with tools, but
           | having a bullshit detector requires actually knowing some
           | things and being able to reason about the basics.
           | 
            | It's not that LLMs are particularly different; it's that
            | people are less able to determine when they are messing up. A
            | search engine fails and you notice; an LLM fails and your
            | boss, customer, etc. notices.
        
           | andai wrote:
           | >Why should I learn to do X if I can just ask an LLM and it
           | will do it better than me
           | 
           | This may eventually apply to all human labor.
           | 
           | I was thinking, even if they pass laws to mandate companies
           | employ a certain fraction of human workers... it'll be like
           | it already is now: they just let AI do most of the work
           | anyway!
        
           | riohumanbean wrote:
           | Why have children learn to walk? They're better off learning
           | the newest technology of hoverboards and not getting left
           | behind!
        
           | CivBase wrote:
           | Why should you learn how to add when you can just use a
           | calculator? We've had calculators for decades!
           | 
           | Because understanding how addition works is instrumental to
           | understanding more advanced math concepts. And being able to
           | perform simple addition quickly, without a calculator is a
           | huge productivity boost for many tasks.
           | 
           | In the world of education and intellectual development it's
           | not about getting the right answer as quickly as possible.
           | It's about mastering simple things so that you can understand
            | complicated things. And oftentimes mastering a simple thing
           | requires you to manually do things which technology could
           | automate.
        
           | light_hue_1 wrote:
           | > For instance, I'm not too concerned about my child's
           | ability to write very legibly (most writing is done on
           | computers), spell very well (spell check keeps us
           | professional), reading a map to get around (GPS), etc.
           | 
            | I'm the polar opposite. And I'm an AI researcher.
           | 
           | The reason you can't answer your kid when he asks about LLMs
           | is because the original position was wrong.
           | 
           | Being able to write isn't optional. It's a critical tool for
           | thought. Spelling is very important because you need to avoid
            | confusion. If you can't spell, no spell checker can save you
           | when it inserts the wrong word. And this only gets far worse
           | the more technical the language is. And maps are crucial too.
           | Sometimes, the best way to communicate is to draw a map. In
           | many domains like aviation maps are everything, you literally
           | cannot progress without them.
           | 
           | LLMs are no different. They can do a little bit of thinking
           | for us and help us along the way. But we need to understand
           | what's going on to ask the right questions and to understand
           | their answers.
        
           | jplusequalt wrote:
           | >children will have a different perspective
           | 
            | Children will lack the critical thinking skills needed
            | for solving complex problems, and even worse, won't have
            | the work ethic for dealing with the kinds of protracted
            | problems that occur in the real world.
           | 
           | But maybe that's by design. I think the ownership class has
           | decided productivity is more important than societal malaise.
        
           | bcrosby95 wrote:
           | > So how do I answer my child when he asks "Why should I
           | learn to do X if I can just ask an LLM and it will do it
           | better than me"
           | 
           | It's been my experience that LLMs are only better than me at
            | stuff I'm bad at. They're noticeably worse than me at things
            | I'm
           | good at. So the answer to your question depends: can your
           | child get good at things while leaning on an LLM?
           | 
           | I don't know the answer to this. Maybe schools need to expect
           | more from their students with LLMs in the picture.
        
             | gnatolf wrote:
              | Given the rate of improvement wrt LLMs, this may not
              | hold true for long.
        
           | johndough wrote:
           | Use it or lose it. With the invention of the calculator,
           | students lost the ability to do arithmetic. Now, with LLMs,
           | they lose the ability to think.
           | 
           | This is not conjecture by the way. As a TA, I have observed
           | that half of the undergraduate students lost the ability to
           | write any code at all without the assistance of LLMs. Almost
           | all use ChatGPT for most exercises.
           | 
           | Thankfully, cheating technology is advancing at a similarly
           | rapid pace. Glasses with integrated cameras, WiFi and heads-
           | up display, smartwatches with polarized displays that are
           | only readable with corresponding glasses, and invisibly small
           | wireless ear-canal earpieces to name just a few pieces of
           | tech that we could have only dreamed about back then. In the
           | end, the students stay dumb, but the graduation rate barely
           | suffers.
           | 
           | I wonder whether pre-2022 degrees will become the academic
           | equivalent to low-background radiation steel:
           | https://en.wikipedia.org/wiki/Low-background_steel
        
           | quantumHazer wrote:
            | Universities still teach you calculus and real analysis
            | even though Wolfram Alpha exists. It boils down to your
            | willingness to learn something. An LLM can't understand
            | things for you. I'm "early genz" and I write code without
            | LLMs because I find data structures and algorithms very
            | interesting and I want to learn the concepts, not because
            | I'm in love with the syntax of C or Rust (I love the
            | syntax of C btw).
        
           | an_aparallel wrote:
            | This is how we end up with people who can't write legibly,
            | can't smell bad maths (on the news/articles/ads), can't
            | change tires, have no orienteering skills or sense of
            | direction, and memories like swiss cheese. Trust the
            | oracle, son. /s
            | 
            | I think all of the above do one thing brilliantly: build
            | self-confidence.
            | 
            | It's easy to get bullshitted if what you're able to hold in
            | your head is effectively nothing.
        
           | palmotea wrote:
           | > When modern search became more available, a lot of people
           | said there's no point of rote memorization as you can just do
           | a Google search. That's more or less accepted today.
           | 
           | And those people are wrong, in a similar way to how it's
           | wrong to say: "There's no point in having very much RAM, as
           | you can just page to disk."
           | 
           | It's the cognitive equivalent of becoming morbidly obese
           | (another popular decision in today's world).
        
           | OptionOfT wrote:
           | The problem with GPS is that you never learn to orient
           | yourself. You don't learn to have a sense of place, direction
           | or elapsed distance. [0]
           | 
           | As to writing, just the action of writing something down with
           | a pen, on paper, has been proven to be better for
           | memorization than recording it on a computer [1].
           | 
            | If we're not teaching these basic skills because an LLM does
            | it better, how do we learn to be skeptical of the output of
            | the LLM? How do we validate it?
           | 
           | How do we bolster ourselves against corporate influences when
           | asking which of 2 products is healthier? How do we spot
           | native advertising? [2]
           | 
           | [0]: https://www.nature.com/articles/531573a
           | 
           | [1]: https://www.sciencedirect.com/science/article/abs/pii/S0
           | 0016...
           | 
           | [2]: Example: https://www.nytimes.com/paidpost/netflix/women-
           | inmates-separ...
        
         | taftster wrote:
         | I don't think asking "what's wrong with my code" hurts the
         | learning process. In fact, I would argue it helps it. I don't
         | think you learn when you have reached your frustration point
         | and you just want the dang assignment completed. But before
          | reaching that point, having a tutor or assistant you can
          | ask, "hey, I'm just not seeing my mistake, do you have
          | ideas?" goes a long way to foster learning. ChatGPT, used in
          | this way, can be extremely valuable and can definitely unlock
          | learning in new ways we probably haven't even seen yet.
         | 
         | That being said, I agree with you, if you just ask ChatGPT to
         | write a b-tree implementation from scratch, then you have not
         | learned anything. So like all things in academia, AI can be
         | used to foster education or cheat around it. There's been
         | examples of these "cheats" far before ChatGPT or Google
         | existed.
        
           | SoftTalker wrote:
           | No I think the struggle is essential. If you can just ask a
           | tutor (real or electronic) what is wrong with your code, you
           | stop thinking and become dependent on that. Learning to think
           | your way through a roadblock that seems like a showstopper is
           | huge.
           | 
           | It's sort of the mental analog of weight training. The only
           | way to get better at weightlifting is to actually lift
           | weight.
        
             | taftster wrote:
             | If I were to go and try to bench 300lbs, I would absolutely
             | need a spotter to rescue me. Taking on more weight than I
             | can possibly achieve is a setup for failure.
             | 
             | Sure, I should probably practice benching 150lbs. That
             | would be a good challenge for me and I would benefit from
             | that experience. But 300lbs would crush me.
        
         | Maskawanian wrote:
         | Agreed, the only thing that is certain is that they are
         | cheating themselves.
         | 
          | While it can be useful to use LLMs as a tutor if you're stuck,
          | the moment you use one to provide a solution, you stop
          | learning and the tool becomes a required stepping stone.
        
         | hobo_in_library wrote:
         | The challenge is that while LLMs do not know everything, they
         | are likely to know everything that's needed for your
         | undergraduate education.
         | 
         | So if you use them at that level you may learn the concepts at
         | hand, but you won't learn _how to struggle_ to come up with
         | novel answers. Then later in life when you actually hit problem
         | domains that the LLM wasn't trained in, you'll not have learned
         | the thinking patterns needed to persist and solve those
         | problems.
         | 
          | Is that necessarily a bad thing? It's mixed:
          | 
          | - You lower the bar for entry for a certain class of roles,
          | making labor cheaper and problems easier to solve at that
          | level.
          | 
          | - For more senior roles that are intrinsically solving
          | problems without answers written in a book or a blog post
          | somewhere, you need to be selective about how you evaluate
          | the people who are ready to take on that role.
         | 
         | It's like taking the college weed out classes and shifting
         | those to people in the middle of their career.
         | 
         | Individuals who can't make the cut will find themselves
         | stagnating in their roles (but it'll also be easier for them to
         | switch fields). Those who can meet the bar might struggle but
         | can do well.
         | 
          | Businesses will also have to come up with better ways to
          | evaluate candidates. A resume that says "Graduated with a
          | degree in X" will provide less of a signal than it did in the
          | past.
        
         | psygn89 wrote:
         | Agreed, the struggle often leads us to poke and prod an issue
         | from many angles until things finally click. It lets us think
         | critically. In that journey you might've learned other related
          | concepts which further solidify your understanding.
         | 
         | But when the answer flows out of thin air right in front of you
         | with AI, you get the "oh duh" or "that makes sense" moments and
         | not the "a-ha" moment that ultimately sticks with you.
         | 
         | Now does everything need an "a-ha" moment? No.
         | 
         | However, I think core concepts and fundamentals need those
         | "a-ha" moments to build a solid and in-depth foundation of
         | understanding to build upon.
        
           | taftster wrote:
           | Absolutely this. AI can help reveal solutions that weren't
           | seen. An a-ha moment can be as instrumental to learning as
           | the struggle that came before.
           | 
           | Academia needs to embrace this concept and not try to fight
           | it. AI is here, it's real, it's going to be used. Let's teach
           | our students how to benefit from its (ethical) use.
        
         | dyauspitr wrote:
         | I'm pretty sure you can assume close to 100% of students are
         | using LLMs to do their homework.
        
           | ryandrake wrote:
           | And if you're that one person out of 100,000 who is _not_
           | using LLMs to do their homework, you are at a significant
           | disadvantage on the grading curve.
        
             | DontchaKnowit wrote:
              | Maybe, but piss on that, who needs good grades? You'll
              | learn a hell of a lot better.
        
         | andai wrote:
         | I spent much of the past year at public libraries, and I heard
         | the word ChatGPT approximately once per minute, in surround
         | sound. Always from young people, and usually in a hushed
         | tone...
        
         | moltar wrote:
         | I think it's finally time to just stop the homework.
         | 
         | All school work must be done within the walls of the school.
         | 
          | What are we teaching our children? That it's ok to do more
          | work at home?
         | 
         | There are countries that have no homework and they do just
         | fine.
        
           | jplusequalt wrote:
           | Homework helps reinforce the material learned in class. It's
           | already a problem where there is too much material to be fit
           | into a single class period. Trying to cram in enough time for
           | homework will only make that problem worse.
        
             | moltar wrote:
              | Students can do the work the next day to reinforce it.
              | 
              | As I said, there are countries without homework and they
              | seem to do ok. So it's not mandatory by any means.
        
               | jplusequalt wrote:
               | >Can do the work the next day to reinforce.
               | 
               | Keeping the curriculum fixed, there's already barely
                | enough time to cover everything. Cutting the number of
               | lectures in half to make room for in-class homework time
               | does not fix this fundamental problem.
        
               | DontchaKnowit wrote:
               | Just make lecture times longer.
        
         | srveale wrote:
         | IMO it's so easy to ChatGPT your homework that the whole
         | education model needs to flip on its head. Some teachers
          | already do something like this; it's called the "Flipped
         | classroom" approach.
         | 
         | Basically, a student's marks depend mostly (only?) on what they
         | can do in a setting where AI is verifiably unavailable. It
         | means less class time for instruction, but students have a
         | tutor in their pocket anyway.
         | 
         | I've also talked with a bunch of teachers and a couple admins
          | about this. They agree it's a huge problem. At the same time,
         | they are using AI to create their lesson plans and assignments!
         | Not fully of course, they edit the output using their
         | expertise. But it's funny to imagine AI completing an AI
         | assignment with the humans just along for the ride.
         | 
         | The point is, if you actually want to know what a student is
         | capable of, you need to watch them doing it. Assigning homework
         | has lost all meaning.
        
           | hackyhacky wrote:
           | > it's called the "Flipped classroom" approach.
           | 
           | Flipped classroom is just having the students give lectures,
           | instead of the teacher.
           | 
           | > Basically, a student's marks depend mostly (only?) on what
           | they can do in a setting where AI is verifiably unavailable.
           | 
           | This is called "proctored exams" and it's been pretty common
           | in universities for a few centuries.
           | 
           | None of this addresses the real issue, which is whether
           | teachers _should_ be preventing students from using AIs.
        
             | srveale wrote:
             | > Flipped classroom is just having the students give
             | lectures, instead of the teacher.
             | 
             | Not quite. Flipped classroom means more instruction outside
             | of class time and less homework.
             | 
             | > This is called "proctored exams" and it's been pretty
             | common in universities for a few centuries. None of this
             | addresses the real issue
             | 
              | Proctored exams are part of it. In-class assignments are
             | another. Asynchronous instruction is another.
             | 
             | And yes, it addresses the issue. Students can use AI
             | however they see fit, to learn or to accomplish tasks or
             | whatever, but for actual assessment of ability they cannot
             | use AI. And it leaves the door open for "open-book" exams
             | where the use of AI is allowed, just like a calculator and
             | textbook/cheat-sheet is allowed for some exams.
             | 
             | https://en.wikipedia.org/wiki/Flipped_classroom
        
             | bryanlarsen wrote:
             | Flipped classroom means you watch the recorded lecture
             | outside of class time and you do your homework during class
             | time.
        
               | Spivak wrote:
               | Thank you, it's amazing how people don't even try to
                | understand what words mean before dismissing them. Flipped
               | makes way more sense anyway since lectures aren't
               | terribly interactive. Being able to pause/replay/skip
               | around in lectures is underrated.
        
           | vonneumannstan wrote:
           | >Not fully of course, they edit the output using their
           | expertise
           | 
            | Surely this is sarcasm, but really, your average
            | schoolteacher is now a C-student education major.
        
             | srveale wrote:
             | I was talking about people I know and talk with, mostly
             | friends and family, who are smart, hard working, and their
             | students are lucky to have them.
        
         | chalst wrote:
         | Students who do that risk submitting assignments that show they
         | don't understand the course so far.
        
         | 0xffff2 wrote:
         | >For many students, it's literally "let me paste the assignment
         | into ChatGPT and see what it spits out, change a few words and
         | submit that".
         | 
         | Does that actually work? I'm long past having easy access to
         | college programming assignments, but based on my limited
         | interaction with ChatGPT I would be absolutely shocked if it
         | produced output that was even coherent, much less working code
         | given such an approach.
        
           | StefanBatory wrote:
            | I have some subjects, at Master's level, that are solvable
            | by one prompt. One.
            | 
            | Quality of CS/Software Engineering programs varies that
            | much.
        
           | bongodongobob wrote:
           | Why are you asking? Go try it. And yes, depending on the
           | task, it does.
        
             | 0xffff2 wrote:
             | As I said, I'm not a student, so I don't have access to a
             | homework assignment to paste in. Ironically I have pretty
             | much everything I ever submitted for my undergrad, but it
             | seems like I absolutely never archived the assignments for
             | some reason.
        
           | rufus_foreman wrote:
           | >> Does that actually work?
           | 
           | Sure. Works in my IDE. "Create a linked list implementation,
           | use that implementation in a method to reverse a linked list
           | and write example code to demonstrate usage".
           | 
           | Working code in a few seconds.
           | 
           | I'm very glad I didn't have access to anything like that when
           | I was doing my CS degree.
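            | 
            | (For the curious: what it emits is essentially the
            | textbook iterative reversal. A minimal Python sketch of
            | the same idea, not the literal output:)
            | 
            |   class Node:
            |       def __init__(self, value, next=None):
            |           self.value = value
            |           self.next = next
            | 
            |   def reverse(head):
            |       # Point each node at its predecessor; prev ends
            |       # up as the new head of the reversed list.
            |       prev = None
            |       while head is not None:
            |           head.next, prev, head = prev, head, head.next
            |       return prev
            | 
            |   # Build 1 -> 2 -> 3, reverse, print 3 2 1.
            |   node = reverse(Node(1, Node(2, Node(3))))
            |   while node:
            |       print(node.value)
            |       node = node.next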
        
         | ryandrake wrote:
         | I think teachers also need to reconsider how they are measuring
         | mastery in the subject. LLMs exist. There is no putting the cat
         | back into the bag. If your 1980s way to measure a student's
         | mastery of a subject can be fooled by an LLM, then how
         | effective is that measurement in 2020+? Maybe we need to stop
         | using essays as a way to tell if the student has learned the
         | material.
         | 
         | Don't ask me what the solution is. Maybe your product does it.
         | If I knew, I'd be making a fortune selling it to universities.
        
         | teekert wrote:
         | Students do something akin to vibe coding I guess. It may seem
         | impressive at first glance but if anything breaks you are so,
         | so lost. Maybe that's it, break the student's code, see how
          | they fix it. The vibe coding student is easily separated from
         | the real one (of course this real coder can also use AI, just
         | not yoloing it).
         | 
         | I guess you can apply similar mechanics to reports. Some deeper
          | questions and you will know if the report was self-written or
         | if an AI did it.
        
         | bboygravity wrote:
         | I don't get this reasoning. Without LLMs I would learn how to
          | write sub-optimal code that is somewhat functional. With
          | LLMs I instantly see "how it's done" for my exact problem
          | case, which makes me learn way faster. On top of that, it
          | always makes dumb mistakes, which forces you to actually
          | understand what it's spitting out to get it to work
          | properly. Again: that helps with learning.
         | 
         | The fact that you can ask it for a solution for exactly the
         | context you're interested in is amazing and traditional
         | learning doesn't come close in terms of efficiency IMO.
        
           | dingnuts wrote:
           | > With LLMs instantly see "how it's done" for my exact
           | problem case which makes me learn way faster.
           | 
           | No, you see a plausible set of tokens that appear similar to
           | how it's done, and as a beginner, you're not able to tell the
           | difference between a good example and something that is
           | subtly wrong.
           | 
           | So you learn something, but it's wrong. You internalize it.
           | Later, it comes back to bite you. But OpenAI keeps the money
           | for the tokens. You pay whether the LLM is right or not. Sam
           | likes that.
        
             | Spivak wrote:
             | This makes for a good sound bite but it's just not true.
             | The use case of "show me what is a customary solution to
             | <problem>" plays exactly into LLMs strength as a funny kind
             | of search engine. I used to (and still do) search public
             | code for this use case to get a sense of the style and
             | idioms common in a new language/library and the plausible
             | set of tokens is doing exactly that.
        
         | victorbjorklund wrote:
          | In one way I'm glad I learned to code before LLMs. It
          | would be so hard to push through the learning now when you
          | are just a click away from building the app with AI...
        
       | dmurray wrote:
       | I am surprised that business students are relatively low
        | adopters: LLMs seem perfect for helping with presentations, etc.,
       | and business students are stereotypically practical-minded rather
       | than motivated by love of the subject.
       | 
       | Perhaps Claude is disproportionately _marketed_ to the STEM
       | crowd, and the business students are doing the same stuff using
       | ChatGPT.
        
       | jimbob45 wrote:
       | It says STEM undergrad students are the primary beneficiaries of
        | LLMs, but Wolfram Alpha was already able to do the lion's share
        | of
       | most undergrad STEM homework 15 years ago.
        
       | brunocroh wrote:
       | I simply don't waste my time reading an AD as an article.
       | 
       | I take this as seriously as I would if McDonald's published
       | articles about how much weight people lose eating at McDonald's.
        
         | ikesau wrote:
         | It's more like an analysis of what items people order from
         | McDonald's, using McDonald's own data which is otherwise very
         | difficult to collect.
         | 
         | Your loss!
        
           | brunocroh wrote:
           | Yes, maybe, but there is a lot of noise and conflicts of
           | interest.
        
           | AlexandrB wrote:
           | This is why I go to cigarette companies for analysis of the
           | impact of smoking on users. They have the most data!
        
         | lblume wrote:
         | If you had read the article, you would have been able to see
         | that the conclusions don't really align with any economic goals
         | Anthropic might have.
        
           | AlexandrB wrote:
           | I think the point is that the situation is probably worse
           | than what Anthropic is presenting here. So if the conclusions
           | are just damaging, the reality must be truly damning.
        
           | defgeneric wrote:
           | To have the reputation as an AI company that really cares
           | about education and the responsible integration of AI into
           | education is a pretty valuable goal. They are now ahead of
           | OpenAI in this respect.
           | 
           | The problem is that there's a conflict of interest here. The
           | extreme case proves it--leaving aside the feasibility of it,
           | what if the only solution is a total ban on AI usage in
           | education? Anthropic could never sanction that.
        
         | juped wrote:
         | I'm curious if you're willing to say what you (and potentially
         | other people who spell 'AD' like that) think it's an acronym
         | for, by the way.
        
       | karpour wrote:
       | My take: While AI tools can help with learning, the vast majority
       | of students use it to _avoid_ learning
        
         | hervature wrote:
          | This has been my observation about the internet. Growing up in
          | a
         | small town without access to advanced classes, having access to
         | Wikipedia felt like the greatest equalizer in the world. 20
         | years post internet, seeing the most common outcome be that
         | people learn less as a result of unlimited access to
         | information would be depressing if it did not result in my own
         | personal gain.
        
           | karpour wrote:
            | I would say a big difference between the Internet around
            | 2000 and the internet now is that most people shared
            | information in good faith back then, which is not the case
            | anymore. Maybe back then people were just as uncritical of
            | information, but now we really see the impact of people not
            | being critical.
        
         | nthingtohide wrote:
         | My take : AI is the REPL interface for learning activities. All
         | the points which Salman Khan talked about apply here.
        
         | janalsncm wrote:
         | I agree with you, but I hope schools also take the opportunity
         | to reflect on what they teach and how. I used to think I hated
         | writing, but it turns out I just hated English class. (I got a
         | STEM degree because I hated English class so much, so maybe I
         | have my high school English teacher to thank for it.)
         | 
         | Torturing students with five paragraph essays, which is what
         | "learning" looks like for most American kids, is not that great
          | and isn't actually teaching critical thinking, which is most
         | valuable. I don't know any other form of writing that is like
         | that.
         | 
         | Reading "themes" into books that your teacher is convinced are
         | there. Looking for 3 quotes to support your thesis (which must
         | come in the intro paragraph, but not before the "hook" which
         | must be exciting and grab the reader's attention!).
        
       | ozarkerD wrote:
       | I loved asking questions as a kid. To the point of annoying
       | adults. I would have loved to sit and ask these AI questions
       | about all kinds of interests when I was young.
        
         | qwertox wrote:
         | I'm pretty sure that kids at the age of 4 would get an amazing
         | intelligence boost compared to their peers later when they are
         | around 8 years old.
         | 
          | They will clearly recognize other kids who did not have an AI
         | to talk with at that stage when curiosity really blossoms.
        
         | walthamstow wrote:
         | I think it's likely that everyone here was, or even is, that
         | kid and that's why we're here on this website today
        
       | PeterStuer wrote:
       | I feel CS students, and to a lesser degree STEM in general, will
       | always be more early adopters of advancements in computer
       | technology.
       | 
        | They were the first to adopt digital word processing,
        | presentations, printing, and now generative AI, even though in
        | essence all of these would have been disproportionately more
        | hand in glove for the humanities on a purely functional level.
        | 
        | It's just a matter of comfort with and interest in technology.
        
       | chenzo44 wrote:
       | professor here. i set up a website to host openwebui to use in my
       | b-school courses (UG and grad). the only way i've found to get
       | students to stop using it to cheat is to push them to use it
       | until they learn for themselves that it doesn't answer everything
       | correctly. this requires careful thoughtful assignment redesign.
        | every time i grade a submission with the hallmarks of ai-
       | generation, i always find that it fails to cite content from the
       | course and shows a lack of depth. so, i give them the grade they
       | earn. so much hand wringing about using ai to cheat... just
       | uphold the standards. if they are so low that ai can easily game
       | them, that's on the instructor.
        
       | fudged71 wrote:
       | an interesting area potentially missed (though acknowledged as
       | out of scope) is how students might use LLMs for tasks related to
       | early adulthood development. Successfully navigating post-
       | secondary education involves more than academics; it requires
       | developing crucial life skills like resilience, independence,
       | social integration, and well-being management, all of which are
       | foundational to academic persistence and success. Understanding
       | if and how students leverage AI for these non-academic,
       | developmental challenges could offer a more holistic picture of
       | AI's role in student life and its indirect impact on their
        | educational journey.
        
       | zebomon wrote:
       | The writing is irrelevant. Who cares if students don't learn how
       | to do it? Or if the magazines are all mostly generated a decade
       | from now? All of that labor spent on writing wasn't really making
       | economic sense.
       | 
       | The problem with that take is this: it was never _about_ the act
       | of writing. What we lose, if we cut humans out of the equation,
       | is writing as a proxy for what actually matters, which is
       | thinking.
       | 
       | You'll soon notice the downsides of not-thinking (at scale!) if
       | you have a generation of students who weren't taught to exercise
       | their thinking by writing.
       | 
       | I hope that more people come around to this way of seeing things.
       | It seems like a problem that will be much easier to mitigate than
       | to fix after the fact.
       | 
       | A little self-promo: I'm building a tool to help students and
        | writers create proof that they have written something the good
        | ol'
       | fashioned way. Check it out at https://itypedmypaper.com and let
       | me know what you think!
        
         | spongebobstoes wrote:
         | Writing is not necessary for thinking. You can learn to think
         | without writing. I've never had a brilliant thought while
         | writing.
         | 
         | In fact, I've done a lot more thinking and had a lot more
         | insights from talking than from writing.
         | 
          | Writing can be a useful tool to help with rigorous thinking. In
          | my opinion, it is mostly about augmenting the author's
          | effective memory to be larger and more precise.
         | 
         | I'm sure the same effect could be achieved by having AI
         | transcribe a conversation.
        
           | Unearned5161 wrote:
           | I'm not settled on transcribed conversation being an adequate
           | substitute for writing, but maybe it's better than nothing.
           | 
            | There's something irreplaceable about the absoluteness of
            | words on paper and the decisions one has to make to write
            | them out. Conversational speech is, almost by definition,
            | more relaxed and casual. The bar is lower, and as such,
            | the bar for thoughts is lower; in order of ease of
            | handwaving I think it goes: mental, speech, writing.
            | 
            | Furthermore, there's the concept of editing, which I'm
            | unsure how to carry out gracefully in a conversational
            | setting. Being able to revise words, delete, move around,
            | can't be done with conversation unless you count "forget
            | I said that, it's actually more like this..." as
            | suitable.
        
         | janalsncm wrote:
         | How does your product prevent a person from simply retyping
         | something that ChatGPT wrote?
         | 
         | I think the prevalence of these AI writing bots means schools
         | will have to start doing things that aren't scalable: in-class
         | discussions, in-person writing (with pen and paper or locked
         | down computers), way less weight given to remote assignments on
         | Canvas or other software. Attributing authorship from text
         | alone (or keystroke patterns) is not possible.
        
           | zebomon wrote:
           | It may be possible that with enough data from the two
           | categories (copied from ChatGPT and not), your keystroke
           | dynamics will differ. This is an open question that my co-
           | founder and I are running experiments on currently.
           | 
           | So, I would say that while I wouldn't fully dispute your
           | claim that attributing authorship from text alone is
           | impossible, it isn't yet totally clear one way or the other
           | (to us, at least -- would welcome any outside research).
           | 
           | Long-term -- and that's long-term in AI years ;) -- gaze
           | tracking and other biometric tracking will undoubtedly be
           | necessary. At some point in the near future, many people will
           | be wearing agents inside earbuds that are not obvious to the
           | people around them. That will add another layer of complexity
           | that we're aware of. Fundamentally, it's more about creating
           | evidence than creating proof.
           | 
           | We want to give writers and students the means to create
           | something more detailed than they would get from a chatbot
           | out-of-the-box, so that mimicking the whole act of writing
           | becomes more complicated.
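            | 
            | (To make "keystroke dynamics" concrete: a toy Python
            | sketch of the sort of timing features involved. The
            | feature set and the 2-second pause threshold are purely
            | illustrative, not our actual model:)
            | 
            |   from statistics import mean, stdev
            | 
            |   def interval_features(events):
            |       # events: (timestamp_seconds, key) tuples in
            |       # typing order. Composing text tends to show
            |       # long pauses and bursts; transcribing from
            |       # another window tends to be steadier.
            |       gaps = [b[0] - a[0]
            |               for a, b in zip(events, events[1:])]
            |       return {"mean_gap": mean(gaps),
            |               "stdev_gap": stdev(gaps),
            |               "long_pauses": sum(g > 2.0 for g in gaps)}
            | 
            |   print(interval_features(
            |       [(0.0, "t"), (0.2, "h"), (0.35, "e"), (3.1, " ")]))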
        
             | pr337h4m wrote:
             | At this point, it would be easier to stick to in-person
             | assignments.
        
               | zebomon wrote:
               | It certainly would be! I think for many students though,
               | there's something lost there. I was a student who got a
               | lot more value out of my take-home work than I did out of
               | my in-class work. I don't think that I ever would have
               | taken the interest in writing that I did if it wasn't
               | such a solitary, meditative thing for me.
        
           | logicchains wrote:
           | >I think the prevalence of these AI writing bots means
           | schools will have to start doing things that aren't scalable
           | 
           | It won't be long 'til we're at the point that embodied AI can
           | be used for scalable face-to-face assessment that can't be
           | cheated any easier than a human assessor.
        
         | ketzu wrote:
         | > The writing is irrelevant.
         | 
         | In my opinion this is not true. Writing is a form of
         | communicating ideas. Structuring and communicating ideas with
         | others is really important, not just in written contexts, and
         | it needs to be trained.
         | 
         | Maybe the way universities do it is not great, but writing in
         | itself is important.
        
           | zebomon wrote:
           | Kindly read past the first line, friend :)
        
         | knowaveragejoe wrote:
         | Paul Graham had a recent blogpost about this, and I find it
         | hard to disagree with.
         | 
         | https://www.paulgraham.com/writes.html
        
         | karn97 wrote:
         | I literally never write while thinking lol stop projecting this
         | hard
        
       | technoabsurdist wrote:
       | I'm an undergrad at a T10 college. Walking through our library, I
       | often notice about 30% of students have ChatGPT or Claude open on
       | their screens.
       | 
       | In my circle, I can't name a single person who doesn't heavily
       | use these tools for assignments.
       | 
       | What's fascinating, though, is that the most cracked CS students
       | I know deliberately avoid using these tools for programming work.
       | They understand the value in the struggle of solving technical
       | problems themselves. Another interesting effect: many of these
       | same students admit they now have more time for programming and
       | learning they "care about" because they've automated their
       | humanities, social sciences, and other major requirements using
       | LLMs. They don't care enough about those non-major courses to
       | worry about the learning they're sacrificing.
        
       | proteal wrote:
       | I'm about to graduate from a top business school with my MBA and
       | it's been wild seeing AI evolve over the last 2 years.
       | 
       | GPT3 was pretty ass - yet some students would look you dead in
       | the eyes with that slop and claim it as their own. Fast forward
       | to last year when I complimented a student on his writing and he
       | had to stop me - "bro this is all just AI."
       | 
       | I've used AI to help build out frameworks for essays and suggest
       | possible topics and it's been quite helpful. I prefer to do the
       | writing myself because the AIs tend to take very bland positions.
       | The AIs are also great at helping me flesh out my writing. I ask
       | "does this make sense" and it tells me patiently where my writing
       | falls off the wagon.
       | 
       | AI is a game changer in a big way. Total paradigm shift. It can
       | now take you 90% of the way with 10% of the effort. Whether this
       | is good or bad is beyond my pay grade. What I can say is that if
        | you are not leveraging AI, you will fall behind those who are.
        
       | moojacob wrote:
       | How can I, as a student, avoid hindering my learning with
       | language models?
       | 
       | I use Claude, a lot. I'll upload the slides and ask questions.
       | I've talked to Claude for hours trying to break down a problem. I
       | think I'm learning more. But what I think might not be what's
       | happening.
       | 
       | In one of my machine learning classes, cheating is a huge issue.
       | People are using LMs to answer multiple choice questions on
       | quizzes that are on the computer. The professors somehow found
       | out students would close their laptops without submitting, go out
       | into the hallway, and use a LM on their phone to answer the
       | questions. I've been doing worse in the class and chalked it up
       | to it being grad level, but now I think it's the cheating.
       | 
        | I would never cheat like that, but when I'm stuck and use
        | Claude for a hint on the HW, am I losing neurons? The other day I
       | used Claude to check my work on a graded HW question (breaking
       | down a binary packet) and it caught an error. I did it on my own
       | before and developed some intuition but would I have learned more
       | if I submitted that and felt the pain of losing points?
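        | 
        | (For context, this is the kind of exercise I mean, with a
        | made-up field layout; Python's struct module does the
        | unpacking:)
        | 
        |   import struct
        | 
        |   # Hypothetical 8-byte header: version (1 byte), flags
        |   # (1 byte), length (2 bytes), sequence number (4 bytes),
        |   # all big-endian.
        |   packet = bytes([0x02, 0x01, 0x00, 0x10,
        |                   0x00, 0x00, 0x00, 0x2A])
        |   version, flags, length, seq = struct.unpack(">BBHI",
        |                                               packet)
        |   print(version, flags, length, seq)  # -> 2 1 16 42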
        
         | lunarboy wrote:
          | This sounds fine? Copy-pasting LLM output you don't
          | understand is a short-term dopamine hit that only hurts
          | you long term. If you struggle first, or strategically
          | ping-pong with the LLM to arrive at the answer, and can
          | ultimately understand the underlying reasoning... why not
          | use it?
         | 
         | Of course the problem is the much lower barrier for that to
         | turn into cutting corners or full on cheating, but always
         | remember it ultimately hurts you the most long term.
        
         | azemetre wrote:
         | Can you do all this without relying on any LLM usage? If so
         | then you're fine.
        
         | knowaveragejoe wrote:
         | It's a hard question to answer and one I've been mindful of in
         | using LLMs as tutoring aids for my own learning purposes. Like
         | everything else around LLM usage, it probably comes down to
          | careful prompting... I really _don't_ want the answer right
         | away. I want to propose my own thoughts and carefully break
         | them down with the LLM. Claude is pretty good at this.
         | 
         | "productive struggle" is essential, I think, and it's hard to
         | tease that out of models that are designed to be as immediately
         | helpful as possible.
        
         | dwaltrip wrote:
         | Only use LLMs for half of your work, at most. This will ensure
         | you continue to solidify your fundamentals. It will also
         | provide an ongoing reality check.
         | 
         | I'd also have sessions / days where I don't use AI at all.
         | 
         | Use it or lose it. Your brain, your ability to persevere
         | through hard problems, and so on.
        
         | quantumHazer wrote:
         | As a student, I use LLMs as little as possible and try to rely
         | on books whenever possible. I sometimes ask LLMs questions
         | about things that don't click, and I fact-check their
         | responses. For coding, I'm doing the same. I'm just raw dogging
         | the code like a caveman because I have no corporate deadlines,
         | and I can code whatever I want. Sometimes I get stuck on
         | something and ask an LLM for help, always using the web
         | interface rather than IDEs like Cursor or Windsurf.
         | Occasionally, I let the LLMs write some boilerplate for boring
         | things, but it's really rare and I tend not to use them too
         | much. This isn't due to Luddism but because I want to learn,
         | and I don't want slop in my way.
        
       | defgeneric wrote:
       | After reading the whole article I still came away with the
        | suspicion that this is a PR piece that is designed to head off
       | strict controls on LLM usage in education. There is a fundamental
       | problem here beyond cheating (which is mentioned, to their
       | credit, albeit little discussed). Some academic topics are only
       | learned through sustained, even painful, sessions where attention
       | has to be fully devoted, where the feeling of being "stuck" has
       | to be endured, and where the brain is given space and time to do
       | the real work of synthesizing, abstracting, and learning, or, in
       | short, _thinking_. The prompt-chains where students are asking
       | "show your work" and "explain" can be interpreted as the kind of
       | back-and-forth that you'd hear between a student and a teacher,
       | but they could also just be evidence of higher forms of
       | "cheating". If students are not really working through the
       | exercises at the end of each chapter, but instead offloading the
       | task to an LLM, then we're going to have a serious competency
       | issue. Nobody ever actually learns anything.
       | 
       | Even in self-study, where the solutions are at the back of the
       | text, we've probably all had the temptation to give up and just
       | flip to the answer. Anthropic would be more responsible to admit
       | that the solution manual to every text ever made is now instantly
       | and freely available. This has to fundamentally change pedagogy.
       | No discipline is safe, not even those like music where you might
       | think the end performance is the main thing (imagine a promising,
       | even great, performer who cheats themselves in the education
       | process by offloading any difficult work in their music theory
       | class to an AI, coming away learning essentially nothing).
       | 
       | P.S. There is also the issue of grading on a curve in the current
       | "interim" period where this is all new. Assume a lazy professor,
       | or one refusing to adopt any new kind of teaching/grading method:
       | the "honest" students have no incentive to do it the hard way
       | when half the class is going to cheat.
        
       | juancroldan wrote:
        | With so much collaborative usage, I wonder why Claude group
        | chats aren't already a feature.
        
       | atoav wrote:
        | As someone teaching at the university level, my goals in
        | teaching are (in that order):
       | 
       | 1. Get people interested in my topics and removing fears and/or
       | preconceived notions about whether it is something for them or
       | not
       | 
       | 2. Teach students general principles and the ability to go deeper
       | themselves when and if it is needed
       | 
       | 3. Giving them the ability to apply the learned
       | principles/material in situations they encounter
       | 
       | I think removing fear and sparking interest is a precondition
       | for the other two. And if people are interested, _they_ want to
       | understand it, and then they use AI to answer their own
       | questions instead of blindly letting it do the work.
       | 
       | And even before AI you would have students who thought they did
       | themselves favours by going a learn-and-forget route or cheating.
       | AI just makes it a little easier to do that. But in any pressure
       | situation, like a written assignment under supervision, it will
       | come to light anyway, whether someone knows their shit or not.
       | 
       | I'm lucky that the topics I teach (electronics and media
       | technology) are very applied anyway, so AI does not have a big
       | impact as of now. Not understanding things isn't really an
       | option when you have to run a mixing desk in a venue with a
       | hundred people, or set up a tripod without wrecking the 6000 EUR
       | camera on top.
       | 
       | But I generally teach people who are in it for the interest and
       | not for some prestige that comes with having a BA/MA. I can
       | imagine this is quite different in other fields where people are
       | in it for the money or the prestige.
        
       | mceoin wrote:
       | I'm curious why people think business is so underrepresented
       | as a user group, especially since "analyzing" accounts for 30%
       | of the Bloom's Taxonomy results. My two theories:
       | 
       | - LLMs are good enough to zero- or few-shot most business
       | questions and assignments, so the number of questions is low
       | vs. other tasks like writing a codebase.
       | 
       | - Form factor (I'm biased here): maybe a threads-only
       | interface isn't best for business analysis?
        
       | bsoles wrote:
       | My BS detector went up to 11 as I was reading the article. Then I
       | realized that "Education Report" was written by Anthropic itself.
       | The article is a prime example of AI-washing.
       | 
       | > Students primarily use AI systems for creating...
       | 
       | > Direct conversations, where the user is looking to resolve
       | their query as quickly as possible
       | 
       | Aka cheating.
        
       | xcke wrote:
       | This topic is also interesting to me because I have small
       | children.
       | 
       | Currently, I view LLMs as huge enablers. They helped me create a
       | side-project alongside my primary job, and they make development
       | and almost anything related to knowledge work more interesting. I
       | don't think they made me think less; rather, they made me think a
       | lot more, work more, and absorb significantly more information.
       | But I am a senior, motivated, curious, and skilled engineer with
       | 15+ years of IT, Enterprise Networking, and Development
       | experience.
       | 
       | There are a number of ways one can use this technology. You can
       | use it as an enabler, or you can use it for cheating. The
       | education system needs to adapt rapidly to the challenges that
       | are coming, and rapid adaptation is often a significant problem
       | (particularly in countries like Hungary). For example, consider
       | an exam where
       | you are allowed to use AI (similar to open-book exams), but the
       | exam is designed in such a way that it is sufficiently difficult,
       | so you can only solve it (even with AI assistance) if you possess
       | deep and broad knowledge of the domain or topic. This is doable.
       | Maybe the scoring system will be different, focusing not just on
       | whether the solution works, but also on how elegant it is. Or, in
       | the Creator domain, perhaps the focus will be on whether the
       | output is sufficiently personal, stylish, or unique.
       | 
       | I tend to think current LLMs are more like tools and enablers. I
       | believe that every area of the world will now experience a boom
       | effect and accelerate exponentially.
       | 
       | When superintelligence arrives--and let's say it isn't sentient
       | but just an expert system--humans will still need to chart the
       | path forward and hopefully control it in such a way that it
       | remains a tool, much like current LLMs.
       | 
       | So yes, education, broad knowledge, and experience are very
       | important. We must teach our children to use this technology
       | responsibly. Because of this acceleration, I don't think the age
       | of AI will require less intelligent people. On the contrary,
       | everything will likely become much more complex and abstract,
       | because every knowledge worker (who wants to participate) will be
       | empowered to do more, build more, and imagine more.
        
       | pugio wrote:
       | I've used AI for one of the best studying experiences I've had in
       | a long time:
       | 
       | 1. Dump the whole textbook into Gemini, along with various
       | syllabi/learning goals.
       | 
       | 2. (Carefully) Prompt it to create Anki flashcards to meet
       | each goal (sketched at the end of this comment).
       | 
       | 3. Use Anki (duh).
       | 
       | 4. Dump the day's flashcards into a ChatGPT session, turn on
       | voice mode, and ask it to quiz me.
       | 
       | Then I can go about my day answering questions. The best part is
       | that if I don't understand something, or am having a hard time
       | retaining some information, I can immediately ask it to explain -
       | I can start a whole side tangent conversation deepening my
       | understanding of the knowledge unit in the card, and then go
       | right back to quizzing on the next card when I'm ready.
       | 
       | It feels like a learning superpower.
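       | 
       | For anyone curious, step 2 is scriptable. Here's a minimal
       | Python sketch assuming the genanki library; generate_cards is
       | a hypothetical stand-in for whatever LLM call you use to turn
       | chapter text and learning goals into question/answer pairs,
       | and the file names are made up:
       | 
       |     import genanki
       | 
       |     # Card layout: question on the front, answer on the back.
       |     # The numeric IDs are arbitrary but must stay stable.
       |     model = genanki.Model(
       |         1607392319, 'Study Cards',
       |         fields=[{'name': 'Question'}, {'name': 'Answer'}],
       |         templates=[{
       |             'name': 'Card 1',
       |             'qfmt': '{{Question}}',
       |             'afmt': '{{FrontSide}}<hr id="answer">{{Answer}}',
       |         }])
       | 
       |     def generate_cards(chapter_text, goals):
       |         # Hypothetical stand-in: in practice, prompt your LLM
       |         # to emit (question, answer) pairs covering each goal.
       |         return [('What is spaced repetition?',
       |                  'Reviewing material at increasing intervals'
       |                  ' to improve long-term retention.')]
       | 
       |     chapter_text = open('chapter1.txt').read()  # textbook text
       |     goals = ['explain spaced repetition']       # syllabus goals
       | 
       |     deck = genanki.Deck(2059400110, 'Textbook Ch. 1')
       |     for q, a in generate_cards(chapter_text, goals):
       |         deck.add_note(genanki.Note(model=model, fields=[q, a]))
       | 
       |     # Writes a package you can import straight into Anki.
       |     genanki.Package(deck).write_to_file('chapter1.apkg')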
        
       | iteratethis wrote:
       | I think there are ways for teachers to embrace AI in teaching.
       | 
       | Let AI generate a short novel. The student is tasked with
       | reading it and criticizing what's wrong with it. This requires
       | focus and advanced reading comprehension.
       | 
       | Show 4 AI-generated code solutions. Let the student explain which
       | one is best and why.
       | 
       | Show 10 AI-generated images and let art students analyze flaws.
       | 
       | And so on.
        
       | kaonwarb wrote:
       | While recognizing the material downsides for education in the
       | age of AI, I envy serious students who now have access to these
       | systems. As an engineering undergrad at a research-focused
       | institution a couple decades ago, I had a few classes taught by
       | professors who appeared entirely uninterested in whether their
       | students were comprehending the material or not. I would have
       | given a lot for the ability to ask a modern frontier LLM to
       | explain a concept to me in a different way when the original
       | breezed-through, "obvious" approach didn't connect with me.
        
       | walleeee wrote:
       | > Students primarily use AI systems for creating (using
       | information to learn something new)
       | 
       | this is a smooth way to not say "cheat" in the first paragraph
       | and to reframe creativity in a way that reflects positively on
       | llm use. in fairness they then say
       | 
       | > This raises questions about ensuring students don't offload
       | critical cognitive tasks to AI systems.
       | 
       | and later they report
       | 
       | > nearly half (~47%) of student-AI conversations were Direct--
       | that is, seeking answers or content with minimal engagement.
       | Whereas many of these serve legitimate learning purposes (like
       | asking conceptual questions or generating study guides), we did
       | find concerning Direct conversation examples including:
       | 
       | > - Provide answers to machine learning multiple-choice
       | questions
       | 
       | > - Provide direct answers to English language test questions
       | 
       | > - Rewrite marketing and business texts to avoid plagiarism
       | detection
       | 
       | kudos for addressing this head on. the problem here, and the
       | reason these are not likely to be democratizing but rather wedge
       | technologies, is not that they make grading harder or violate
       | principles of higher education but that they can disable people
       | who might otherwise learn something.
        
       ___________________________________________________________________
       (page generated 2025-04-09 23:00 UTC)