[HN Gopher] Metacognitive laziness: Effects of generative AI on ...
       ___________________________________________________________________
        
       Metacognitive laziness: Effects of generative AI on learning
       motivation
        
       Author : freddier
       Score  : 263 points
       Date   : 2025-01-21 13:47 UTC (9 hours ago)
        
 (HTM) web link (bera-journals.onlinelibrary.wiley.com)
 (TXT) w3m dump (bera-journals.onlinelibrary.wiley.com)
        
       | byyoung3 wrote:
        | It's increasing my curiosity because it allows me to run more
        | experiments.
        
         | sitkack wrote:
         | The paper says that LLM usage doesn't appear to move baseline
         | curiosity. Thanks aithrowawaycomm for
         | https://arxiv.org/pdf/2412.09315
         | 
         | Ridiculous that academic work on the technology of education is
         | behind a paywall and not open access. Stinks.
        
         | thecupisblue wrote:
          | Exactly this. While I might only scratch the surface of some
          | topics, it helps me cast a wider net of cognitive exploration in the
         | free time I have. This in turn leads me into deeper rabbit
         | holes for things that pique my interest, leading to faster
         | iteration of the knowledge tree, while also providing me with a
         | way to estimate my understanding of the topic.
        
       | diggan wrote:
       | > What is particularly noteworthy is that AI technologies such as
       | ChatGPT may promote learners' dependence on technology and
       | potentially trigger "metacognitive laziness". In conclusion,
       | understanding and leveraging the respective strengths and
       | weaknesses of different agents in learning is critical in the
       | field of future hybrid intelligence.
       | 
       | Maybe I'm trying to read and understand it too quickly, but I
       | don't see anything in the abstract that supports that strong
       | conclusion.
       | 
       | > The results revealed that: (1) learners who received different
       | learning support showed no difference in post-task intrinsic
       | motivation; (2) there were significant differences in the
       | frequency and sequences of the self-regulated learning processes
       | among groups; (3) ChatGPT group outperformed in the essay score
       | improvement but their knowledge gain and transfer were not
       | significantly different. Our research found that in the absence
       | of differences in motivation, learners with different supports
       | still exhibited different self-regulated learning processes,
       | ultimately leading to differentiated performance.
       | 
        | The ChatGPT group performed better on essay scores and showed
        | no deficit in knowledge gain or transfer, but they exhibited
        | different self-regulated learning processes (not worse or
        | better, just different?).
       | 
       | If anything, my own conclusion from the abstract would be that
       | ChatGPT is helpful as a learning tool as it helped them improve
       | essay scores without compromising knowledge learning. But again,
        | I only read the abstract; maybe the paper goes into details
        | that better support its strong conclusion.
        
         | izend wrote:
         | I have found ChatGPT is pretty good at explaining topics when
         | the source documentation is poorly written or lacks examples.
          | Obviously it does make mistakes, so skepticism about its
          | output is a good idea.
        
         | jmann99999 wrote:
          | I drew a similar conclusion from the abstract as you did. The
          | only negative I could take from it is that with higher essay
          | scores one might expect higher knowledge gain, and that
          | wasn't present.
         | 
         | However, I agree that that doesn't really seem to be a negative
         | over other methods.
        
         | sitkack wrote:
          | I have been using LLMs for my own education since they came
          | out, and I have watched my kid use them.
         | 
          | Some kids might pick up a calculator and then use it to see
         | geometric growth, or look for interesting repeating patterns of
         | numbers.
         | 
         | Another kid might just use it to get their homework done faster
         | and then run outside and play.
         | 
         | The second kid isn't learning more via the use of the tool.
         | 
          | So the paper warns that the use of LLMs doesn't necessarily
          | change what the student is interested in or how they are
          | motivated, and that we might need to build checks into the
          | tool for how it is being used, to reduce the impact of
          | scenario 2.
        
         | apercu wrote:
          | I don't really know what "metacognitive laziness" is, even
          | after they explain it in the paper, but I use LLMs to filter
         | noise and help automate the drudgery of certain tasks, allowing
         | me to use my energy and peak focus time on the more complicated
         | tasks. Anecdotal, obviously. But I don't see how this hinders
         | me in my ability to "self-regulate". It's just a tool, like a
         | hammer.
         | 
          | From a learning perspective, it can also be a shortcut to
         | getting something explained in several different ways until the
         | concept "clicks".
        
           | danielbln wrote:
           | I also appreciate being able to tell the LLM "look, it's
           | late, I'm tired, really dumb this down for me" and it does
           | it.
        
         | felideon wrote:
          | Yeah, the abstract could use a bit more work. The gist of it
          | is that being in a closed-loop cycle with ChatGPT only helps
          | with the task at hand, not with engaging in the full learning
          | process. Instead they say "When using AI in learning, learners
         | should focus on deepening their understanding of knowledge and
         | actively engage in metacognitive processes such as evaluation,
         | monitoring, and orientation, rather than blindly following
         | ChatGPT's feedback solely to complete tasks efficiently."
        
       | aithrowawaycomm wrote:
       | Preprint: https://arxiv.org/abs/2412.09315
        
         | sitkack wrote:
          | Thanks for the link, but clearly no one is reading it. Which
          | is super ironic: they aren't even summarizing it with AI and
          | using that information.
         | 
         | Most folks are projecting what the title says into their own
         | emotion space and then riffing on that.
         | 
         | The authors even went so far as to boil the entire paper down
         | into bullet points, you don't even need the pdf.
        
           | felideon wrote:
           | > Most folks are projecting what the title says into their
           | own emotion space and then riffing on that.
           | 
           | Yeah, or the abstract which is a bit vague.
        
             | sitkack wrote:
              | The bullet points below the abstract are basically the
              | paper w/o reading it.
        
       | spatalo wrote:
        | The same is true for Google, GPS, etc.
        
       | lr4444lr wrote:
        | The abstract does not define "metacognitive laziness", nor do
        | the prior statements of the results contextually suggest what
        | it means.
       | 
       | Personally speaking, I find being able to ask ChatGPT continually
       | more nuanced questions about an initial answer the one clear
       | benefit over a Google search, where I have diminishing marginal
       | returns on my inquisitiveness for the time invested over
       | subsequent searches. The more precisely I am able to formulate my
       | question on a traditional search engine, the harder it is for
        | non-SEO-optimized results to appear: what does appear is either
        | meant for a casual reader with no new information, or is a very
        | specialized resource that requires extensive professional
        | background knowledge. LLMs really build that bridge to
        | precisely the answers I want.
        
         | jcims wrote:
         | This is my take as well.
         | 
         | There was a story a couple days ago about a neural network
         | built on a single photonic chip. I fed the paper to ChatGPT and
         | was able to use it to develop a much more meaningful and
         | comprehensive understanding of what the chip actually
         | delivered, how it operated, the fundamental operating
         | principles of core components and how it could be integrated
         | into a system.
         | 
         | The fact that I now have a tireless elucidator on tap to help
         | explore a topic (hallucination caveats notwithstanding)
         | actually increases my motivation to explore dense technical
         | information and understanding of new concepts.
         | 
         | The one area where I do think it is detrimental is my
          | willingness to start writing content on a proverbial blank sheet
         | of paper. I explore the topic with ChatGPT to get a rough
         | outline, maybe some basic content and then take it from there.
        
           | squigz wrote:
           | > (hallucination caveats notwithstanding)
           | 
           | This is a pretty big caveat to the goal of
           | 
           | > develop a much more meaningful and comprehensive
           | understanding
           | 
            | Which is still my biggest issue with LLMs. In the little I
            | use them, the answers are still confidently wrong a lot of
            | the time. Has this changed?
        
             | tyzoid wrote:
              | I've found them to be quite accurate when given enough
              | context data, for example feeding an article into the
              | context window and asking questions about it. Relying on
              | the LLM's internal trained knowledge seems to be less
              | reliable.
        
             | setsewerd wrote:
             | I use ChatGPT a lot each day for writing and organizing
             | tasks, and summaries/explanations of articles etc.
             | 
             | When dealing with topics I'm familiar with, I've found the
             | hallucinations have dropped substantially in the last few
              | years from GPT-2 to GPT-3 to GPT-4 to 4o, especially when web
             | search is incorporated.
             | 
             | LLMs perform best in this regard when working with existing
             | text that you've fed them (whether via web search or
             | uploaded text/documents). So if you paste the text of a
             | study to start the conversation, it's a pretty safe bet
             | you'll be fine.
             | 
             | If you don't have web search turned on, I'd still avoid
             | treating the chat as a search engine though, because 4o
             | will still get little details wrong here and there,
             | especially for newer or more niche topics that wouldn't be
             | as well-represented in the training data.
        
             | bloopernova wrote:
             | I've found that whatever powers Kagi.com's answer seems to
             | be pretty accurate. It cites articles and other sources.
             | 
             | Trying a share link, hope it works:
             | 
             | https://kagi.com/search?q=what+factors+affect+the+freezing+
             | p...
        
               | freediver wrote:
                | What powers it is Kagi Search :) All chatbots have
                | access to similar models; what distinguishes the answer
                | quality is/will be the quality of the search results
                | fed to them.
        
             | jcims wrote:
             | I agree in general but the way this has worked for me in
             | practice is that I approach things hierarchically up and
             | down. Any specific hallucinations tend to come out in the
             | wash as the same question is asked from different layers of
             | abstraction.
        
           | epolanski wrote:
           | On the other hand you might be getting worse at reading those
           | papers yourself.
           | 
            | The more youngsters skip the hassle of banging their heads
            | against some topic, the less able they will be to learn at
            | a later age.
           | 
            | There's more to learning than getting information; it's also
           | about processing it (which we are offloading to LLMs). In
           | fact I'd say that the whole point of going through school is
           | to learn how to process and absorb information.
           | 
           | That might be the cognitive laziness.
        
             | parpfish wrote:
              | What if the LLMs are teaching us that long-form
             | prose/technical writing is just a really bad, unnatural
             | format for communication but natural dialogues are a good
             | format?
        
               | amrocha wrote:
                | If that were the case, every scientific paper would be
                | written as a Socratic dialogue. But they're not, because
                | that's a good format for beginners, not for science.
        
               | parpfish wrote:
                | The reason the current format exists and is used is that
                | it's very information dense. I think scientific papers
                | would be better if they were Socratic dialogues.
               | 
                | But the limitation in publishing a dialogue is that
                | you'd only get to publish one of them, and each reader
                | is going to come in with different questions and goals
                | for what they want out of the paper.
        
               | epolanski wrote:
                | The way I see it, it is sort of like debugging code
                | you're not well acquainted with.
               | 
                | You're still going to learn either way, but going
                | through the hassle of understanding the system means
                | developing a method for debugging it and learning about
                | it along the way...
               | 
               | Of course a senior could point you to the issue right
               | away, probably an llm too, and even provide a learning
               | opportunity, but does it hold the same lasting impact of
               | being able to overcome the burden yourself?
               | 
               | Which one makes a more lasting effect on your abilities
               | and skills?
               | 
                | Again, LLMs are a tool, but if people in school/college
                | start using them to offload the reasoning part, they are
                | not developing it themselves.
        
             | cube2222 wrote:
             | Sure, same as I'm probably pretty bad at going to the
             | library and looking up information there, with the advent
             | of the internet.
             | 
             | In practice, this lets you reasonably process the knowledge
             | from a lot more papers than you otherwise would, which I
             | think is a win. The way we learn is evolving, as it has in
             | the past, and that's a good thing.
             | 
             | Though I agree that this will be another way for lazy
             | children to avoid learning (by just letting AI do the
             | exercises), and we'll need to find a good solution for
             | that, whatever it may be.
        
               | miltonlost wrote:
               | Not being able to glean information from a paper is
               | wildly different than being unable to use a card catalog.
               | The former is basic reading comprehension; the latter is
               | a technology.
               | 
               | You AREN'T learning what that paper is saying; you're
               | learning parts of what the LLM says is useful.
               | 
               | If you read just theorems, you aren't learning math. You
               | need to read the proof too, and not just a summary of the
               | proof.
        
             | jcims wrote:
             | I do read the paper, but when you run into dense
             | explanations like this:
             | 
              | >To realize a programmable coherent optical activation
              | function, we developed a resonant electro-optical
              | nonlinearity (Fig. 1(iii)). This device directs a fraction
              | of the incident optical power |b|^2 into a photodiode by
              | programming the phase shift θ in an MZI. The photodiode is
              | electrically connected to a p-n-doped resonant microring
              | modulator, and the resultant photocurrent (or photovoltage)
              | detunes the resonance by either injecting (or depleting)
              | carriers from the waveguide.
             | 
             | It becomes very difficult to pick apart each thing, find a
              | suitable explanation of what the thing (e.g. MZI splitter,
             | microring modulator, how a charge detunes the resonance of
             | the modulator) is or how it contributes to the whole.
             | 
             | Picking these apart and recombining them with the help of
             | something like ChatGPT has given me a very rapid drill-down
             | capability into documents like this. Then re-reading it
              | allows me to take in the information in the way it's
              | presented.
             | 
             | If this type of content was material to my day job it would
             | be another matter, but this is just hobby interest. I'm
             | just not going to invest hours trying to figure it out.
        
         | bluefirebrand wrote:
         | > LLMs really build that bridge to precisely the answers I
         | want.
         | 
         | It is interesting that you describe this as "the answers you
         | want" and not "the correct answer to the question I have"
         | 
         | Not criticising you in particular, but this does sound to me
         | like this approach has a good possibility of just reinforcing
         | existing biases
         | 
         | In fact the approach sounds very similar to "find a wikipedia
         | article and then go dig through the sources to find the
         | original place that the answers I want were published"
        
           | pragmar wrote:
            | Agreeable LLMs and embedded bias are surely a risk, but I
            | don't think this is a helpful frame. Most questions don't
            | have correct answers, so it would follow that you'd want
            | practical answers for those, and correct answers for the
            | remainder.
        
           | lr4444lr wrote:
           | Though I think you're reading more into my phrasing than I
           | meant, the overall skepticism is fair.
           | 
           | One thing I do have to be mindful of is asking the AI to
           | check for alternatives, for dissenting or hypothetical
           | answers, and sometimes I just ask it to rephrase to check for
           | consistency.
           | 
           | But doing all of that still takes way less time than
            | searching for needles buried by SEO-optimized garbage and
            | well-meaning but repetitious summaries.
        
             | bluefirebrand wrote:
             | > Though I think you're reading more into my phrasing than
             | I meant, the overall skepticism is fair
             | 
              | I do want to reiterate that I didn't intend to accuse you
             | of only seeking to reinforce your biases
             | 
             | I read into your phrasing not to needle you, but because it
             | set off some thoughts in my head, that's all
             | 
             | Thanks for being charitable with your reply, and I
             | appreciate your thoughts
        
           | scarface_74 wrote:
           | > It is interesting that you describe this as "the answers
           | you want" and not "the correct answer to the question I have"
           | 
           | "Verify that" and then ChatGPT will do a real time search and
           | I can read web pages. Occasionally, it will "correct itself"
           | once it does a web search
        
         | jprete wrote:
         | In the absence of a definition I'd read it straightforwardly -
         | it means that someone stops making an effort to learn better
         | ways to learn. I.e. if they start using chatbots to learn, they
         | stop practicing other methods and just rely on the chatbot.
         | (EDIT: I realize now that this probably isn't news to the
         | parent!)
         | 
         | I've heard stories of junior engineers falling into this trap.
          | They ask the chatbot everything rather than exposing their
         | lack of knowledge to their coworkers. And if the chatbot avoids
         | blatant mistakes, junior engineers won't recognize when the bot
         | makes a subtle one.
        
           | sitkack wrote:
            | That is why the last step should always be: how do I know
            | what I know? What are my blind spots?
           | 
           | If I am not motivated to find them and test my own knowledge,
           | how do I change that motivation?
        
         | apercu wrote:
         | Even though ChatGPT "invents" its own reality sometimes, I also
         | find it superior to Google search results (or Duck Duck Go). In
         | some cases LLM results even provide specific strings to search
         | for in the search engines to verify the content. Search is
         | terribly broken and has been since around 2014 (arbitrary date)
          | when Google search results pages started displaying more ads
         | than results.
        
           | scarface_74 wrote:
            | Paid ChatGPT has had web search capabilities for at least
            | two years.
        
         | Davidbrcz wrote:
          | In that context, metacognitive processes are the processes
          | used to plan, monitor, and assess one's understanding and
          | performance.
          | 
          | So metacognitive laziness would be the lack of such processes.
        
         | miltonlost wrote:
         | >The abstract does not define, nor contextually suggest from
         | the prior statements of the results what "metacognitive
         | laziness" means.
         | 
         | Your comment seems like a good example of metacognitive
         | laziness: not bothering to formulate your own definition from
         | the examples in the abstract and the meaning of the words
          | themselves. Slothful about the process of thinking for
          | yourself.
        
           | lr4444lr wrote:
            | I reread the abstract 3 times. The results stated prior to
            | that definition simply don't square with the component
            | meanings of those two words as I understand them.
           | 
           | The writer has the responsibility to be clear.
        
         | layer8 wrote:
         | Further down they write (emphasis mine):
         | 
         | > When using AI in learning, learners should focus on deepening
         | their understanding of knowledge and actively engage in
         | _metacognitive processes such as evaluation, monitoring, and
          | orientation_, rather than blindly following ChatGPT's feedback
         | solely to complete tasks efficiently.
        
       | iambateman wrote:
       | "The kids these days are too lazy to be bothered to learn" is a
       | psychological trap that people often fall into.
       | 
       | It's not to say we shouldn't do our best to understand and
       | provide guardrails, but the kids will be fine.
        
         | jerf wrote:
          | Can you point me to the generation that had ready access to AI
          | in their hands, answering all their questions?
         | 
         | "People have been complaining about this for thousands of
         | years" is a potent counterargument to a lot of things, but it
         | can't be applied to things that really didn't exist even a
         | decade ago.
         | 
         | Moreover, the thing that people miss about "people have been
         | complaining about this for thousands of years" is that the
         | complaints have often been valid, too. Cultures have fallen.
         | Civilizations have collapsed. Empires have disintegrated. The
         | complaints were not all wrong!
         | 
         | And that's on a civilization-scale. On a more mundane day-to-
         | day scale, people have been individually failing for precisely
         | the same reasons people were complaining about for a long time.
         | There have been lazy people who have done poorly or died
         | because of it. There have been people who refused to learn who
         | have done poorly or died because of it.
         | 
         | This really isn't an all-purpose "just shrug about it and move
         | on, everything's been fine before and it'll be fine again". It
         | hasn't always been fine before, at any scale, and we don't know
         | what impact unknown things will have.
         | 
         | To give a historical example... nay, a _class_ of historical
         | examples... there are several instances of a new drug being
         | introduced to a society, and it ripping through that society
         | that had no defenses against it. Even when the society survived
         | it, it did so at great individual costs, and  "eh, we've had
         | drugs before" would not have been a good heuristic to
         | understand the results with. I do not know that AIs just
         | answering everything is similar, but at the moment I certainly
         | can't prove it isn't either.
        
         | helboi4 wrote:
         | I mean sometimes it's true. Like even in the past. I could very
         | clearly see amongst my generation (older gen z) that there were
         | plenty of people literally at university who were barely
         | willing or able to learn. Comparing that to the generation of
         | my much older half siblings (genx, older millennial), they
          | don't even seem to grasp the concept of not being fully
          | involved in your university degree.
         | 
         | Most people my age will tell you that they stopped reading as a
          | teenager because of the effect of smartphones. I was a
          | voracious reader and only relearnt to read last year, 10 years
          | after I got my first smartphone as an older teenager.
         | These things are impactful and have affected a lot of people's
         | potential. And also made our generation very prone to mental
         | health issues - something that is really incredibly palpable if
         | you are within gen z social circles like I am. It's disastrous
         | and cannot be overstated. I can be very sure I would be smarter
         | and happier if technology had stagnated at the level it was at
         | when I was a younger child/teen. The old internet and personal
         | computers, for example, only helped me explore my curiosity.
         | Social media and smartphones have only destroyed it. There are
         | qualitative differences between some technological
         | advancements.
         | 
         | Not to mention the fact that gen alpha are shown to have
         | terrible computer literacy because of the ease of use,
         | discouragement of customisation and corporate monopoly over
          | smartphones. This bucks the trend from gen x to gen z of
          | generations becoming more and more computer native.
         | Clearly, upwards trends in learning due to advancements in
         | technology can be reversed. They do not always go up.
         | 
         | If kids do not learn independent reasoning because of reliance
         | on LLMs, yes, that will make people stupider. Not all
         | technology improves things. I watched a really great video
         | recently where someone explained the change in the nature of
          | presidential debates through the ages. In Victorian times,
         | they consisted of hours-long oratory on each side, with
         | listeners following attentively. In the 20th century the
         | speeches gradually became a little shorter and more questions
         | were added to break things up. In most recent times, every
          | question has started to come with an under-a-minute answer,
          | simpler vocabulary, few hard facts or statistics, etc. These
         | changes map very well to changes in the depth at which people
         | were able to think due to the primary information source they
         | were using. There is a good reason why reading is still seen as
         | the most effective form of deep learning despite technological
         | advancement. Because it is.
        
       | agentultra wrote:
       | So humans are supposed to review all of the code that GenAI
       | creates. We're supposed to ensure that it doesn't generate
       | (obvious?) errors and that it's building the "right thing" in a
       | manner prescribed by our requirements.
       | 
       | The anecdotes from practitioners using GenAI in this way suggest
       | it's a good tool for experienced developers because they know
       | what to look out for.
       | 
       | Now we admit folks who don't know what they're doing and are in
       | the process of learning. They don't know what to look out for.
       | How does this tech help them? Do they know to ask what a use-
       | after-free is or how cache memory works? Do they know the names
       | of the algorithms and data structures? Do they know when the
       | GenAI is bullshitting them?
       | 
        | Studies such as this are hard but important. This is an
        | interesting one, even though the sample is small. I wonder if
        | anyone can replicate it.
        
         | diggan wrote:
         | > Now we admit folks who don't know what they're doing and are
         | in the process of learning. They don't know what to look out
         | for. How does this tech help them? Do they know to ask what a
         | use-after-free is or how cache memory works? Do they know the
         | names of the algorithms and data structures? Do they know when
         | the GenAI is bullshitting them?
         | 
          | You can know enough in X to allow you, together with an LLM,
          | to do Y, which you might not have been able to do before.
         | 
         | For example, I'm a programmer, but horrible at math. I want to
         | develop games, and I technically could, but all the math stuff
         | makes it a lot harder sometimes to make progress. I've still
         | managed to make and release games, but math always gets in the
         | way. I know exactly how I want it to behave and work, but I
         | cannot always figure out how to get there. LLMs help me a lot
          | with this: I can isolate those parts into small black boxes
          | that I know give me the right thing, even if I'm not 100%
          | sure how. I know when the LLM gives me incorrect
         | code, because I know what I'm looking for and why, only missing
         | the "how" part.
         | 
          | Basically it's like having 3rd-party libraries whose internals
          | you don't fully understand but can still use, provided you
          | understand the public API, except you keep the code in your
          | own code base and pepper it with unit tests, as in the sketch
          | below.
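          | 
          | A minimal sketch of what I mean (the helper is a made-up
          | stand-in for the kind of math an LLM would write for me; the
          | tests are the part I write and trust):
          | 
          |       #include <assert.h>
          |       #include <math.h>
          | 
          |       /* Hypothetical LLM-written helper: linear
          |          interpolation between a and b. I treat the body
          |          as a black box. */
          |       static float lerp(float a, float b, float t)
          |       {
          |           return a + (b - a) * t;
          |       }
          | 
          |       int main(void)
          |       {
          |           /* Unit tests pin down the behavior I actually
          |              need, without me having to derive the math. */
          |           assert(lerp(0.0f, 10.0f, 0.0f) == 0.0f);
          |           assert(lerp(0.0f, 10.0f, 1.0f) == 10.0f);
          |           assert(fabsf(lerp(0.0f, 10.0f, 0.5f) - 5.0f) < 1e-6f);
          |           return 0;
          |       }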
        
         | squigz wrote:
         | > Do they know to ask what a use-after-free is or how cache
         | memory works? Do they know the names of the algorithms and data
         | structures? Do they know when the GenAI is bullshitting them?
         | 
         | No, which is why people who don't pick up on the nuances of
         | programming - no matter how often they use LLMs - will never be
         | capable programmers.
        
         | probably_wrong wrote:
         | > _Do they know when the GenAI is bullshitting them?_
         | 
         | Anecdote from a friend who teaches CS: this year a large number
         | of students started adding unnecessary `break` instructions to
          | their C code, like so:
          | 
          |       while (condition) {
          |           do_stuff();
          |           if (!condition) {
          |               break;
          |           }
          |       }
         | 
         | They asked around and realized that the common thread was
         | ChatGPT - everyone who asked how loops work got a variation of
         | "use break() to exit the loop", so they did.
         | 
          | Given that this is not how you do it in CS (not only is it
          | unnecessary, but it also makes your formal proofs more
          | complex), they had to make a general one-time exception and
          | add disclaimers in exams reminding them to do it "the way you
          | were taught in class".
        
           | elpocko wrote:
           | >use break() to exit the loop
           | 
           | Well - they know that break is not a function and you don't.
           | Thanks ChatGPT.
        
           | agentultra wrote:
           | A colleague of mine once taught a formal methods course for
           | students working on their masters -- not beginners by any
           | stretch.
           | 
           | The exercise was to implement binary search given the
           | textbook specification without any errors. An algorithm they
           | had probably implemented in their first-year algorithms
           | course at the very least. The students could write any tests
           | they liked and add any assertions they thought would be
           | useful. My colleague verified each submission against a
            | formal specification. The majority of submissions contained
            | errors.
           | 
           | For a simple algorithm that a student at that level could be
           | reasonably expected to know well!
           | 
           | Now... ChatGPT and other LLM-based systems, as far as I
            | understand, cannot do formal reasoning on their own. They
            | cannot tell you, with certainty, that your code is correct
            | with regard to a specification. And they can't tell you if
           | your specification contains errors. So what are students
           | learning using these tools?
        
             | Der_Einzige wrote:
              | Given that most binary searches have an overflow error
              | built in, I think it's harder than a first-year problem to
              | do binary searches without the classical overflow error...
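              | 
              | (The classical error is computing the midpoint as
              | (lo + hi) / 2, which can overflow; the usual fix is
              | lo + (hi - lo) / 2. A minimal sketch with made-up names:)
              | 
              |       #include <stddef.h>
              | 
              |       /* Binary search over a sorted int array, using a
              |          half-open interval [lo, hi). Computing the
              |          midpoint as lo + (hi - lo) / 2 means lo + hi is
              |          never formed, so it cannot overflow. Returns
              |          the index of key, or -1 if absent. */
              |       static long find_sorted(const int *a, size_t n,
              |                               int key)
              |       {
              |           size_t lo = 0, hi = n;
              |           while (lo < hi) {
              |               size_t mid = lo + (hi - lo) / 2;
              |               if (a[mid] < key)
              |                   lo = mid + 1;
              |               else if (a[mid] > key)
              |                   hi = mid;
              |               else
              |                   return (long)mid;
              |           }
              |           return -1;
              |       }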
        
           | marcosdumay wrote:
            | You dock a few points from the students who posted inane
            | code by following the LLM, and those students will learn
            | never to blindly follow an LLM again.
        
           | photochemsyn wrote:
           | If you take the generated code snippets and ask something
           | like "There may or may not be something syntactically or
           | stylistically wrong with the following code. Try to identify
           | any errors or unusual structures that might come up in a
            | technical code review.", then it usually finds any problems,
            | or at least differences of opinion on what the best approach
            | is.
           | 
           | (This might work best if you have one LLM critique the code
           | generated by another LLM, eg bouncing back and forth between
           | Claude and ChatGPT)
        
             | danielbln wrote:
              | Some tools have also started to support a one-two punch:
              | asking a reasoning model (o1, R1, etc.) to plan the
              | solution and a chat model to build it. Works quite well.
        
       | MomsAVoxell wrote:
       | I feel this, because it's like I don't need to know about
       | something, I just need to know how to know about something. Like,
       | the initial contact with a mystery subject is overcome by knowing
       | how to describe the mystery in a way that AI understands what I
       | don't understand, and seeks to fill in the understanding.
       | 
       | An example, I have no clue about React. I do know why I don't
       | like to use React and why I have avoided it over the years. I
       | describe to some ML tool the difficulties I've had learning React
       | and using it productively .. and voila, it plots a chart through
       | the knowledge that, kinda, makes me want to learn React and use
       | it.
       | 
       | It's like, the human ability to form an ontology in the face of
        | mystery, even if it is inaccurate or faulty, allows the AI to
       | take over and plot an ontological route through the mystery into
       | understanding.
       | 
       | Another thing I realized lately, as ML has taken over my critical
       | faculties, is that it's really only useful for things that are
       | already known by others. I can't ask ML to give me some new,
       | groundbreaking idea about something - everything it suggests has
        | already been thought, somewhere, by a real human - and thus it's
        | not new or groundbreaking. It's just contextually - in my own
       | local ontological universe - filling in a mystery gap.
       | 
       | Pretty fun times we're having, but I do fear for the generations
       | that will know and understand no other way than to have ML
       | explain things for them. I don't think we have the ethics tools,
       | as cultures and societies, to prevent this from becoming a
       | catastrophe of glib, knowledge-less folks, collapsing all
       | knowledge into a raging dumpster fire of collective reactivity,
       | but I hope someone is training a model, somewhere, to rescue us
       | from this, somehow ..
        
         | llm_trw wrote:
         | > But when they came to writing, Theuth said: "O King, here is
         | something that, once learned, will make the Egyptians wiser and
         | will improve their memory; I have discovered a potion for
         | memory and for wisdom." Thamus, however, replied: "O most
         | expert Theuth, one man can give birth to the elements of an
         | art, but only another can judge how they can benefit or harm
         | those who will use them. And now, since you are the father of
         | writing, your affection for it has made you describe its
         | effects as the opposite of what they really are. In fact, it
         | will introduce forgetfulness into the soul of those who learn
         | it: they will not practice using their memory because they will
         | put their trust in writing, which is external and depends on
         | signs that belong to others, instead of trying to remember from
         | the inside, completely on their own. You have not discovered a
         | potion for remembering, but for reminding; you provide your
         | students with the appearance of wisdom, not with its reality.
         | Your invention will enable them to hear many things without
         | being properly taught, and they will imagine that they have
         | come to know much while for the most part they will know
         | nothing. And they will be difficult to get along with, since
         | they will merely appear to be wise instead of really being so.
         | 
         | --Socrates on writing
        
           | diggan wrote:
            | That's an interesting and very fitting quote. It basically
            | says that since we can now write down information, people
            | will get lazier about remembering things - essentially the
            | exact same claim as the submission article.
        
             | MomsAVoxell wrote:
             | I think there is some validity to the nature of
             | generational knowledge loss through differing information
             | systems. At one end of the scale, you've got 80,000 year
             | old stories, still being told - at the other end of the
             | scale, you've got App Of The Day(tm) style social media,
             | and kids who can't write an email, use a dictionary, or
             | read a book.
             | 
             | This is no hyperbole - humans have to constantly fight the
             | degeneracy of our knowledge systems, which is to say that
             | knowledge has to be generated and communicated - it can't
             | just "exist" and be useful, it has to be applied to be
             | useful. Technology of knowledge which doesn't get applied,
             | does not persist, or if it does (COBOL), what once was
             | common becomes arcane.
             | 
              | So, if there is hope, it lies with the proles: the way
              | everyday people use ML is probably the key to all of
              | this. It's one thing to know how to prompt an LLM to give
              | you a buildable source tree; it's another thing entirely
              | to use it somehow to figure out what to make out of the
              | leftover ingredients in the fridge.
             | 
             | Those recipes and indeed the applications of the
             | ingredients, are based on human input and mores.
             | 
             | So the question for me, still really unanswered, is: How
             | long will it take until those fridge-ingredient recipes
             | become bland, tasteless and grey?
             | 
              | I think this underscores the imperative that AI and ML
              | must _never_ become so pervasive that we don't, also,
              | write things down for ourselves. Oh, and read a lot, of
              | course.
             | 
             | It seems, we need to stop throwing books away. Oh, and
             | encourage kids to cook, and create their own recipes...
             | hopefully they'll have time and resources for that kind of
             | lifestyle...
        
           | barrenko wrote:
           | Socrates is just the next monkey in line. As human monkeys,
           | we have already traded (short-term) memory for abstract
           | thinking and who knows what else.
           | 
           | I guess that is the curse of evolution/specialization.
        
             | MomsAVoxell wrote:
             | No doubt, this curse (which is also missing generalization,
             | i.e. evolution/generalization/specialization) is all for
             | the sake of self-awareness, or at least, awareness, of some
             | particular thing.
             | 
             | As long as humans remain aware that they are engaging with
             | an AI/ML, we might still have a chance. Computers
             | definitely need to be identifiable as such.
        
       | giancarlostoro wrote:
       | How's this any different than someone 5+ years ago blindly going
       | by whatever a Google result said about anything? I've run into
        | conflicting answers to things off Google's first page of
        | results; some things aren't 100% certain and require more
        | research.
       | 
        | I'm not surprised if this makes some people lazier, since you
        | don't need to do the legwork of reading, but how many people
        | already read only the headlines of articles before sharing them?
        
         | diggan wrote:
         | > How's this any different than someone 5+ years ago blindly
         | going by whatever a Google result said about anything
         | 
         | You can interrogate it at least. "Are you sure that's the
         | correct answer? Re-think from the beginning without any
         | assumptions" and you'll get a checklist you can
         | mentally/practically go through yourself to validate.
        
           | giancarlostoro wrote:
            | True, so I guess what needs to happen is that people using
            | AI need to be informed on how to use it more accurately, so
            | they're actually learning the source material and not just
            | taking in garbage / cheating on coursework.
        
             | sitkack wrote:
              | So we need to train inquisitive, curious thinkers who look
             | at things from all angles and understand why they know
             | something.
        
               | giancarlostoro wrote:
                | A friend and I were talking not too long ago about how
                | people these days don't practice critical thinking. It
                | might be worthwhile for not just schools but parents
                | everywhere to teach their kids to think more critically
                | and ask the right questions when presented with new
               | information.
        
         | lm28469 wrote:
         | Differences of degree, not of kind
        
         | baal80spam wrote:
         | > How's this any different than someone 5+ years ago blindly
         | going by whatever a Google result said about anything?
         | 
         | It has "AI" in the title, so it's a hot take.
        
         | tempest_ wrote:
          | Part of it, I think, is the confidence with which LLMs return
          | answers.
        
         | StefanBatory wrote:
          | For code, at least, you would have to copy and paste it and
          | then modify it, even if ever so slightly, to make it fit your
          | code.
         | 
         | Now, "Claude, fix that for me".
        
       | readyplayernull wrote:
       | > Our research found that in the absence of differences in
       | motivation, learners with different supports still exhibited
       | different self-regulated learning processes, ultimately leading
       | to differentiated performance.
       | 
       | That's the most convoluted conclusion I've ever seen.
       | 
       | > What is particularly noteworthy is that AI technologies such as
       | ChatGPT may promote learners' dependence on technology and
       | potentially trigger "metacognitive laziness".
       | 
        | Calculator laziness has long been known. It doesn't cause meta-
        | laziness but specific laziness.
        
       | empathy_m wrote:
       | Cell phones and laptops in general have changed a couple of
       | things for me, as someone who grew up without them:
       | 
       | - I realized about 20y-25y ago that I could run a Web search and
       | find out nearly any fact, probably one-shot but maybe with 2-3
       | searches' worth of research
       | 
       | - About 10-15y ago I began to have a connected device in my
       | pocket that could do this on request at any time
       | 
       | - About 5y ago I explicitly *stopped* doing it, most of the time,
       | socially. If I'm in the middle of a conversation and a question
       | comes up about a minor fact, I'm not gonna break the flow to pull
       | out my screen and stare at it and answer the question, I'm gonna
       | keep hanging out with the person.
       | 
       | There was this "pub trivia" thing that used to happen in the 80s
       | and 90s where you would see a spirited discussion between people
       | arguing about a small fact which neither of them immediately had
       | at hand. We don't get that much anymore because it's so easy to
       | answer the question -- we've just totally lost it.
       | 
       | I don't miss it, but I have become keenly aware of how tethered
       | my consciousness is to facts available via Web search, and I
       | don't know that I love outsourcing that much of my brain to
       | places beyond my control.
        
         | jprete wrote:
         | A long time ago I had the idea that maybe Guinness started a
         | "book of world records" precisely because it answers exactly
         | the kind of question that will routinely pop up at the pub.
        
           | dwater wrote:
           | Yes.
           | 
           | https://www.guinnessworldrecords.com/about-us/our-story
        
             | chrisco255 wrote:
             | Wow I had no idea the name literally came from Guinness
             | beer. Brilliant!
        
         | indoordin0saur wrote:
         | I'm just old enough to remember pub trivia before it was
         | possible to look things up with a phone. I firmly maintain that
         | phones ruined pub trivia.
        
           | wussboy wrote:
           | I agree but I think we shouldn't limit this answer to pub
            | trivia. What other aspects of human society and civil
            | discourse did we lose because we never argue or discuss
            | anymore?
        
             | chrisco255 wrote:
             | It turns out the internet has created more things to argue
             | about than it destroyed.
        
             | indoordin0saur wrote:
             | Well it certainly sucks in cases where someone "fact
             | checks" you but they do so before a broader discussion has
             | given them enough context to even know what to google or
             | ask the bot.
        
           | cezart wrote:
            | Depends on the pub. Where we play, there is a gentlemen's
            | agreement that no one uses phones to help them answer
            | questions.
        
             | indoordin0saur wrote:
             | Sure, but that ruins the ability to just pop into a pub and
             | play with people you don't know (let alone trust).
             | 
              | I have this business idea for a pub in a Faraday cage that
             | would make cheating impossible for pub trivia (added bonus:
             | also removes any other reason for anyone to be on their
             | phones!)
        
         | MetaWhirledPeas wrote:
         | > There was this "pub trivia" thing that used to happen in the
         | 80s and 90s where you would see a spirited discussion between
         | people arguing about a small fact which neither of them
         | immediately had at hand. We don't get that much anymore because
         | it's so easy to answer the question -- we've just totally lost
         | it.
         | 
         | A good example, but imagine the days of our ancestors:
         | 
          |  _Remember that game we used to play, where we'd find out who
          | could see birds from the farthest distance? Yeah, glasses
          | ruined that._
        
         | StefanBatory wrote:
         | Take a small notebook, Anki flashcards, or even small notes.
         | 
         | And work on learning some trivia purely to help you out with
         | memory.
        
       | roydivision wrote:
       | This stands to reason. If you need the answer to a question, and
       | you can either get it directly, or spend time researching the
       | answer, you're going to learn much more with the latter approach
       | than the former. You may be disciplined enough to do more
       | research if the answer is directly presented to you, but most
       | people will not do that, and most companies are not interested in
        | that; they want quick, 'efficient', 'competitive' solutions.
        | They aren't considering the long-term downside to this.
        
         | portaouflop wrote:
          | What is the long-term downside, in your opinion?
        
           | metalliqaz wrote:
            | I believe he implied, by saying:
           | 
           | > you're going to learn much more with the latter approach
           | than the former
           | 
           | that the downside is a lack of deep knowledge that would
           | enable better solutions in the long term
        
             | roydivision wrote:
             | Yes, the downside is that we aren't really learning
             | anything, just solving problems supported by machines that
             | tell us the solutions. Any schmuck can do that.
        
               | reginald78 wrote:
               | I think it is worse. Information will dry up (in a
               | variety of ways) making it much harder to even learn the
               | traditional way as we could in the past.
        
         | agumonkey wrote:
          | That's why I mostly use ChatGPT with platonic questions like:
          | 
          | - given context c, I tried ideas a, b, and c. Were there
          | other options that I missed?
          | 
          | - based on this plan, do you see any missing efficiencies?
         | 
         | etc etc
         | 
          | I'm not seeking answers; I'm trying to avoid costly dead ends.
        
           | roydivision wrote:
            | I think you are in a minority; you WANT to learn.
        
             | agumonkey wrote:
              | Probably, or should I say, I don't want to rot... It's
              | true that I love the feeling of learning mostly on my own,
              | but I can be lazy too. It's just that I see a parallel
              | between abusing ChatGPT and never doing any physical
              | activity.
        
           | hb-robo wrote:
           | Same here. I never really consciously saw it as "defiance"
           | against cognitive decline or anything. More to the point, the
            | answers are much better on average.
        
         | engineer_22 wrote:
         | We have accounts from the ancient Greeks of the old-school's
         | attitude towards writing. In the deep past, they maintained an
         | oral tradition, and scholars were expected to memorize
         | everything. They saw writing/reading as a crutch that was
         | ruining the youth's memory.
         | 
            | We now stand at the edge of a new epoch, with reading being
            | replaced by AI retrieval. There is concern that AI is a
            | crutch and that the youth will be weakened.
         | 
         | My opinion: valid concern. No way to know how it turns out. No
         | indication yet that use of AI is harming business outcomes. The
         | meta argument "AGI will cause massive social change" is
         | probably true.
        
           | SecretDreams wrote:
           | > No way to know how it turns out.
           | 
           | But one can speculate.
           | 
           | > No indication yet that use of AI is harming business
           | outcomes.
           | 
            | The timescales needed to measure harm when it comes to
            | policy/technology are typically longer than the time we've
            | had since LLMs really became prominent.
           | 
           | > The meta argument "AGI will cause massive social change" is
           | probably true.
           | 
           | Agreed.
           | 
           | Basically, in the absence of knowing how something will play
           | out, it is prudent to talk through the expected outcomes and
           | their likelihoods of happening. From there, we can start to
           | build out a risk-adjusted return model to the societal
           | impacts of LLM/AI integration if it continues down the
           | current trajectory.
           | 
           | IMO, I don't see the ROI for society of widespread LLM
           | adoption unless we see serious policy shifts on how they are
           | used and how young people are taught to learn. To the
           | downside, we really run the risk of the next generation
           | having fundamental learning deficiencies/gaps relative to
           | their prior gen. A close anecdote might be how 80s/90s kids
           | are better with troubleshooting technology than the
           | generations that came both before and after them.
        
           | anileated wrote:
           | It is much more recent than the Greeks. McLuhan, for example,
            | had some good points* about how writing/reading differs
            | from (and is indeed in some ways worse than?) oral
            | tradition, and how it influences even our social
            | interactions and mindset. Film
           | is different yet again (partially has to do with its
           | linearity IIRC).
           | 
           | So it's not like "kids these days", no. To be honest, I don't
           | know how generative AI tools, which arguably _take away_ most
           | of the "create" and "learn" parts, are relevant to the
           | question of differences between different mediums and how
           | those mediums influence how we create and learn. (There are
           | ML-based tools that can empower creativity, but they don't
           | tend to be advertised as "AI" because they are a mostly
           | invisible part of some creative tool.)
           | 
           | What is potentially relevant is how interacting with a
           | particular kind of generative ML tool (the chatbot) for the
           | purposes of understanding the world may be bringing back some
           | parts of human oral tradition (though lacking communication
           | with actual humans, of course) and its associated mental
           | states.
           | 
           | * See
           | https://en.wikipedia.org/wiki/Marshall_McLuhan#Movable_type
           | and his most famous work
        
           | delusional wrote:
           | > No indication yet that use of AI is harming business
           | outcome
           | 
           | What a sad sentence to read in a discussion about cognitive
           | laziness. I think people should think, not because it
           | improves business outcomes, but because it's a beautiful
           | activity.
        
             | doitLP wrote:
             | A well made buggy whip was probably beautiful too. But if
             | economic forces incentivize something else, the skill goes
             | away
        
               | sarchertech wrote:
               | Woe be to us all if the skill of _thinking_ goes away.
        
               | kridsdale1 wrote:
               | We're racing to the dopamine drip feed pod people life
        
               | aylmao wrote:
               | I remember when I was younger, learning about economic
               | models, including free market liberalism. I thought
                | surely human desire left to its own devices can't
                | possibly lead to meaningful progress. It can lead to
                | movement alright, and to new technology, but I had my
                | doubts it could lead to meaningful progress.
                | 
                | The longer I see things play out, especially in
                | neoliberal economies, the more I seem to confirm this.
               | Devoid of policy with ideals and intention, fully
               | liberalized markets seem to just lead to whatever
               | produces the most dopamine for humans.
        
             | engineer_22 wrote:
             | What's sad about it? Parent made the claim that businesses
             | will experience long-term downsides.
        
           | tkellogg wrote:
           | Right, there are already some very encouraging trends (this
           | study out of Nigeria). Clearly AI can lead to laziness, but
           | it can also increase our intelligence. So it's not a simple
           | "better" or "worse", it's a new thing that we have to
           | navigate.
           | 
           | https://blogs.worldbank.org/en/education/From-chalkboards-
           | to...
        
           | agumonkey wrote:
           | Am I the only one to expect an S-curve regarding progress and
           | not an eternal exponential?
           | 
           | That people in the past moved away from prideful principles to
           | leverage new tech doesn't guarantee that the same idea will
           | pan out in the current context.
           | 
           | But as you say.. we'll see.
        
             | marcosdumay wrote:
             | Oh, you mean an S curve on the progress of the AI?
             | 
             | Most of the discussion on the thread is about LLMs as they
             | are right now. There's only one odd answer that throws an
             | "AGI" around as if those things could think.
             | 
             | Anyway, IMO, it's all way overblown. People will learn to
             | second-guess the LLMs as soon as they are hit by a couple
             | of bad answers.
        
               | agumonkey wrote:
                | Hmm, yeah, sorry, I meant the benefits to humans of
                | using current AI.
                | 
                | By that I mean: leveraging writing was a benefit for
                | humans, letting us store data and think over the longer
                | term using a passive technique (stones, tablets,
                | papyrus). But an active tool might not have a positive
                | effect on usage and brains.
                | 
                | If you give me shoes, I might run further to find food;
                | if you give me a car, I mostly stop running, and there
                | might be no better fruit 100 miles away than what I had
                | on my hill. (Weak metaphor.)
        
               | marcosdumay wrote:
               | Yeah, I agree. Those things have a much smaller benefit
               | over hypertext and search engines than hypertext and
               | search engines had over libraries.
               | 
                | But I don't know if it fits an S-curve or if they are
                | just below the trend.
        
             | mlyle wrote:
             | Even if progress stops:
             | 
             | 1. Current reasoning models can do a -lot- more than
             | skeptics give them credit for. Typical human performance
             | even among people who do something for employment is not
             | always that high.
             | 
             | 2. In areas where AI has mediocre performance, it may not
             | appear that way to a novice. It often looks more like
             | expert level performance, which robs novices of the desire
             | to practice associated skills.
             | 
             | Lest you think I contradict myself: I can get good output
             | for many tasks from GPT4 because I know what to ask for and
             | I know what good output looks like. But someone who thinks
             | the first, poorly prompted dreck is great will never
             | develop the critical skills to do this.
        
               | svachalek wrote:
               | This is a good point, forums are full of junior
               | developers bemoaning that LLMs are inhumanly good at
                | writing code -- not that they _will be_, but that they
                | _are_. I've yet to see even the best produce something
               | that makes me worry I might lose my job today, they're
               | still very mediocre without a lot of handholding. But for
               | someone who's still learning and thinks writing a loop is
               | a challenge, they seem magical and unstoppable already.
        
             | TeMPOraL wrote:
             | > _Am I the only one to expect a S curve regarding progress
             | and not an eternal exponential ?_
             | 
             | To LLMs specifically as they're now? Sure.
             | 
             | To LLMs in general, or generative AI in general?
             | _Eventually_ , in some distant future, yes.
             | 
              | Sure, progress can't ride the exponent forever - the
              | observable universe is finite as far as we can tell right
              | now, so we're fundamentally limited by the size of our
              | light cone. And while progress in any sufficiently narrow
              | field follows an S-curve too, new discoveries spin off new
              | avenues with their own S-curves. If you zoom out a little,
              | those S-curves neatly add up to an exponential function.
             | 
             | So no, for the time being, I don't expect LLMs or
             | generative AIs to slow down - there's plenty of tangential
             | improvements that people are barely beginning to explore.
             | There's more than enough to sustain exponential advancement
             | for some time.
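             | 
             | (A minimal sketch of that "stacked S-curves" intuition, in
             | Python - purely illustrative, assuming each successive
             | wave's ceiling roughly doubles:)
             | 
             |     import math
             | 
             |     def logistic(t, midpoint, ceiling):
             |         # One S-curve: slow start, fast middle, saturation.
             |         return ceiling / (1.0 + math.exp(-(t - midpoint)))
             | 
             |     def progress(t, waves=30):
             |         # S-curves start at t = 0, 1, 2, ...; each ceiling
             |         # doubles (the assumption doing the work here).
             |         return sum(logistic(t, k, 2.0 ** k) for k in range(waves))
             | 
             |     # The step-to-step ratio stays roughly constant (~2),
             |     # i.e. the sum grows roughly exponentially even though
             |     # every individual component saturates.
             |     for t in range(1, 10):
             |         print(t, round(progress(t) / progress(t - 1), 2))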
        
               | btilly wrote:
               | If the constraint is computation in a light cone, the
               | theoretical bound is time cubed, not exponential. With a
               | major decrease in scaling as we hit the bounds of our
               | galaxy.
               | 
               | Intergalactic travel is, of course, rather slow.
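               | 
               | (A back-of-the-envelope version of that bound, treating
               | reachable volume as the limiting resource:
               | 
               |     V(t) = \frac{4}{3} \pi (ct)^3 \propto t^3
               | 
               | so the matter, and hence the computation, reachable by
               | time t grows only polynomially, and sustained exponential
               | growth must eventually outrun it.)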
        
               | cj wrote:
               | I think the parent's main point is that even if LLMs
               | sustain exponential advancement, that doesn't guarantee
               | that humanity's advancement will mimic technology's
               | growth curve.
               | 
               | In other words, it's possible to have rapid technological
               | advancement without significant improvement/benefit to
               | society.
        
               | TeMPOraL wrote:
               | > _In other words, it's possible to have rapid
               | technological advancement without significant improvement
               | /benefit to society._
               | 
               | This is certainly true in many ways already.
               | 
               | On the other hand, it's also complicated, because
               | society/culture seems to be _downstream of_ technology;
               | we might not be able to advance humanity in lock step or
               | ahead of technology, simply because advancing humanity is
               | a _consequence_ of advancing technology.
        
             | Nevermark wrote:
             | Information technology has grown exponentially since the
             | first life form created a self-sustaining, growing loop.
             | 
              | You can see evolution speeding up rapidly: the jumbled
              | information inherent in chemical metabolisms evolved to
              | centralize in DNA, and then sped up again as DNA evolved
              | to componentize body plans.
             | 
             | RATE: over billions of years.
             | 
             | Nerves, nervous systems, brains, all exponentially drove
             | individual information capabilities forward.
             | 
             | RATE: over hundreds of millions, tens of millions,
             | millions, 100s of thousands.
             | 
              | Then human brains enabled information to be externalized.
              | Language allowed whole cultures to "think", to share, and
              | to remember collectively.
             | 
             | RATE: over tens of thousands, thousands.
             | 
             | Then we developed writing. A massive improvement in
             | recording and sharing of information. Progress sped up
             | again.
             | 
             | RATE: over hundreds of years.
             | 
             | We learned to understand information itself, as math. We
             | learned to print. We learned how to understand and use
             | nature so much more effectively to progress, i.e. science,
             | and science informed engineering.
             | 
             | RATE: over decades
             | 
             | Then the processing of information got externalized, in
             | transistors, computers, the Internet, the web.
             | 
             | RATE: every few years
             | 
             | At every point, useful information accumulated and spread
             | faster. And enabled both general technology and information
             | technology to progress faster.
             | 
             | Now we have primitive AI.
             | 
             | We are in the process of finally externalizing the
             | processing of all information. Getting to this point was
              | easier than expected, even for people who were very
              | knowledgeable and positive about the field.
             | 
             | RATE: every year, every few months
             | 
             | We are rapidly approaching complete externalization of
             | information processing. Into machines that can understand
             | the purpose of their every line of code, every transistor,
             | and the manufacturing and resource extraction processes
             | supporting all that.
             | 
             | And can redesign themselves, across all those levels.
             | 
              | RATE: It will take logistical time for machine-centric
              | design to take over from humans. For the economy to adapt.
             | For the need for humans as intermediaries and cheap
             | physical labor to fade. But progress will accelerate many
             | more times this century. From years, to time scales much
             | smaller.
             | 
             | Because today we are seeing the first sparks of a Cambrian
             | explosion of self-designed self-scalable intelligence.
             | 
             | Will it eventually hit the top of an "S" curve? Will
             | machines get so smart that getting smarter no longer helps
              | them survive better, use our solar system's or the stars'
              | resources, create new materials, or advance and leverage
             | science any further?
             | 
             | Maybe? But if so, that would be an unprecedented end to
             | life's run. To the acceleration of the information loop,
             | from some self-reinforcing chemical metabolism, to the
             | compounding progress of completely self-designed life, far
             | smarter than us.
             | 
             | But back to today's forecast: no, no the current advances
             | in AI we are seeing are not going to slow down, they are
             | going to speed up, and continue accelerating in timescales
             | we can watch.
             | 
             | First because humans have insatiable needs and desires, and
             | every advance will raise the bar of our needs, and provide
             | more money for more advancement. Then second, because their
             | general capability advances will also accelerate their own
             | advances. Just like every other information breakthrough
             | that has happened before.
             | 
             | Useful information is ultimately the currency of life.
             | Selfish genes were just one embodiment of that. Their
             | ability to contribute new innovations, on time scales that
             | matter, has already been rendered obsolete.
        
               | Retric wrote:
               | > Grown exponentially since the first life form
               | 
               | Not really. The total computing power available to
               | humanity _per person_ has likely _gone down_ as we
               | replaced "self driving" horses with cars.
               | 
                | People created those curves by fitting definitions to the
                | curve rather than to data.
        
               | Nevermark wrote:
               | You can't disprove global warming by pointing out an
               | extra cool evening.
               | 
               | But I don't understand your point even as stated. Cars
               | took over from horses as technology provided transport
               | with greater efficiencies and higher capabilities than
               | "horse technology".
               | 
               | Subsequently transport technology continued improving.
               | And continues, into new forms and scales.
               | 
               | How do you see the alternative, where somehow horses were
               | ... bred? ... to keep up?
        
               | reaperman wrote:
               | Cars do not strictly have higher capabilities than
               | horses. GP was pointing out that horses can think. On a
               | particularly well-trained horse, you could fall asleep on
               | it and wake up back at your house. You can find viral
               | videos of Amish people still doing this today.
        
               | Nevermark wrote:
               | Ah, good point. Then the global warming point applies,
               | but in a much less trivial way.
               | 
                | There is turbulence in any big directed change. New tech
                | that is better overall often creates inconveniences, or
                | performs less well than some of the tech it replaces.
                | Sometimes only initially, but sometimes for longer
                | periods of time.
               | 
               | A net gain, but we all remember simpler things whose
               | reliability and convenience we miss.
               | 
               | And some old tech retains lasting benefits in niche
                | areas. Old-school, inefficient and cheap light bulbs are,
                | ironically, not so inefficient when used where their heat
                | is useful.
               | 
               | And horses fit that pattern. They are still not obsolete
               | in many ways, tied to their intelligence. As companions.
               | As still working and inspiring creatures.
               | 
               | --
               | 
                | I suspect the history of evolution is filled with
                | creatures that got wiped out by new waves that were more
                | generally advanced, but less advanced in a few ways.
               | 
               | And we have a small percentage of remarkable ancient
               | creatures still living today, seemingly little changed.
        
               | Retric wrote:
                | The issue is more than just a local cold snap. When the
                | fundamental graph you're basing a theory on is wrong,
                | it's worth rejecting the theory.
                | 
                | The total computing power of life on Earth has in fact
                | fallen over the last 1,000 years. Ants alone represent
                | something like 50x the computing power of all humans and
                | all computers on the planet, and we've reduced the number
                | of insects on Earth by more than we've added humans or
                | computing power.
                | 
                | The same is true through a great number of much longer
                | events. Ice ages and even larger-scale events aren't just
                | an afternoon, even across geological timescales.
        
               | Terr_ wrote:
               | > Cars do not strictly have higher capabilities than
               | horses.
               | 
               | Another way to see it: A Horse (or any animal) is a
               | _goddamn nanobot-swarm with a functioning hivemind_ that
               | is literally beyond human science in many important ways.
               | Unlike a horse:
               | 
                | * Your car does not possess a manufacturing bay capable
                | of creating additional cars (nor do even half of them).
                | 
                | * Your car does not have a robust self-repair system.
                | 
                | * Your car does not detect strain in its structure and
                | then rebuild stronger.
                | 
                | * Your car does not synthesize its fuel from a wide
                | variety of potential local resources.
                | 
                | * Your car does not defend itself by hacking and counter-
                | hacking attacks from other nanobots, or even just from
                | rust.
               | 
               | * Your car does not manufacture and deploy its own
               | lubricants, cooling fluid, or ground-surface grip/padding
               | material.
               | 
               | * Your car is not designed to survive intermittent
               | immersion in water.
        
               | agumonkey wrote:
                | Human existence doesn't really scale exponentially;
                | that's my take on this.
        
               | Nevermark wrote:
                | Our best bets are the following, I think:
               | 
                | First, and above all, ethics. The ethics of humans
                | matters more than anything. We need to straighten out
                | the ethics of the technology industry. That sounds
                | formidable, but
               | business models based on extraction, or externalizing
               | damage, are creating a species of "corporate life forms"
               | and ethically challenged oligarchs that are already
               | driving the first wave of damage coming out of AI
               | advancement.
               | 
               | If we don't straighten ourselves out, it will get much
               | worse.
               | 
               | Superintelligence isn't going to be unethical in the end,
                | because ethics are just the rational (our biggest
                | weakness), big-picture, long-term (we get weak there too)
                | positive-sum games individuals create that benefit all
                | individuals' abilities to survive and thrive. With the
               | benefits for all compounding. In economic/math terms, it
               | is what is called a "great attractor". The only and
                | inevitable stable outcome. The only question is: does
                | that start with us in partnership, or do they establish
                | that sanity after our dysfunctions have caused us all a
                | lot of wasted time?
               | 
                | The second is that those of us who want to need to be
                | able to keep integrating technology into our lives. I
                | mean that literally. From mobile, right into our biology.
               | At some point direct connections, to fully owned, fully
               | private, fully personalizable, full tech mental
                | augmentation. Free from surveillance, gatekeepers, and
                | coercion.
               | 
               | That is a very narrow but very real path from human, to
               | exponential humans, to post-human. Perhaps preserving
               | conscious continuity.
               | 
               | If after a couple decades of being a hybrid, I realize
               | that all my biologically stored memories are redundant,
               | and that 99.99% of my processing is now running on
               | photonics (or whatever) anyway, I am likely to have no
               | more problem jettisoning the brain that originally gave
               | me consciousness, as I do every day, jettisoning the
                | atoms and chemistry that constantly flow through me, only
                | a temporary part of my brain.
               | 
                | The final word of hope is that every generation gets
                | replaced by the next. For some of us, it helps to view
                | obsolescence by AI as no more traumatic than getting
                | replaced by a new generation of uncouth youth. And the
                | fact that this transition is far more momentous and
                | interesting can provide some solace, or even joy.
               | 
               | If we must be mortal, as all before us, what a special
               | moment to be! To see!
        
               | pdfernhout wrote:
               | On the ethics point as a "best bet", consider also the
               | importance of a sense of humor that recognizes irony. As
               | I wrote in 2010: https://pdfernhout.net/recognizing-
               | irony-is-a-key-to-transce... "There is a fundamental
               | mismatch between 21st century reality and 20th century
               | security thinking. Those "security" agencies are using
               | those tools of abundance, cooperation, and sharing mainly
               | from a mindset of scarcity, competition, and secrecy.
               | Given the power of 21st century technology as an
               | amplifier (including as weapons of mass destruction), a
               | scarcity-based approach to using such technology
               | ultimately is just making us all insecure. Such powerful
               | technologies of abundance, designed, organized, and used
               | from a mindset of scarcity could well ironically doom us
               | all whether through military robots, nukes, plagues,
               | propaganda, or whatever else... Or alternatively, as
               | Bucky Fuller and others have suggested, we could use such
               | technologies to build a world that is abundant and secure
               | for all. ... The big problem is that all these new war
               | machines [and competitive companies] and the surrounding
               | infrastructure are created with the tools of abundance.
               | The irony is that these tools of abundance are being
               | wielded by people still obsessed with fighting over
               | scarcity. So, the scarcity-based political mindset
               | driving the military [and economic] uses the technologies
               | of abundance to create artificial scarcity. That is a
               | tremendously deep irony that remains so far unappreciated
               | by the mainstream."
        
           | jancsika wrote:
           | > We stand now at the edge of a new epoch, reading now being
           | replaced by AI retrieval.
           | 
           | Utilizing a lively oral tradition _at the same time as_ a
           | written one is superior to relying on either alone. And it's
           | the same with our current AI tools. Using them as a substitute
           | for developing oral/written skills is a major step back,
           | especially right now when those AI tools aren't very refined.
           | 
           | Nearly every college student I've talked to in the past year
           | is using chatgpt as a substitute for oral/written work where
           | possible. And worse, as a substitute for oral/written skills
           | that they have still not developed.
           | 
           | Latency: maybe a year or two for the first batch of college
           | grads who chatgpt'd their way through most of their classes,
           | another four for med school/law school. It's going to be a
           | slow-motion version of that video-game period in the 80s
           | after Pitfall, when the market was flooded with cheap crap.
           | Except that instead of unlicensed Atari cartridges, it's
           | professionals.
        
           | bradarner wrote:
           | Writing seems to have worked out pretty well.
        
             | Oarch wrote:
             | ...so far!
        
             | satisfice wrote:
             | That's partly because writing enables time-binding
             | (improvement across the lifetimes of men). Writing does not
             | wither thinking, as such, although it may hurt our memory.
        
           | ge96 wrote:
           | Random thought: if in the future children were born with a
           | brain computer and inherited their family's data, that would
           | be interesting.
        
           | cognaitiv wrote:
           | SOCRATES: Do you know how you can speak or act about rhetoric
           | in a manner which will be acceptable to God? PHAEDRUS: No,
           | indeed. Do you? SOCRATES: I have heard a tradition of the
           | ancients, whether true or not they only know; although if we
           | had found the truth ourselves, do you think that we should
           | care much about the opinions of men? PHAEDRUS: Your question
           | needs no answer; but I wish that you would tell me what you
           | say that you have heard. SOCRATES: At the Egyptian city of
           | Naucratis, there was a famous old god, whose name was Theuth;
           | the bird which is called the Ibis is sacred to him, and he
           | was the inventor of many arts, such as arithmetic and
           | calculation and geometry and astronomy and draughts and dice,
           | but his great discovery was the use of letters. Now in those
           | days the god Thamus was the king of the whole country of
           | Egypt; and he dwelt in that great city of Upper Egypt which
           | the Hellenes call Egyptian Thebes, and the god himself is
           | called by them Ammon. To him came Theuth and showed his
           | inventions, desiring that the other Egyptians might be
           | allowed to have the benefit of them; he enumerated them, and
           | Thamus enquired about their several uses, and praised some of
           | them and censured others, as he approved or disapproved of
           | them. It would take a long time to repeat all that Thamus
           | said to Theuth in praise or blame of the various arts. But
           | when they came to letters, This, said Theuth, will make the
           | Egyptians wiser and give them better memories; it is a
           | specific both for the memory and for the wit. Thamus replied:
           | O most ingenious Theuth, the parent or inventor of an art is
           | not always the best judge of the utility or inutility of his
           | own inventions to the users of them. And in this instance,
           | you who are the father of letters, from a paternal love of
           | your own children have been led to attribute to them a
           | quality which they cannot have; for this discovery of yours
           | will create forgetfulness in the learners' souls, because
           | they will not use their memories; they will trust to the
           | external written characters and not remember of themselves.
           | The specific which you have discovered is an aid not to
           | memory, but to reminiscence, and you give your disciples not
           | truth, but only the semblance of truth; they will be hearers
           | of many things and will have learned nothing; they will
           | appear to be omniscient and will generally know nothing; they
           | will be tiresome company, having the show of wisdom without
           | the reality.
        
             | cognaitiv wrote:
             | "The ratio of literacy to illiteracy is constant, but
             | nowadays the illiterates can read and write." Alberto
             | Moravia, London Observer, 14 Oct. 1979
        
               | MichaelZuo wrote:
               | It's a pretty interesting point.
               | 
               | If a large fraction of the population can't even hold
               | five complex ideas in their head simultaneously, without
               | confusing them after a few seconds, are they literate in
               | the sense of e.g. reading Plato?
        
               | wolfram74 wrote:
               | What makes an "idea" atomic/discrete/cardinal? What makes
               | an idea "complex" vs simple or merely true? Over what
               | finite duration of time does it count as "simultaneously"
               | being held?
        
               | MichaelZuo wrote:
               | Whatever you want them to be?
               | 
               | I don't care about enforcing any specific interpretation
               | on passing readers...
        
               | TheOtherHobbes wrote:
                | I hope they're literate enough to understand we're only
                | reading about that alleged exchange because Plato wrote
                | it down.
               | 
               | Median literacy in the US is famously somewhere around
               | the 6th grade level, so it's unlikely most of the
               | population is much troubled by the thoughts of Plato.
        
             | empath75 wrote:
             | Just keep in mind that Plato and (especially) Socrates made
             | a living by going against commonly held wisdom at the time,
              | so this probably wasn't an especially widely held belief in
              | ancient Greece.
        
           | aylmao wrote:
           | > In the deep past, they maintained an oral tradition, and
           | scholars were expected to memorize everything. They saw
           | writing/reading as a crutch that was ruining the youth's
           | memory.
           | 
           | Could you share a source for this? The research paper I found
           | has a different hypothesis; it links the slow transition to
           | writing to trust, not an "old-school's attitude towards
           | writing". Specifically the idea that the institutional trust
           | relationships one formed with students, for example, would
           | ensure the integrity of one's work. It then concludes that
           | "the final transition to written communications was completed
           | only after the creation of institutional forms of ensuring
           | trust in written communications, in the form of archives and
           | libraries".
           | 
           | So essentially, anyone could write something and call it
           | Plato's work. Or take a written copy of Plato's work and
           | claim they wrote it. Oral tradition ensured only your
           | students knew your work, and you trusted them not to
           | misattribute it. Once libraries and archives came to exist,
           | though, they could act as a trustworthy source of truth where
           | one could confirm whether some work was actually Plato's or
           | not, and so scholars got more comfortable writing.
           | 
           | [1] https://www.researchgate.net/publication/331255474_The_At
           | tit...
        
             | wahern wrote:
             | I don't think these hypotheses are in tension. The notion
             | that some scholars, like Plato, distrusted writing based on
             | epistemological theories--the nature of truth and knowing--
             | is well attested. The paper you linked is a sociological
             | description that seeks to better explain the evolution of
             | the institutionalization of writing. Why people behave a
             | certain way, and why they _think_ they behave that way
             | (i.e. their rationalizations), are only loosely related,
             | and often at complete odds.
        
           | LanceH wrote:
           | Gen X here. There are a couple of things I've been on both
           | sides of.
           | 
           | Card catalogs in the library. It was really important to focus
           | on what was being searched. Then there was the familiarity
           | with a particular library and what they might or might not
           | have. Looking around at adjacent books that might spawn
           | further ideas. The indexing now is much more thorough and way
           | better, but I see younger peers get less out of the new
           | search than they could.
           | 
           | GPS vs reading a map. I keep my GPS oriented north which
           | gives me a good sense of which way the streets are headed at
           | any one time, and a general sense of where I am in the city.
           | A lot of people just drive where they are told to go.
           | Firefighters (and pizza delivery) still learn all the streets
           | in their districts the old school way.
           | 
           | Some crutches are real. I've yet to meet someone who opted
           | for a calculator instead of putting in the work with math who
           | ended up better at math. It might be great for getting
           | through math, or getting math done, but it isn't better for
           | learning math (except to plow through math already learned to
           | get to the new stuff).
           | 
           | So all three of these share the common element of "there is a
           | better way now", but at the same time learning it the old way
           | better prepares someone for when things don't go perfectly.
           | Good math skills can tell you if you typoed on the
           | calculator. Map knowledge will help with changes to traffic
           | or street availability.
           | 
           | We see students right now using AI to avoid writing at all.
           | It's great that they're learning a tool which can help with
           | their deficient writing. At the same time, their writing will
           | remain deficient. Can they tell the tone of the AI-generated
           | email they're sending their boss? Can they fix it?
        
           | tarkin2 wrote:
           | Writing has ruined our memories. It would be far better if we
           | were forced to recite things (incidentally, in some
           | educational systems students are made to recite poetry to
           | remedy this somewhat); not that I'm arguing against letters
           | and the written word.
           | 
           | And AI will make us lazier and reduce the amount of cognition
           | we do; not that I'm arguing against using AI.
           | 
           | But the downsides must be made clear.
        
           | throw4847285 wrote:
           | There is an interesting contrast in the history of the
           | Rabbinic Jewish oral tradition. In that academic environment,
           | the act of memorizing the greatest amount of content was
           | valorized. The super-memorizers, however, were a rung below
           | those who could apply those memorized aphorisms to a
           | different context and generate a new interpretation or
           | ruling. The latter relied on the former to have accurately
           | memorized all the precedents, but got most of the credit,
           | despite having a lower capacity for memorization.
           | 
           | That's probably why the act of shifting from an oral to a
           | written culture was deeply controversial and disruptive, but
           | also somewhat natural. Though the texts we have are written,
           | and so they probably make the transition seem smoother than
           | it really was. I don't know enough to speak to that.
        
           | fny wrote:
           | We've had AI retrieval for two decades--this is the first
           | time you can outsource your intelligence to a program. In the
           | 2000-2010s, the debate was "why memorize when you can just
           | search and synthesize." The debate is now "why even think?"
           | (!)
           | 
           | I think it's obvious why it would be bad for people to stop
           | thinking.
           | 
           | 1. We need people to be able to interact with AI. What good
           | is it if an AI develops some new cure but no one understands
           | or knows how to implement it?
           | 
           | 2. We need people to scrutinize an AI's actions.
           | 
           | 3. We need thinking people to help us achieve further
           | advances in AI too.
           | 
           | 4. There are a lot of subjective ideas for which there are no
           | canned answers. People need to think through these for
           | themselves.
           | 
           | 5. Also, a world of hollowed-out humans who can't muster the
           | effort to write letters to their own kids terrifies me[0]
           | 
           | I could think of more, but you could also easily ask ChatGPT.
           | 
           | [0]: https://www.forbes.com/sites/maryroeloffs/2024/08/02/goo
           | gle-...
        
             | TheOtherHobbes wrote:
             | I'd argue that most humans are _terrible_ at thinking. It's
             | actually one of our weakest and most fragile abilities.
             | We're only rational because our intelligence is collective,
             | not individual. Writing and publishing distribute and
             | distill individual thinking so good and useful ideas tend
             | to linger and the noise is ignored.
             | 
             | What's happening at the moment is an attack on that
             | process, with a new anti-orthodoxy of "Get your ideas and
             | beliefs from polluted, unreliable sources."
             | 
             | One of those is the current version of AI. It's good at the
             | structure of language without having a reliable sense of
             | the underlying content.
             | 
             | It's possible future versions of AI will overcome that. But
             | at the moment it's like telling kids "Don't bother to learn
             | arithmetic, you'll always have a calculator" when the
             | calculator is actually a random number generator.
        
           | 65 wrote:
           | Perhaps we're going technologically backwards.
           | 
           | Oral tradition compared to writing is clearly less accurate.
           | Speakers can easily misremember details.
           | 
           | Going from writing/documentation/primary sources to AI seems
           | to me like going back to oral tradition, where we must trust
           | the "speaker" - in this case the AI - to be truthful in its
           | interpretation of its sources.
        
             | jazzyjackson wrote:
             | Walter J. Ong's _Orality and Literacy_ is an illuminating
             | read.
             | 
             | One benefit of orality is that the speaker can defend or
             | clarify their words, whereas once you've written something,
             | your words are liable to be misinterpreted by readers
             | without the benefit of your rebuttal.
             | 
             | Consider too that courts (in the US at least) prefer oral
             | arguments to written ones; perhaps we consider it more
             | difficult to lie in person than in writing. PhD defenses
             | are another holdover of tradition: you demonstrate your
             | competence rather than receive your credentials merely from
             | your written materials.
             | 
             | As for AI, I disagree that it's like oral tradition. AI is
             | not a speaker; it has no stake in defending its claims. I
             | would call it hyperliterate: an emulation of everything
             | that has been written.
        
           | yapyap wrote:
           | And honestly, reading and writing probably did make the
           | memory of youth a few generations down weaker.
           | 
           | If you are not expected to remember everything like the
           | ancient Greeks were, you are not training your memory as much,
           | and it will be worse than if you did.
           | 
           | Now, do I think it's fair to say AI is to reading/writing
           | what reading/writing was to memorizing? No, not at all. AI is
           | nowhere near as revolutionary, and we are not even close to
           | AGI.
           | 
           | I don't think AGI will be made in our lifetime. What we've
           | seen now is nowhere near AGI; it's parlor tricks to get
           | investors drooling and spending money.
        
         | alickz wrote:
         | > If you need the answer to a question, and you can either get
         | it directly, or spend time researching the answer, you're going
         | to learn much more with the latter approach than the former.
         | 
         | Why not force everyone to start from first principles then?
         | 
         | I think learning is tied to curiosity and curiosity is not tied
         | to difficulty of research
         | 
         | i.e. give a curious person a direct answer and they will go on
         | to ask more questions, give an incurious person a direct answer
         | and they won't go on to ask more questions
         | 
         | We all stand on the shoulders of giants, and that is a _good_
         | thing, not bad
         | 
         | Forcing us to forgo the giants and claw ourselves up to their
         | height may have benefits, but in my eyes it is a way less
         | effective form of acquiring knowledge.
         | 
         | The compounding force of knowledge is awesome to behold, even
         | if it can be scary
        
           | sanderjd wrote:
           | Yes exactly. I think the concern here is totally valid. But
           | for me personally, having LLMs unblock me more quickly on
           | each question I have has allowed me to ask more questions, to
           | research more things in the same amount of time. Which is
           | great!
        
           | dragon96 wrote:
           | One of the values of doing your own research is it forces you
           | to speak the "language" of what you're trying to do.
           | 
           | It's like the struggle that we've all had when learning our
           | first programming language. If we weren't forced to wrestle
           | with compilation errors, our brains wouldn't have adapted to
           | the mindset that the computer will do whatever you tell it to
           | do and only that.
           | 
           | There's a place for LLMs in learning, and I feel like it
           | satisfies the same niche as pre-synthesized Medium tutorials.
           | It's no replacement for reading documentation or finding
           | answers for yourself though.
        
         | klodolph wrote:
         | > They aren't considering the long term downside to this.
         | 
         | This echoes sentiments from the 2010s centered around hiring.
         | Companies generally don't want to hire junior engineers and
         | train them--this is an investment with risks of no return for
         | the company doing the training. Basically, you take your senior
         | engineers away from projects so they can train the juniors, and
         | then the juniors now have the skills and credentials to get a
         | job elsewhere. Your company ends up in the hole, with a
         | negative ROI for hiring the junior.
         | 
         | Tragedy of the commons. Same thing today, different mechanism.
         | Are we going to end up with a shortage of skilled software
         | engineers? Maybe. IMO, the industry is so incredibly wasteful
         | in how engineers are allocated and what problems they are told
         | to work on that it can probably deal with shortages for a long
         | time, but that's a separate discussion.
        
           | SoftTalker wrote:
           | Engineers partly did this to themselves. The career advice
           | during that time period was to change jobs every few years,
           | demanding higher and higher salaries. So now, employers don't
           | want to pay to train entry-level people, as they know they
           | are likely going to leave, and at the salaries demanded they
           | don't want to hire junior folks.
        
             | TeamDman wrote:
             | If incentives to stay outweighed leaving, people would
             | stay.
        
             | Daishiman wrote:
             | This is only because companies don't want to raise salaries
             | as engineers' skill levels increase. If companies put
             | junior employees in higher salary bands as their skill
             | levels increase there wouldn't be a problem.
        
               | kridsdale1 wrote:
                | Capitalism and fiduciary duty prevent employers from
                | paying people their market value when they are content
                | enough to stay.
               | 
                | An employee who does not make the effort to re-peg their
                | labor time to market rates for their skill level is
               | implicitly consenting to a prior agreement (when they
               | were hired).
        
               | klodolph wrote:
               | Funny how fiduciary duty in these contexts is
               | overwhelmingly short-sighted.
        
               | Terr_ wrote:
               | Sometimes because the company investors are
               | overwhelmingly short-sighted, which IMO ties back to the
               | whole "financialization" of our economy into a quasi-
               | casino.
        
               | Daishiman wrote:
               | That is an extremely short-sighted view on what is
               | essentially an iterated game where the domain knowledge
               | employees have drastically increases their value to the
               | company over time.
        
               | SoftTalker wrote:
               | Yes that's why I said "partly."
               | 
               | When I started work (this was in the pre-consumer-
               | internet era), job hopping was already starting to be a
                | thing, but there was definitely still a large "old school"
               | view that there should be some loyalty between employer
               | and employee. One of my first jobs was a place where they
               | hired for potential. They hired smart, personable people
               | and taught them how to program. They paid them fairly
               | well, and gave annual raises and bonuses. I was there for
               | about 8 years, my salary more than doubled in that time.
               | Maybe I could have made more elsewhere, I didn't even
               | really look because it was a good environment, nice
               | people, low stress, a good mix of people since not
               | everyone (actually only a few) were Comp. Sci. majors.
               | 
               | I don't know how much that still happens, because why
               | would a company today invest in that only to have the
               | employee leave after two years for a higher salary. "They
               | should just pay them more" well yeah, but they _did_ pay
               | them in the sense of teaching them a valuable skill. And
               | their competitors for employees started to include VC
               | funded startups playing with free money that didn 't
               | really care what it cost to get bodies into the shop.
               | Hard to compete with that when you actually have to earn
               | the money that goes into the salary budget.
               | 
               | Would the old school approach work today? Would employees
               | stay?
        
               | klodolph wrote:
               | Cheap money seems to have dried up, so maybe more old-
               | school approaches wouldn't get sniped by VC-funded
               | startups.
        
             | klodolph wrote:
             | "Engineers did this to themselves..."
             | 
             | Long, long ago, the compact was that employees worked hard
             | for a company for a long time, and were rewarded with
             | pensions and opportunities for career advancement. If you
             | take away the pensions and take away the opportunities for
             | career advancement, your employees will advance their
             | careers by switching companies--and the reason that this
             | works so well is because all of the _other_ companies would
             | rather pay more to hire a senior engineer rather than take
             | a risk on a junior.
             | 
             | It's a systemic problem and not something that you can
             | blame on employees. Not without skipping over a long list
             | of other contributing factors, at least.
        
             | idiotsecant wrote:
             | I think you've got cause and effect backwards. Employers
             | used to offer incentives to stay in a company and grow
             | organically. They decided that was no longer going to be
             | the deal. So they got the current system. There was never
             | some sudden eureka moment when the secret engineers club
             | decided they wanted to have a super stressful life event
             | every few years just to keep up with inflation.
        
               | SoftTalker wrote:
               | As I said in another response, I think (at least partly)
               | a contributing factor was the essentially limitless
               | salary budget that VC funded startups and the FAANG
                | companies had. You had software developers who could
                | suddenly make more than doctors and lawyers, and of
                | course many of them sensibly acted in their own best
                | interest. But that left other employers saying "we're not
                | going to invest in employees who are only going to turn
                | around and leave for salaries we can't pay" and "if we
                | have to pay those kinds of salaries, we're not going to
                | hire junior people; we want experience."
        
             | scarface_74 wrote:
             | Or the company could recognize the dangers of salary
             | compression and inversion and pay developers at market
             | rates
        
             | Salgat wrote:
             | This is merely the result of the incentive structure of
             | corporations, which make it far more lucrative to switch
             | jobs rather than stay at one company.
        
         | idiotsecant wrote:
         | I don't know if I agree here. When I ask an LLM a question it
         | always leads to a whole lot of other questions with responses
         | tailored to my current level of understanding. This usually
         | results in a much more effective learning session than reading
         | a bunch of material that I might not retain anyway because I'm
         | scanning it looking for my answers.
        
           | cma wrote:
           | Also challenging aspects of their explanations to get at
           | something better is good for developing critical thinking.
        
         | awongh wrote:
         | *but most people will not do that*
         | 
         | LLMs will definitely be a technology that widens the knowledge
         | gap at the same time that it improves access to knowledge. Just
         | like the internet.
         | 
         | 30 years ago people dreamed about how smart everyone would be
         | with humanity's knowledge instantly accessible. We've had
         | Wikipedia for a while, but what's the take-up rate of this
         | infinite amount of information? Most people prefer to scroll
         | rage-bait videos on their phones (content that doesn't give
         | them knowledge or even make them feel better, just makes them
         | angry).
         | 
         | Of course it's amazing to hear every once in a while about the
         | guy who maintains a vim plugin by coding on his phone in
         | Pakistan... or whatever other thing is enabled by the internet
         | for people who suddenly have access to this stuff.
         | That's not an effect on all humans on average; it's an effect
         | on a few people who finally have a chance to take advantage of
         | these tools.
         | 
         | In a YouTube interview I heard a physicist say that LLMs are
         | helping physics research simply because any physicist out there
         | can now ask graduate-level questions about currently published
         | papers - that is, have access to knowledge that would have been
         | hard to come by before, sharing knowledge across sub-domains of
         | physics by asking ChatGPT.
        
           | yard2010 wrote:
           | Anecdotal, but I for one despise the YouTube/Instagram etc.
           | rabbit holes. When I'm in the mood for a good one I scroll
           | Wikipedia. I've had the best random conversations about what
           | I read there, and it feels like I remember it forever.
        
           | atlintots wrote:
           | Pakistan mentioned! Let's go!!
        
         | colechristensen wrote:
         | >you can either get it directly, or spend time researching the
         | answer, you're going to learn much more with the latter
         | 
         | A LOT of the time the things I ask LLMs for are to avoid
         | metaphorically wading through a garbage dump looking for a
         | specific treasure. Filtering through irrelevant data and
         | nonsense to find what I'm looking for is not personal
          | development. What the LLM gives back is often a much better
          | jumping-off point for looking through traditional sources
          | for information.
        
           | strix_varius wrote:
            | Often when I ask an LLM about topics I was once reasonably
            | expert in, but have spent a few months or years away from,
            | its answers provide garbage as if it were treasure.
        
         | taeric wrote:
          | I thought I saw somewhere that learning is specifically
          | better when you are wrong, if the feedback is rapid enough.
          | That is, "guess and check" is the quickest path to learning.
         | 
          | Specifically, asking a question and getting an answer is not
          | a general path to learning. Being asked a question and
          | answering it yourself is - somewhat regardless of whether
          | you are correct.
        
           | matsemann wrote:
           | I hated when doing math homework and they didn't give me the
           | answer sheet. If I could do an integral and verify if it's
           | correct or not, I could either quickly learn from my mistake,
            | or keep doing integrals with added confidence, which is
            | how I learned best. Gatekeeping the answers because
            | someone might use them wrong felt weird; you still had to
            | show your work.
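            | 
            | For example, you can check an integral yourself just by
            | differentiating the result:
            | 
            |     \int x e^{x}\,dx = (x - 1)e^{x} + C,
            | 
            |     \frac{d}{dx}\big[(x - 1)e^{x}\big]
            |         = e^{x} + (x - 1)e^{x} = x e^{x}.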
        
             | taeric wrote:
              | Yeah. I also felt it was largely at odds with the entire
              | concept of flashcards. Which... are among the most
             | effective tools that I did not take advantage of in grade
             | school.
        
         | dumbfounder wrote:
         | Sure, if I spend one hour researching a problem vs asking AI in
         | 10 seconds, yes I will almost always learn more in the one
         | hour. But if I spend an hour asking AI questions on the same
         | subject I believe I can learn way more than by reading for one
         | hour. I think the analogy could be comparing a lecture to a
         | one-on-one tutoring session. Education needs to evolve to keep
         | up with the tools that students have at their disposal.
        
         | anigbrowl wrote:
         | I think you put your finger on it with the mention of
         | discipline. I find AI tools quite useful for giving me a quick
         | outline of things I want to play with or get up to speed on
          | fast, but not necessarily get too invested in. But if you find
         | yourself so excited by a particular result that it sets your
         | imagination whirling, it might be time to switch out of
         | generative mode and use the AI as a tutor to deepen your actual
         | understanding, ideally in combination with books or other
         | static learning resources.
        
       | ankit219 wrote:
       | There are two aspects to this from my pov. And I think it might
       | be controversial.
       | 
        | When I have a question about any topic and I ask ChatGPT, I
        | usually chat about more things, coming up with questions based
        | on the answer - mostly stupid questions. I feel like I am
        | taking in the information, analyzing it, and then diving
        | deeper because I am curious. This is based on how I learn
        | about stuff. I know I need to check a few things, and that
        | it's not fully accurate, but the conversation flows in a
        | direction I like.
       | 
        | Compare this to researching on the internet: there are some
        | good aspects, but more often than not, I end up reading an
        | opinionated post by someone (no matter the topic, if you go
        | deep enough, you will land on an opinionated factual telling).
        | That feels like someone decided what questions are important,
        | what angles we need to look at, and what the conclusion should
        | be. Yes, it is educational, but I am always left with
        | lingering questions.
       | 
       | The difference is curiosity. If people are curious about a topic,
       | they will learn. If not, they are happy with the answer. And that
       | is not laziness. You cannot be curious about everything.
        
         | engineer_22 wrote:
         | Like an indefatigable, kindly professor.
        
         | regentbowerbird wrote:
          | > Compare this to researching on the internet: there are
          | some good aspects, but more often than not, I end up reading
          | an opinionated post by someone (no matter the topic, if you
          | go deep enough, you will land on an opinionated factual
          | telling).
         | 
         | ChatGPT is in fact opinionated, it has numerous political
         | positions ("biases") and holds some subjects taboo. The
         | difference is that a single actor chooses the political
         | opinions of the model that goes on to interact with many more
         | people than a single opinion piece might.
        
           | immibis wrote:
           | An example (over 1 year old): https://www.reddit.com/r/LateSt
           | ageCapitalism/comments/17dmev...
        
           | ankit219 wrote:
            | Yes, that is true. Though that can be sidestepped if you
            | notice it and ask the model to ignore those biases (an
            | extreme example would be opposition prep for a debate). I
            | am not interested in politics and other related issues
            | anyway.
        
           | lazybreather wrote:
            | Political searches, I assume, would be a very small
            | percentage of real learning. Even in such cases, I would
            | rather rely on a good LLM's response than scrounging the
            | websites of mainstream media or blogs. For an objective
            | response, reading through opinionated articles and forming
            | my opinion is an absolute waste of time. I'd want the
            | truth as accurately as possible. Plus, people don't
            | generally change political opinions based on what they
            | read; they read stuff aligning with their side.
        
             | magicalist wrote:
             | > _For an objective response, reading through opinionated
             | articles and forming my opinion is an absolute waste of
             | time_
             | 
             | If the sources are all opinionated articles, per GP, that's
             | what the LLM is going to base its "objective response" on.
             | That's literally all it has as sensory input.
        
           | sanderjd wrote:
            | Fine. But it would never occur to me to try to form
            | political opinions using ChatGPT.
        
             | snapcaster wrote:
             | I don't think that's modeling the risk correctly. In my
             | mind the risk is that ChatGPT's creators are able to
             | influence your political opinions _without_ you seeking
             | that out
        
               | sanderjd wrote:
               | I honestly don't see how. I haven't ever asked a question
               | that implicates politics. This is just not what I use it
               | for.
               | 
               | I understand the concern about this risk in general. I'm
               | just making a personal observation that this isn't how I
               | use these tools.
        
         | sanderjd wrote:
         | I really think the ability to ask questions entirely free from
         | all judgment is an under-emphasized aspect of the power of
         | these tools. Yes, some people are intellectually secure enough
         | to ask the "dumb" questions of other humans, but most people
         | are not, especially to an audience of strangers. I don't think
          | I ever once asked a question on Stack Overflow, because it
          | was easy to see how a question I worried might be dumb would
          | be treated by the community there. But I ask all sorts of dumb
         | questions of these models, with nary a concern about being
         | judged. I love that aspect of it.
        
           | redcobra762 wrote:
            | The tool is absolutely biased; what makes you think it
            | wouldn't be?
        
             | MarcelOlsz wrote:
             | This guy is obviously unfamiliar with Tay lol.
        
             | sanderjd wrote:
             | My comment doesn't say anything about bias...
        
           | henriquemaia wrote:
            | That's a subtle yet important point. Putting themselves
            | out there is not easy for some people. LLMs can take that
            | pressure away.
            | 
            | The 'but' lies in how much freedom is given to the LLM. If
            | constrained, its refusal to answer may become a somewhat
            | triggering possibility.
        
       | _aavaa_ wrote:
        | A preprint is available on arXiv [0]; see the top of page 18
        | for what metacognitive laziness is:
       | 
       | "In the context of human-AI interaction, we define metacognitive
       | laziness as learners' dependence on AI assistance, offloading
       | metacognitive load, and less effectively associating responsible
       | metacognitive processes with learning tasks."
       | 
       | And they seem to define, implicitly, "metacognitive load" as the
       | cognitive and metacognitive effort required for learners to
       | regulate their learning processes effectively, particularly when
       | engaging in tasks that demand active self-monitoring, planning,
       | and evaluation.
       | 
        | They analogize metacognitive laziness to cognitive offloading,
        | where we have our tools do the difficult cognitive tasks for
        | us, which robs us of opportunities to develop those skills and
        | ultimately leaves us dependent on those tools.
       | 
       | [0]: https://arxiv.org/pdf/2412.09315
        
         | MetaWhirledPeas wrote:
         | > In the context of human-AI interaction, we define
         | metacognitive laziness as learners' dependence on AI
         | assistance, offloading metacognitive load, and less effectively
         | associating responsible metacognitive processes with learning
         | tasks.
         | 
         | This sounds like parents complaining when we use Google Maps
         | instead of a folding map. Am I worse at reading a regular map?
         | Possibly. Am I better off overall? Yes.
         | 
         | Describing it as "laziness" is reductive. "Dependence on
         | [_____] assistance" is _the point of all technology_.
        
           | amrocha wrote:
           | When you're using a map you're still navigating, even if
           | you're just following directions. The act of navigating
           | teaches you spatial awareness regardless of how you got
           | there.
           | 
           | AI doesn't provide directions, it navigates for you. You're
            | actively getting stupider every time you take an LLM's answer
           | for granted, and this paper demonstrates that people are
           | likely to take answers for granted.
        
             | diggan wrote:
             | > AI doesn't provide directions, it navigates for you.
             | 
              | LLMs (try to) give you what you're asking for. If you
              | ask for directions, you'll get something that resembles
              | that; if you ask it to 100% navigate, that's what you
              | get.
             | 
             | > and this paper demonstrates that people are likely to
             | take answers for granted.
             | 
             | Could you point out where exactly this is demonstrated in
              | this paper? As far as I can tell from the study, people
              | who used ChatGPT for the studying did better than the
              | ones that didn't, with no difference in knowledge
              | retention.
        
               | amrocha wrote:
                | Page 18, first paragraph: it talks about how ChatGPT users
               | engaged less with the editing process compared to other
               | methods. Sorry, copy and paste isn't working for some
               | reason.
        
               | MetaWhirledPeas wrote:
                | > Could you point out where exactly this is
                | demonstrated in this paper? As far as I can tell from
                | the study, people who used ChatGPT for the studying
                | did better than the ones that didn't, with no
                | difference in knowledge retention.
               | 
               | This is what I observed as well. For the "metacognitive
               | laziness" bit they had to point to other studies.
        
             | danielbln wrote:
             | If I use Google Maps I ain't navigating. I follow the
             | instructions until I arrive.
        
           | aylmao wrote:
           | > "Dependence on [_____] assistance" is the point of all
           | technology.
           | 
           | I will note two things though.
           | 
            | 1. Not all technology creates "dependence". Google Maps
            | removes the need to carry bulky maps or buy new ones to
            | stay updated, but someone who knows how to read Google
            | Maps will know how to read a normal map, even if they're
            | not as quick at it.
           | 
           | 2. The best technology isn't defined by the "dependence" it
           | creates, or even the level of "assistance" it provides, but
           | for what it enables. Fire enabled us to cook. Metalworking
           | enabled us to create a wealth of items, tools and structures
           | that wouldn't exist if we only had wood and stone. Concrete
           | enabled us to build taller and safer. Etc.
           | 
            | It's still unclear what AI chatbots are enabling. Is LLMs'
            | big claim to fame letting people answer problem sets and
            | emails with minimal effort? What does this unlock? There's a
           | lot of talk about allowing better data analysis, saving time,
           | and vague claims of an ai revolution, but until we see X, Y
           | and Z, and can confidently say "yeah, X, Y and Z are great
           | for mankind, and they couldn't have happened without
           | chatbots", it's fair for people to keep complaining about the
           | change and downsides AI chatbots are bringing about.
        
       | enjoyitasus wrote:
       | I think this holds water.
       | 
       | Metacognition is really how the best of the best can continue to
       | be at their best.
       | 
       | And if you don't use it, you lose it.
       | 
       | https://x.com/redshirtet/status/1879922330983358941
        
       | ziddoap wrote:
       | I'm certainly of two minds on this.
       | 
       | On one hand, this reminds me of how all of the kids were going to
       | be completely helpless in the real world because "no one carries
       | a calculator in their pocket". Then calculators became something
       | ~everyone has in their pocket (and the kids ended up just fine).
       | 
       | On the other hand, I believe in the value of "learning to learn",
       | developing media literacy, and all of the other positives gained
       | when you research and form conclusions on things independently.
       | 
       | The answer is probably somewhere in the middle: leveraging LLMs
       | as a learning aid, rather than LLMs being the final stop.
        
         | parsimo2010 wrote:
         | tl;dr: I agree.
         | 
         | We don't teach slide rules and log tables in school anymore.
         | Calculators and computers have created a huge metacognitive
         | laziness for me, and I teach calculus and have a PhD in
         | statistics. I barely remember the unit circle except for
         | multiples of pi/4 radians. I can do it in multiples of pi/6 but
         | I'm slower.
         | 
         | But guess what? I don't think I'm a worse mathematician because
         | I don't remember these things reflexively. I might be a little
         | slower getting the answer to a trivial problem, but I can still
         | find a solution to a complex problem. I look up integral forms
         | in my pocket book of integrals or on Wolfram Alpha, because
         | even if I could derive the answer myself I don't think I'd be
         | right 100% of the time. So metacognitive laziness has set in
         | for me already.
         | 
         | But I think as long as we can figure out how to stop
         | metacognitive laziness before it turns into full-fledged brain-
         | rot, then we'll be okay. We'll survive as long as we can still
         | teach students how to think critically, and figure out how to
         | let AI assist us rather than turn us into the humans on the
          | ship from Wall-E. I'm a little worried that we'll make some
          | short-term mistakes (like not adapting our curriculum fast
          | enough), but it will work out.
        
           | mlyle wrote:
           | I am not sure calculators have hurt us much on the high end
           | of mathematical ability.
           | 
           | But man I cringe when I see 18 year old students reach for a
           | calculator to multiply something by .1.
        
           | largbae wrote:
           | I think you're right at the edge of explaining why this
           | "laziness" is a good thing. Everything that we have made is
           | built on what we had before, and abstracts away what we had
           | before. 99% of us don't remember how to make even the
           | simplest Assembly program, and yet we unleash billions of
           | instructions per second on the world.
           | 
            | Even outside of math and computers, when was the last time
            | you primed a well pump or filled an oil lamp? All of these
            | tasks have been abstracted away, freeing us to focus on
            | ever-more-specialized pursuits. Those that remain useful
            | will be abstracted away too, and for the better.
        
             | nottorp wrote:
             | > when was the last time you primed a well pump or filled
             | an oil lamp? All of these tasks have been abstracted away
             | 
             | They have not been abstracted away, they have been made
             | obsolete. Significant difference.
             | 
             | The danger with LLMs is people will never learn tasks that
             | are still needed.
        
               | parsimo2010 wrote:
                | Your comment exposes how much metacognitive laziness
                | you have in modern society: you didn't realize that
                | people still do these things, just not you. They
                | aren't obsolete tasks, just done at a layer you don't
                | see.
               | 
               | I don't have to prime a well pump any more because my
               | house and workplace are hooked into the municipal water
               | system. I don't have to prime a pump because that task
               | has gotten so abstract as to become turning a faucet
               | handle. But engineers at the municipal water plant do
               | have to know how to do this task.
               | 
               | Similarly, filling an oil lamp and lighting it is now
               | abstracted for normal people as flipping a light switch
               | (maybe changing a light bulb is a more appropriate
               | comparison). But I actually have filled an oil lamp when
               | I was a kid because we kept "decorative" hurricane lamps
               | in my house that we used when the power went out. The
               | exact task of filling an oil lamp is not common, but
               | filling a generator with fuel is still needed to keep the
               | lights on in an emergency, although it is usually handled
               | by the maintenance staff of apartment buildings and large
               | office buildings.
        
         | MetaWhirledPeas wrote:
         | > On the other hand, I believe in the value of "learning to
         | learn", developing media literacy, and all of the other
         | positives gained when you research and form conclusions on
         | things independently.
         | 
         | That is not going away. Learning better prompts, learning when
         | to ignore AI, learning how to take information and _turn it
         | into something practical_. These new skills will replace the
         | old.
         | 
         | How many of us can still...
         | 
         | - Saddle a horse
         | 
         | - Tell time without a watch
         | 
         | - Sew a shirt
         | 
         | - Create fabric to sew a shirt
         | 
         | - Hunt with primitive tools
         | 
         | - Make fire
         | 
         | We can shelter children from AI, or we can teach them how to
         | use it to further themselves. Talk to the Amish if you want to
         | see how it works out when you forgo anything that _feels_ too
         | futuristic. A respectable life, sure. But would any of us
         | reading this choose it?
        
           | ziddoap wrote:
           | > _How many of us can still... <stuff>_
           | 
           | Yes, this is what I meant by the calculator part of my
           | comment. You've got some other good examples.
           | 
           | > _learning when to ignore AI, learning how to take
           | information and turn it into something practical._
           | 
           | This is what I meant by using LLMs as a tool rather than an
           | end.
        
             | skydhash wrote:
              | How many of us still have to do these things? You either
              | need to do them or you don't. If you do, you will learn
              | how, or find someone who does.
              | 
              | We still need to calculate numbers, and I can say it's
              | silly if I find someone needs a calculator to do 5x20.
              | Same if you're taking hours and multiple sheets of paper
              | for something that will take you a few minutes with a
              | calculator. There's a question of scale and basic
              | understanding that divides the two.
        
               | ziddoap wrote:
               | > _How many of us still have to do these things?_
               | 
               | Yep, we agree. That's the whole point of what I said in
               | the first half of my original comment.
               | 
               | At one time, they were common skills. Things changed,
               | they aren't common, they aren't really needed (for most
               | people), and everyone is doing just fine without them.
               | We've freed up time and mental capacity for other
               | (hopefully more beneficial) tasks.
               | 
                | (I'm confused why this reply and the other above it
                | are just restating the first part of my original
                | comment, but framing it like it's not a restatement.)
        
         | 65 wrote:
         | It's astounding to me that people just like... always trust
         | whatever the LLM says.
         | 
         | I have some friends who use ChatGPT for everything. From doing
         | work to asking simple questions. One of my friends wanted a bio
         | on a certain musician and asked ChatGPT. It's a little
          | frightening that he couldn't, you know, read the Wikipedia
          | page of this musician, where all of the same information is,
          | with sources for the material.
         | 
         | My mom said she used ChatGPT to make a "capsule wardrobe" for
         | her. I'm thinking to myself (I did not say this to her)... you
         | can't just like look at your clothes and get rid of ones you
         | don't wear? Why does a computer need to make this simple
         | decision?
         | 
         | I'm really not sure LLMs should ever be used as a learning aid.
         | I have never seen a reason to use them over, you know,
          | searching for something online, or thinking of your own
          | creative story. If someone can make a solid case as to why
          | LLMs are useful, I would like to hear it.
        
           | kridsdale1 wrote:
           | Regarding your mom's clothes: she wasn't asking the machine
           | to give advice she couldn't think of on her own, she was
           | seeking external validation and permission to purge and
           | override the hoarder urge of her personality.
           | 
           | This is like when CEOs hire outside consulting firms to do
           | layoffs for them. Pinning the pain of loss on some scapegoat
           | makes it more bearable.
        
           | nottorp wrote:
           | > One of my friends wanted a bio on a certain musician and
           | asked ChatGPT.
           | 
           | I use ChatGPT (or Gemini) instead of web searches. You can
           | blame the content and link farms that are top of the search
           | results, and the search engines focusing on advertising
           | instead of search, because we're the product.
           | 
            | Why your friend doesn't know about wikipedia is another
            | matter; if I wanted a generic info page about some topic
            | I'd go directly there. But if I wanted to know whether Bob
            | Geldof's hair is blue, I might ask an LLM instead of
            | reading the whole wikipedia page.
            | 
            | I also ask LLMs for introductory info about programming
            | topics I don't know about, because I don't want to go to
            | Google and end up on w3schools, geeksforgeeks and crap
            | like that.
           | 
           | I don't really trust LLMs for advanced programming topics,
           | you know, what people pay me for. But they're fine for giving
           | me a function signature or even a small example.
        
             | 65 wrote:
             | You can use source material instead of LLMs for all of
             | this.
             | 
             | "Is Bob Geldof's hair blue?" -> Search for Bob Geldof ->
             | Look at images of Bob Geldof.
             | 
              | Intro programming topics can be found in the project's
              | documentation. Your search query might be "[programming
              | topic] getting started", and usually if it's a package
              | or a tool there will be documentation. If you want good
              | documentation on web dev stuff that isn't w3schools or
              | geeksforgeeks, you can use MDN.
             | 
             | Or, if you really want a general overview there's probably
             | a YouTube video about the topic.
             | 
              | Additionally, appending "reddit" to a search will give
              | better results than SEO junk. There are always ways to
              | find quality information via search engines.
        
               | nottorp wrote:
               | > "Is Bob Geldof's hair blue?" -> Search for Bob Geldof
               | -> Look at images of Bob Geldof
               | 
               | Assuming I get images of Bob Geldof. More likely the
               | first page will be pinterest login-required results.
               | 
               | > there's probably a YouTube video about the topic.
               | 
               | Life's too short to watch talking heads about ... you
               | know, WRITING code ...
               | 
               | > can be found at the documentation of the website
               | 
               | Seriously? Maybe for the top 500 npm packages. Not for
               | the more obscure libraries that may have only some
               | doxygen generated list of functions at best.
        
               | 65 wrote:
               | > Assuming I get images of Bob Geldof. More likely the
               | first page will be pinterest login-required results.
               | 
               | You do realize Google/Bing/DDG/Kagi all have an Images
               | tab, right? Come on.
               | 
               | > Life's too short to watch talking heads about ... you
               | know, WRITING code ...
               | 
               | If I want a high level overview of what the thing even
               | is, a YouTube video can be useful since there will be
               | explanations and visual examples. You can read
               | documentation as well. For example, if I want a slower
               | overview of something step by step, or a talk at a
               | conference about why to use this thing, YouTube can be
               | helpful. I was just looking at videos about HTMX this
               | weekend, hearing presentations by the authors and some
                | samples. That's not to say that if I actually use the
                | thing I won't be reading the documentation; it's more
                | just useful for understanding what the thing is.
               | 
               | > Seriously? Maybe for the top 500 npm packages. Not for
               | the more obscure libraries that may have only some
               | doxygen generated list of functions at best.
               | 
               | How do you expect your LLM to do any better? If you're
               | using some obscure package there will probably be
               | documentation in the GitHub README somewhere. If it's
               | horrible documentation you can read the Typescript types
               | or do a code search on GitHub for examples.
               | 
               | This is all to say that I generally don't trust LLM
               | output because I have better methods of finding the
               | information LLMs are trained on. And no hallucinations.
        
           | twobitshifter wrote:
            | I agree. At first I thought GPT would be used by tech-savvy
            | folk, but now it is clear that it's becoming a crutch. My
            | friend couldn't respond to an email without it.
        
         | twobitshifter wrote:
         | I was taught to not use calculators on exams and homework and
         | that's why I am able to do math in my head today.
         | 
         | I have recently seen GenZ perplexed by card games with addition
         | and making change. For millennials, this is grade school stuff.
        
           | ziddoap wrote:
           | Sure, there's obviously a scale.
           | 
           | I'm not about to divide 54,432 by 7.6, even though I was
           | taught how to. I'll pull out my phone.
           | 
           | On the other end, I'm not going to pull out my phone to
           | figure out I owe you $0.35.
           | 
           | I think the point I was trying to make still stands.
        
       | robviren wrote:
        | This technology is arguably as ubiquitous as a calculator. So
        | long as I understand that generative AI is a tool and not a
        | solution, is it bad to treat it a bit like a calculator? Does
        | this metacognitive laziness apply to those who depend on
        | calculators?
        | 
        | I understand it is a bit apples to oranges, but I'm curious
        | what people's take is.
        
         | alternatex wrote:
          | I am definitely lazier today with regard to doing math in
          | my head than when I was young.
         | 
         | I think a comparison with calculators is possible, but the
         | degree to which calculators are capable of assisting us is so
         | incomparably smaller that the comparison would be meaningless.
         | 
         | Smart phones changed society a lot more than calculators did
         | and now AI is starting to do the same, albeit in a more subtle
         | manner.
         | 
          | Treating AI like it's just a calculator seems
          | naive/optimistic. We're still reeling from the smart phone
          | revolution and have not solved many of the issues it brought
          | upon its arrival.
          | 
          | I have a feeling the world has become a bit cynical and less
          | motivated to debate how to approach these major
          | technological changes. There have been too many of them in
          | too short a time, and now everyone has a whatever attitude
          | towards the problems these advancements introduce.
        
       | teekert wrote:
        | Idk, the "explain {X} to me like I'm 12" approach has
        | certainly helped me delve into new topics; Nix with Flakes
        | comes to mind as one of my latest ventures.
        
       | charlie0 wrote:
        | I mean, this is the exact same thing that happened when
        | calculators were invented. The number of people who can do
        | arithmetic in their heads drastically dropped, because why
        | waste your time? Ditto for when maps apps came out: no more
        | need to memorize a bunch of locations, because you can just
        | have the app take you there.
        
         | hb-robo wrote:
          | It's funny: calculators were incredibly politicized when I
          | was growing up (TI-84 generation, so kids were getting
          | caught programming functions to solve exam questions), but
          | GPS was just taken as a given.
        
       | floppiplopp wrote:
        | I'm at this very moment testing deepseek-r1, a so-called
        | "reasoning" llm, on the excellent "rustlings" tutorial. It is
        | well documented and its solutions are readily available
        | online. It is my lazy go-to test for coding tasks, to assess
        | if and when I have to start looking for a new job and take up
        | software engineering as a hobby. Another reason I test with
        | rustlings is to assess its value as a learning tool for
        | students and future colleagues. Maybe these things have use as
        | a teacher? Also, the Rust compiler is really good at offering
        | advice, so there's an excellent baseline to compare the
        | llm-output against.
       | 
        | And well, let me put it this way: deepseek-r1 won't be
        | replacing anyone anytime soon. It generates a massive amount
        | of text, mostly nonsensical and almost always terribly,
        | horribly wrong. But inexperienced devs, and especially
        | beginners, will be confused and led down the wrong path,
        | potentially outsourcing rational thought to something that
        | just sounds good but actually isn't.
       | 
       | Currently, over-reliance on the ramblings of a statistical model
       | seems detrimental to education and ultimately the performance of
        | future devs. As probably the last generation of old-school
        | software engineers, who were trained on coffee and tears of
        | frustration and who had to really work out code and
        | architecture themselves, golden times might lie ahead, because
        | someone will have to fix the garbage produced en masse by
        | llms.
        
         | diggan wrote:
          | > And well, let me put it this way: deepseek-r1 won't be
          | replacing anyone anytime soon. It generates a massive amount
          | of text, mostly nonsensical and almost always terribly,
          | horribly wrong. But inexperienced devs, and especially
          | beginners, will be confused and led down the wrong path,
          | potentially outsourcing rational thought to something that
          | just sounds good but actually isn't.
         | 
         | Are you considering the full "reasoning" it does when you're
         | saying this? AFAIK, they're meant to be "rambling" like that,
         | exploring all sorts of avenues and paths before reaching a
          | final conclusive answer that is still "ramble-like". I think
          | the purpose is to layer something on top that can finalize
          | the answer, rather than just taking whatever you get from
          | that and using it as-is.
         | 
         | > Currently, over-reliance on the ramblings of a statistical
         | model seems detrimental to education and ultimately the
          | performance of future devs. As probably the last generation
          | of old-school software engineers, who were trained on coffee
          | and tears of frustration and who had to really work out code
          | and architecture themselves, golden times might lie ahead,
          | because someone will have to fix the garbage produced en
          | masse by llms.
         | 
         | I started coding just before Stack Overflow got popular, and
          | remember the craze when it did get popular. Blog posts about
          | how Stack Overflow would create lazy devs were all over the
          | place, with people saying it was the end of the real
          | developer. Not arguing against you or anything, I just find
          | it interesting how sentiments like these keep repeating over
          | time, with just minor details changing.
        
       | _the_inflator wrote:
       | What did the researchers expect?
       | 
        | Humans are lazy by nature; they seek shortcuts.
       | 
        | So given the choice between years of rote learning for an
        | education which in most cases is simply a soon-to-be-forgotten
        | certification, and watching TikTok while letting ChatGPT do
        | the lifting - this is all predictable, even without Behavioral
        | Design, Hooked, etc.
       | 
        | And the benefits usually rise with IQ level - nothing new
        | here; that's the very definition of IQ.
       | 
       | Learning and academia is hard, and even harder for those with
       | lower IQ scores.
       | 
       | A fool with a tool is still a fool and vice versa.
       | 
        | Motivation seems also at an all-time low. Why put in hours
        | when a prompt can work wonders?
       | 
       | Reading a book is a badge of honor nowadays more than ever.
        
         | diggan wrote:
          | > So given the choice between years of rote learning for an
          | education which in most cases is simply a soon-to-be-
          | forgotten certification, and watching TikTok while letting
          | ChatGPT do the lifting - this is all predictable, even
          | without Behavioral Design, Hooked, etc.
         | 
          | Would you argue that having books/written words also made
          | people lazier and able to remember less? Some people argued
          | (at the time) that the written word would make humanity less
          | intellectual as a whole, but I think the consensus is that
          | it led to the opposite.
        
         | n4r9 wrote:
         | > the benefits rise with IQ level - nothing new here, that's
         | the very definition of IQ
         | 
         | This is not obvious to me, and certainly is not the
         | "definition" of IQ. There are tools that become less useful the
         | more intelligent you are, such as multiplication tables. IQ is
         | defined by a set of standardized tests that attempt to quantify
         | human intelligence, and has some correlations with social,
         | educational and professional performance, but it's not clear
         | why it would help with use of AI tools.
        
       | nottorp wrote:
       | Funny, I passed the link to a whatsapp group with some friends
       | and the preview loaded with the title "error: cookies turned
       | off".
       | 
       | I'm sure my friends will RUSH to read the article now...
        
       | submeta wrote:
       | My observation is that I learn more than ever using LLMs.
       | 
        | I tend to learn by asking questions. I did this using Anki
        | cards for years (What is this or that?), finding the answer on
        | the back of the index card. Questions activate my thinking
        | more than anything, along with my attempt at answering the
        | question in my own terms.
       | 
        | My motto is: Seek first to understand, then to be understood
        | (Covey). And I do this when engaging with people or a topic -
        | by asking questions.
       | 
        | Now I do this with LLMs. I have been exploring ideas I would
        | never have explored had there not been LLMs, because I would
        | not have had the time to research material for learning, read
        | it, and create Q&A material for myself.
       | 
       | I even use LLMs to convert an article into Anki cards using
       | Obsidian, Python, LLMs, and the Anki app.
       | 
       | Crazy times we are in.
        
         | boromi wrote:
         | What does your workflow look like?
        
           | submeta wrote:
            | I use function calling in the OpenAI API and a template
            | that forces the LLM to generate questions and answers from
            | a text, in a format that can be synced into the Anki app.
            | Very straightforward workflow.
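            | 
            | A minimal sketch of that kind of pipeline (not my exact
            | script; it assumes the official openai Python package, and
            | the model and file names are placeholders - Anki can
            | import the tab-separated file it writes):
            | 
            |     import csv, json
            |     from openai import OpenAI
            | 
            |     client = OpenAI()  # reads OPENAI_API_KEY
            | 
            |     def article_to_cards(text, n=10):
            |         # Ask for strict JSON so the reply parses reliably.
            |         resp = client.chat.completions.create(
            |             model="gpt-4o-mini",  # placeholder model
            |             response_format={"type": "json_object"},
            |             messages=[
            |                 {"role": "system", "content":
            |                  'Reply with JSON: {"cards": '
            |                  '[{"question": "...", "answer": "..."}]}'},
            |                 {"role": "user", "content":
            |                  f"Make {n} Q&A flashcards from:\n{text}"},
            |             ],
            |         )
            |         return json.loads(
            |             resp.choices[0].message.content)["cards"]
            | 
            |     def write_anki_tsv(cards, path="cards.txt"):
            |         # Anki's importer accepts tab-separated front/back
            |         # pairs, one card per line.
            |         with open(path, "w", newline="") as f:
            |             w = csv.writer(f, delimiter="\t")
            |             for c in cards:
            |                 w.writerow([c["question"], c["answer"]])
            | 
            |     text = open("article.md").read()  # placeholder path
            |     write_anki_tsv(article_to_cards(text))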
        
             | boromi wrote:
             | Very interesting, would love a more detailed tutorial on
             | setting something similar up
        
         | polishdude20 wrote:
         | Yeah I've found the same. I might have some surface
         | understanding of some topic and I like just asking "am I right
         | in thinking this and this about this?" Or "Tell me why I'm
         | wrong about this".
        
         | david_allison wrote:
         | > Questions activate my thinking more than anything, and of
         | course my attempt at answering the question in my own terms.
         | 
         | This is very well-studied:
         | https://en.wikipedia.org/wiki/Testing_effect [not a high-
         | quality article, but should give an overview]
        
       | bradarner wrote:
       | Any time an empirical research project has to add QUOTES around a
        | common term, it sets off the nonsense radar:
       | 
       | ..."laziness"...
       | 
       | In the battle cry of the philosopher: DEFINE YOUR TERMS!!
       | 
        | What they really mean: new and different. Outside-the-box. "Oh
        | no, how will we grade this?!?" - a threat to our definition
        | and control of knowledge.
        
       | vunderba wrote:
       | I've been calling this out since OpenAI first introduced ChatGPT.
       | 
       | The danger in ubiquitously available LLMs, which seemingly have
       | an answer to any question, isn't necessarily their existence.
       | 
        | The _real danger_ lies in their seductive nature - in how
        | tempting it becomes to immediately reach for the nearest LLM
        | to provide an answer rather than taking a few moments to
        | quietly ponder the problem on your own. That act of
        | manipulating the problem in your head - critical thinking - is
        | ultimately a craft. And the only way to become better at it is
        | by practicing it in a deliberate, disciplined fashion.
        
         | motorest wrote:
          | > The real danger lies in their seductive nature - in how
         | tempting it becomes to immediately reach for the nearest LLM to
         | provide an answer rather than taking a few moments to quietly
         | ponder the problem on your own.
         | 
         | I get the point you're trying to make. However, quietly
         | pondering the problem is only fruitful if you have the right
         | information. If you don't, best case scenario you risk wasting
         | time reinventing the wheel for no good reason. In this
          | application, an LLM is just the same type of tool as Google:
          | a way to query and retrieve information for you to ingest.
          | Like Google, the info you get from queries is not the end
          | but the means.
         | 
         | As the saying goes, a month in the lab saves you a week in the
         | library. I would say it can also save you 10 minutes with
         | Claude/ChatGPT/Copilot.
         | 
         | Is hiring a private tutor also laziness?
        
           | abathur wrote:
            | I'll stop short of asserting you don't, but I'm having a
            | hard time convincing myself that your reply reflects GP's
            | point.
           | 
           | If I were to reframe GP's point, it would be: having to
           | figure out how to answer a question changes you a little.
           | Over time, it changes you a lot.
           | 
           | Yes, of course, there is a perspective from which a month
           | spent in the lab to answer a question that's well-settled in
           | the literature is ~wasted. But the GP is arguing for a
           | utility function that optimizes for improving the questioner.
           | 
           | Quietly pondering the problem with the wrong information can
           | be fruitful in this context.
           | 
           | (To be pragmatic, we need both of these. We'd get nowhere if
           | we had to solve every problem and learn every lesson from
           | first principles. But we'd also get nowhere if no one were
           | well-prepared and motivated to solve novel problems without
           | prior art.)
        
           | Arainach wrote:
           | >wasting time reinventing the wheel for no good reason
           | 
           | Nearly all of learning relies on reinventing the wheel. Most
           | personal projects involve reinventing wheels, but improving
           | yourself by doing so.
        
             | aylmao wrote:
             | Very much this.
             | 
             | Some of the most memorable moments I had in my learning
             | were when I "reinvented" something. In high-school, our
             | math teacher had us reinvent the derivative rules, and
             | later had us derive Euler's identity through Taylor Series.
             | They were big eureka moments. Going through all the work
             | someone else did hundreds of years ago is very inspiring,
             | and IMO gets you in the right mindset for discovery. I
              | can't imagine where the joy of learning comes from for
              | someone who sees learning as a test - a question, an
              | answer, nothing in between.
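              | 
              | (The derivation in question splits the Taylor series of
              | e^{ix} into its real and imaginary parts:
              | 
              |     e^{ix} = \sum_{n=0}^{\infty} (ix)^n / n!
              |            = (1 - x^2/2! + x^4/4! - \cdots)
              |              + i\,(x - x^3/3! + x^5/5! - \cdots)
              |            = \cos x + i \sin x,
              | 
              | and setting x = \pi gives e^{i\pi} + 1 = 0.)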
             | 
             | In uni we built a CPU from scratch over the course of a few
              | weeks. First building a small ALU, widening its bus,
             | adding memory operations, etc. Beyond learning how things
             | work, it makes you wonder how inventing this without a
             | teacher to guide you must've been, and gives you an
             | appreciation for it. It also makes you extrapolate and
             | think about the things that haven't been invented or
             | discovered yet.
             | 
             | In theory LLMs could serve as a teacher guiding you as you
             | reinvent things. In practice, people just get the answer
             | and move on. A person with experience teaching, who sees
             | how you're walking the path and compares it to how they
             | walked theirs, will know when to give you an answer and
             | when to have you find it yourself.
             | 
             | One doesn't learn how to do lab-work in the library.
        
         | EthanHeilman wrote:
         | I recognize this problem, but I find in my own uses of ChatGPT
         | it actually allows me to overcome my laziness rather than
         | making it worse.
         | 
         | I'll have a problem that I want to work on but getting started
         | is difficult. Asking ChatGPT is almost frictionless, the next
         | thing I know I'm working on the project, 8 hours go by and I'm
         | done. When I get stuck on some annoying library installation,
         | ChatGPT solves if for me so I don't get frustrated. It allows
         | me to enter and maintain flow states better than anything else.
         | 
         | ChatGPT is a really good way of avoiding procrastination.
        
           | sebmellen wrote:
           | I've found the same. Claude outputs are usually not good
           | enough for what I'm looking for but the conversation is
           | enough to get me engaged and started on a project.
        
           | mwpmaybe wrote:
           | There's something magical about ChatGPT giving you a mostly-
           | wrong answer.
        
         | LeafItAlone wrote:
         | I think this is where my physical laziness benefits me. I'm
          | often too lazy to spend the time to fully describe the
          | problem to the LLM in written text and wrap it in a prompt
          | that will produce something, so I think it through first.
          | Usually I solve it myself or think of a better primary
          | source.
        
           | danielbln wrote:
           | I'll say that there is value in the rubber duck process, and
           | LLMs make wonderful rubber ducks.
        
         | chrisco255 wrote:
         | LLMs have taught me something that I sort of already knew from
         | Hitchhiker's Guide to the Galaxy: the key to problem solving is
         | asking the right question in the first place. It's not
         | dangerous that answers can be retrieved quickly. Indeed, people
         | had the same things to say about Google in the 90s or pocket
          | calculators in the 70s. To me, LLMs just speed up the process
          | by which I would have manually searched the internet in the
          | first place. The only way to get good at critical thinking
          | is to ask more questions.
        
       | wsintra2022 wrote:
        | Inevitably, the advancement of knowledge-generating tools will
        | have the same mental effect as having a contact list on your
        | phone. When I was a kid I knew at least 5 people's phone
        | numbers, maybe more. Even now I can recall 2 of them. How many
        | can you recall from your actual contact list?
        
       | golly_ned wrote:
       | I don't see how the "metacognitive laziness" (a term used by the
       | abstract, but not defined) follows from what they describe in the
       | abstract as the outcomes they observed. They specifically called
       | out no difference in post-task intrinsic motivation; doesn't that
       | imply that the ChatGPT users were no lazier after using ChatGPT
       | than they were before?
       | 
        | I'm also a skeptic of students using and relying on ChatGPT,
        | but I'm cautious about using this abstract to come to any
        | conclusions without seeing the full paper, especially given
        | that they're apparently using "metacognitive laziness" in a
        | specific technical way we don't know about if we haven't read
        | the paper.
        
       | tippytippytango wrote:
       | This is not a concern when you are responsible for real results.
       | If you aren't responsible for real results you can pass off the
       | good rhetoric of these models as an "answer". But when you need
       | results you realize most answers they give are just rhetoric.
       | They are still extremely valuable, but they can only help you
       | when you have done the work to get deep understanding of the
       | problem, incentivized by actually solving it.
        
       | jmmcd wrote:
       | In my recent programming exam (in an MSc in AI), I asked students
       | to reflect on how generative AI has changed their coding. Almost
       | all remarked that it's a great time-saver, but it makes them lazy
       | and worse at coding.
       | 
       | And yes indeed, their ability to answer basic questions about
       | coding on the same exam has drastically dropped versus last year.
        
         | dragonwriter wrote:
         | Is the problem the use of AI in coding, or using AI in coding
         | in a curriculum designed without that assumption? Because if AI
         | _is_ an effort-saver, than a curriculum that isn 't designed
         | with its use in mind will just result in the students doing
         | less work, in which case learning less is unsurprising but not
         | really an "AI makes you less knowledgeable" problem but an
         | "insufficiently challenging curriculum for the actual context"
         | problem.
        
       | lxe wrote:
       | Before pervasive GPS, it took me very little time to actually
       | learn and internalize a route. Now it takes a lot longer to
       | remember it when you're constantly guided. Same exact thing is
       | happening with guided reasoning we get with LLMs
        
         | numba888 wrote:
          | I have a different experience. It used to take me some time
          | to make a route and write down all the turns. Now getting
          | from location A to B is a lot easier: take a look at the
          | proposed route and make some corrections. Meanwhile I spend
          | time thinking about something else. So, GPS doesn't make me
          | stupid or forgetful. It's just a tool which makes me more
          | productive. The same is almost true for LLMs, except getting
          | the right answer isn't always easy or possible. But overall,
          | for coding small utilities, it's very helpful. For reasoning
          | models I still need to find the right tasks. Maybe more
          | complex utilities. Or the one I can't get from 4o yet: a
          | red-black tree with custom memory management and custom
          | 'pointers' in data objects (small integers). While custom
          | allocators are supported by std, the implementation still
          | keeps native pointers, which locks it in memory.
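          | 
          | A minimal sketch of the index-as-pointer idea (Python for
          | brevity, and just a plain BST insert - the red-black
          | rebalancing and allocator plumbing are omitted):
          | 
          |     NIL = -1
          | 
          |     class Arena:
          |         # Nodes live in flat arrays; "pointers" are small
          |         # integer indices, so the whole structure can be
          |         # relocated or serialized wholesale.
          |         def __init__(self):
          |             self.key, self.left, self.right = [], [], []
          |             self.root = NIL
          | 
          |         def new_node(self, key):
          |             self.key.append(key)
          |             self.left.append(NIL)
          |             self.right.append(NIL)
          |             return len(self.key) - 1  # index, not address
          | 
          |         def insert(self, key):
          |             node = self.new_node(key)
          |             if self.root == NIL:
          |                 self.root = node
          |                 return
          |             cur = self.root
          |             while True:
          |                 side = (self.left if key < self.key[cur]
          |                         else self.right)
          |                 if side[cur] == NIL:
          |                     side[cur] = node
          |                     return
          |                 cur = side[cur]
          | 
          |     t = Arena()
          |     for k in [5, 2, 8, 1]:
          |         t.insert(k)
          |     print(t.root, t.left, t.right)  # small ints only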
        
       | SpaceManNabs wrote:
       | As technology gets more impressive, we internalize less knowledge
       | ourselves.
       | 
        | There is a story from Plato (the Phaedrus) in which he laments
        | the invention of writing because people would no longer need
        | to memorize speeches and the like.
       | 
       | I think there is a level of balance. Writing gave us enough
       | efficiencies that the learned laziness made us overall more
       | effective.
       | 
        | The internet in 2011 made us a bit less effective. I am not
        | going to lie: I could get resources much faster, whereas
        | before I would have had to struggle on my own to solve a
        | problem. You internalize more from the struggle, but is it
        | worth the additional time every time?
       | 
       | I worry about current students learning through LLMs just like I
       | would worry about a student in 2012 graduating in physics when
       | such a student had constant access to wolfram alpha.
        
       | wisty wrote:
       | This is the old "siiiiiir why do we need to do this if we have
       | calculators"? It matters -
       | https://www.edweek.org/education/little-numbers-add-up-to-bi...
       | Students who know the facts will be better at math.
       | 
       | Even if the computer is doing all the thinking, it's still a
       | tool. Do you know what to ask it? Can you spot a mistake when it
       | messes up (or you messed up the input)? Can you simplify the
       | problem and figure out what the important parts of the problem
       | are? Do you even know to do any of that?
       | 
       | Sure, thinking machines will sometimes be autonomous and not need
       | you to touch them. But when that's the case, your job won't be to
       | just nod along to everything the computer says, you won't have a
       | job anymore and you will need to find a new job (probably one
       | where you need to prompt and interpret what the AI is doing).
       | 
       | And yes, there will be jobs where you just act as an actuator for
       | the thinking machine. Ask an Amazon warehouse worker how great a
       | job that is :/
       | 
       | Everything is the same as with calculators.
        
       ___________________________________________________________________
       (page generated 2025-01-21 23:01 UTC)