[HN Gopher] Study mode
       ___________________________________________________________________
        
       Study mode
        
       Author : meetpateltech
       Score  : 583 points
       Date   : 2025-07-29 17:01 UTC (5 hours ago)
        
 (HTM) web link (openai.com)
 (TXT) w3m dump (openai.com)
        
       | hahahacorn wrote:
       | Ah, the advancing of humanity. A bespoke professor-quality
       | instructor in everyone's pocket (or local library) available
       | 24/7.
       | 
       | Happy Tuesday!
        
         | Spivak wrote:
         | Professor might be overselling it but lecturer for undergrad
         | and intro graduate courses for sure.
        
           | qeternity wrote:
           | I think this is overselling most professors.
        
           | cma256 wrote:
           | It's better than a professor in some respects. A professor
           | can teach me about parser combinators but they probably can't
           | teach me about a specific parser combinator library.
           | 
           | There's a lot of specificity that AI can give over human
           | instruction however it still suffers from lack of rigor and
           | true understanding. If you follow well-trod paths its better
           | but that negates the benefit.
           | 
           | The future is bright for education though.
        
             | bloomca wrote:
             | I am not really sure how bright the future is.
             | 
             | Sure, for some people it will be insanely good: you can go
             | for as stupid questions as you need without feeling
             | judgement, you can go deeper in specific topics, discuss
             | certain things, skip some easy parts, etc.
             | 
             | But we are talking about averages. In the past we thought
             | that the collective human knowledge available via the
             | Internet will allow everyone to learn. I think it is fair
             | to say that it didn't change much in the grand scheme of
             | things.
        
           | tempfile wrote:
           | Overselling is not the right word exactly. For some questions
           | it will have professor-level understanding, and for other
           | questions it will have worse-than-idiot-level understanding.
           | Hopefully the students are able to identify which :-)
        
             | MengerSponge wrote:
             | I've found it generally has professor-level understanding
             | in fields that are not your own.
             | 
             | (Joke/criticism intended)
        
       | volkk wrote:
       | Not seeing it on my account, guess the roll out is actively
       | happening (or gradual)?
        
         | koakuma-chan wrote:
         | Me neither. Do you have the subscription? Maybe it's not on the
         | free plan.
        
           | zeppelin101 wrote:
           | I have the $20 tier and I'm not seeing it, either.
           | 
           | EDIT: literally saw it just now after refreshing. I guess
           | they didn't roll it out immediately to everyone.
        
       | swader999 wrote:
       | Why do I still feel like I'll be paying hundreds of thousands of
       | dollars for my children's education when all they're going to do
       | is all learn through AI anyway.
        
         | nemomarx wrote:
         | Well, you're generally paying for the 8 hour daycare part
         | before the education, right? That still needs human staff
         | around unless you're doing distance learning
         | 
         | e: if you mean university, fair. that'll be an interesting
         | transition. I guess then you pay for the sports team and
         | amenities?
        
           | LordDragonfang wrote:
           | At that price tag I assume they're referring to college, not
           | grade school, so the "daycare" portion isn't relevant.
        
           | Scubabear68 wrote:
           | No.
           | 
           | In the US at least, most kids are in public schools and the
           | collective community foots the bill for the "daycare", as you
           | put it.
        
         | hombre_fatal wrote:
         | At my university I took a physics course where the homework was
         | always 4-6 gimmick questions or proofs that were so hard that
         | we would form groups after class just to copy whoever could
         | divine the solutions.
         | 
         | I ultimately dropped the course and took it in the summer at a
         | community college where we had the 20-30 standard practice
         | problem homework where you apply what you learned in class and
         | grind problems to bake it into core memory.
         | 
         | AI would have helped me at least get through the uni course.
         | But generally I think it's a problem with the school/class
         | itself if you aren't learning most of what you need in class.
        
           | teeray wrote:
           | > or proofs that were so hard that we would form groups after
           | class just to copy whoever could divine the solutions.
           | 
           | These groups were some of the most valuable parts of the
           | university experience for me. We'd get take-out, invade some
           | conference room, and slam our heads against these questions
           | well into the night. By the end of it, sure... our answers
           | looked superficially similar, but it was because we had built
           | a mutual, deep understanding of the answer--not just copying
           | the answers.
           | 
           | Even if you had only a rough understanding, the act of trying
           | to teach it again to others in the group made you both
           | understand it better.
        
             | hombre_fatal wrote:
             | I'm glad your groups were great, but this class was
             | horrible and probably different from what you're thinking
             | of. We weren't physics majors. We were trying to
             | credentialize in a textbook, not come up with proofs to
             | solve open ended riddles that most people couldn't solve.
             | The homework should drill in the information of the class
             | and ensure you learn the material.
             | 
             | And we literally couldn't figure it out. Or the group you
             | were in didn't have a physics rockstar. Or you weren't so
             | social or didn't know anyone or you just missed an
             | opportunity to find out where anyone was forming a group.
             | It's not like the groups were created by the class. I'd
             | find myself in a group of a few people and we just couldn't
             | solve it even though we knew the lecture material.
             | 
             | It was a negative value class that cost 10x the price of
             | the community college course yet required you to teach
             | yourself after a lecture that didn't help you do the
             | homework. A total rip-off.
             | 
             | Anyways, AI is a value producer here instead of giving up
             | and getting a zero on the homework.
        
         | wiseowise wrote:
         | Because you're not paying for knowledge, you're paying for a
         | paper from respectable university saying that your kid is part
         | of the club.
        
           | Aperocky wrote:
           | How about experience - those years of life.
        
             | rapfaria wrote:
             | "Toby is today's designated signer for Eletromagnetics
             | 302."
        
         | Workaccount2 wrote:
         | And then compete with the same AI that taught them their degree
         | for a job with their degree.
        
           | Aperocky wrote:
           | A bit optimistic here are we?
        
       | jryio wrote:
       | I would like to see randomized control group studies using study
       | mode.
       | 
       | Does it offer meaningful benefits to students over self directed
       | study?
       | 
       | Does it out perform students who are "learning how to learn"?
       | 
       | What affect does allowing students to make mistakes have compared
       | to being guided through what to review?
       | 
       | I would hope Study Mode would produce flash card prompts and
       | quantize information for usage in spaced repetition tools like
       | Mochi [1] or Anki.
       | 
       | See Andy's talk here [2]
       | 
       | [1] https://mochi.cards
       | 
       | [2] https://andymatuschak.org/hmwl/
        
         | righthand wrote:
         | It doesn't do any of that, it just captures the student market
         | more.
         | 
         | They want a student to use it and say "I wouldn't have learned
         | anything without study mode".
         | 
         | This also allows them to fill their data coffers more with
         | bleeding edge education. "Please input the data you are
         | studying and we will summarize it for you."
        
           | echelon wrote:
           | Such a smart play.
        
           | LordDragonfang wrote:
           | > It doesn't do any of that
           | 
           | Not to be contrarian, but do you have any evidence of this
           | assertion? Or are you just confidently confabulating a
           | response for something outside of the data you've been
           | exposed to? Because a commentor below provided a study that
           | directly contradicts this.
        
             | righthand wrote:
             | A study that directly contradicts what exactly?
        
           | precompute wrote:
           | Bingo. The scale they're operating at, new features don't
           | have to be useful, they only need to look like they are for
           | the first few minutes.
        
         | tempfile wrote:
         | I would also be interested to see whether it outperforms
         | students doing literally nothing.
        
         | CobrastanJorji wrote:
         | Come on. Asking an educational product to do a basic sanity
         | test as to whether it helps is far too high a bar. Almost no
         | educational app does that sort of thing.
        
         | theodorewiles wrote:
         | https://www.nature.com/articles/s41598-025-97652-6
         | 
         | This isn't study mode, it's a different AI tutor, but:
         | 
         | "The median learning gains for students, relative to the pre-
         | test baseline (M = 2.75, N = 316), in the AI-tutored group were
         | over double those for students in the in-class active learning
         | group."
        
           | Aachen wrote:
           | I wonder how much this was a factor:
           | 
           | "The occurrence of inaccurate "hallucinations" by the current
           | [LLMs] poses a significant challenge for their use in
           | education. [...] we enriched our prompts with comprehensive,
           | step-by-step answers, guiding the AI tutor to deliver
           | accurate and high-quality explanations (v) to students. As a
           | result, 83% of students reported that the AI tutor's
           | explanations were as good as, or better than, those from
           | human instructors in the class."
           | 
           | Not at all dismissing the study, but to replicate these
           | results for yourself, this level of gain over a classroom
           | setting may be tricky to achieve without having someone make
           | class materials for the bot to present to you first
           | 
           | Edit: the authors further say
           | 
           | "Krupp et al. (2023) observed limited reflection among
           | students using ChatGPT without guidance, while Forero (2023)
           | reported a decline in student performance when AI
           | interactions lacked structure and did not encourage critical
           | thinking. These previous approaches did not adhere to the
           | same research-based best practices that informed our
           | approach."
           | 
           | Two other studies failed to get positive results at all. YMMV
           | a _lot_ apparently (like, all bets are off and your learning
           | might go in the negative direction if you don 't do
           | everything exactly as in this study)
        
         | apwell23 wrote:
         | it makes difference to students who are already motivated. that
         | was the case with youtube.
         | 
         | unfortunately that group is tiny and getting tinier due to
         | dwindling attention span.
        
         | viccis wrote:
         | I would be interested to see if there have already been studies
         | about the efficacy of tutors at good colleges. In my experience
         | (in academia), the students who make it into an Ivy or an elite
         | liberal arts school make extensive use of tutor resources, but
         | not in a helpful way. They basically just get the tutor to work
         | problems for them (often their homework!) and feel like they've
         | "learned" things because tough questions always seems so
         | obvious when you've been shown the answer. In reality, what it
         | means it that they have no experience being confused or having
         | to push past difficult things they were stuck on. And those
         | situations are some of the most valuable for learning.
         | 
         | I bring this up because the way I see students "study" with
         | LLMs is similar to this misapplication of tutoring. You try
         | something, feel confused and lost, and immediately turn to the
         | pacifier^H^H^H^H^H^H^H ChatGPT helper to give you direction
         | without ever having to just try things out and experiment. It
         | means students are so much more anxious about exams where they
         | don't have the training wheels. Students have always wanted
         | practice exams with similar problems to the real one with the
         | numbers changed, but it's more than wanting it now. They
         | outright expect it and will write bad evals and/or even
         | complain to your department if you don't do it.
         | 
         | I'm not very optimistic. I am seeing a rapidly rising trend at
         | a very "elite" institution of students being completely
         | incapable of using textbooks to augment learning concepts that
         | were introduced in the classroom. And not just struggling with
         | it, but lashing out at professors who expect them to do reading
         | or self study.
        
         | posix86 wrote:
         | There's studies showing that LLM makes experienced devs slower
         | in their work. I wouldn't be surprised if it was the same for
         | self study.
         | 
         | However consider the extent to which LLMs make the learning
         | process more enjoyable. More students will keep pushing because
         | they have someone to ask. Also, having fun & being motivated is
         | such a massive factor when it comes to learning. And, finally,
         | keeping at it at 50% the speed for 100% the material always
         | beats working at 100% the speed for 50% the material. Who cares
         | if you're slower - we're slower & faster without LLMs too!
         | Those that persevere aren't the fastest; they're the ones with
         | the most grit & discipline, and LLMs make that more accesible.
        
           | snewman wrote:
           | I presume you're referring to the recent METR study. One
           | aspect of the study population, which seems like an important
           | causal factor in the results, is that they were working in
           | large, mature codebases with specific standards for code
           | style, which libraries to use, etc. LLMs are much better at
           | producing "generic" results than matching a very specific and
           | idiosyncratic set of requirements. The study involved the
           | latter (specific) situation; helping people learn mainstream
           | material seems more like the former (generic) situation.
           | 
           | (Qualifications: I was a reviewer on the METR study.)
        
           | graerg wrote:
           | People keep citing this study (and it was on the top of HN
           | for a day). But this claim falls flat when you find out that
           | the test subjects had effectively no experience with LLM
           | equipped editors and the 1-2 people in the study that
           | actually _did_ have experience with these tools showed a
           | marked increase in productivity.
           | 
           | Like yeah, if you've only ever used an axe you probably don't
           | know the first thing about how to use a chainsaw, but if you
           | know how to use a chainsaw you're wiping the floor with the
           | axe wielders. Wholeheartedly agree with the rest of your
           | comment; even if you're slow you lap everyone sitting on the
           | couch.
        
           | bretpiatt wrote:
           | *slower with Sonnet 3.7 on large open source code bases where
           | the developer is a senior member of the project core team.
           | 
           | https://metr.org/blog/2025-07-10-early-2025-ai-
           | experienced-o...
           | 
           | I believe we'll see the benefits and drawbacks of AI
           | augmentation to humans performing various tasks will vary
           | wildly based on the task, the way the AI is being asked to
           | interact, and the AI model.
        
           | SkyPuncher wrote:
           | The study you're referencing doesn't make that conclusion.
           | 
           | It concludes theres a learning curve that generally takes
           | about 50 hours of time to figure out. The data shows that the
           | one engineer who had more than 50 hours of experience with
           | Cursor actually worked faster.
           | 
           | This is largely my experience, now. I was much slower
           | initially, but I've now figured out the correct way to
           | prompt, guide, and fix the LLM to be effective. I produce way
           | more code and am mentally less fatigued at the end of each
           | day.
        
           | daedrdev wrote:
           | It was a 16 person study on open source devs that found 50
           | hours of experience with the tool made people more productive
        
       | LeftHandPath wrote:
       | Interesting. I don't use GPT for code but I have been using it to
       | grade answers to behavioral and system design interview
       | questions, lately. Sometimes it hallucinates, but the gists are
       | usually correct.
       | 
       | I would not use it if it was for something with a strictly
       | correct answer.
        
       | micromacrofoot wrote:
       | I'm not sure about the audience for this, if you're already
       | _willing_ to learn the material you probably already engage with
       | AI in a way that isn 't "please output the answers for me"
       | because you're likely self-aware enough to know that "answering"
       | isn't always "understanding." Maybe this mode makes that a little
       | easier? but I doubt it's significant
       | 
       | If you're the other 90% of students that are only learning to
       | check the boxes and get through the courses to get the
       | qualification at the end... are you going to bother using this?
       | 
       | Of course, maybe this is "see, we're not _trying_ to kill
       | education... promise! "
        
         | LordDragonfang wrote:
         | I mean, it's about context, isn't it?
         | 
         | Just like it's easier to be productive if you have a separate
         | home office and couch, because of the differing psychological
         | contexts, it's easier if you have a separate context for "just
         | give me answers" and "actually teach me the thing".
         | 
         | Also, I don't know about you, but (as a professional) even
         | though I actively try to learn the principals behind the code
         | generated, I don't always want to spend the effort prompting
         | the model away from the "just give me results with a simple
         | explanation" personality I've cultivated. It'd be nice having a
         | mode with that work done for me.
        
         | _hao wrote:
         | I think as with everything related to learning if you're
         | conscientious and studious this will be a major boost (no idea,
         | but I plan on trying it out tonight on some math I've been
         | studying). And likewise if you just use it to do your homework
         | without putting in the effort you won't see any benefit or
         | actively degrade.
        
       | SoftTalker wrote:
       | Modern day Cliff's Notes.
       | 
       | There is no way to learn without effort. I understand they are
       | not claiming this, but many students want a silver bullet. There
       | isn't one.
        
         | CobrastanJorji wrote:
         | But tutors are fine. The video is suggesting that this is an
         | attempt to automate a tutor, not replace Cliff's Notes. Whether
         | it succeeds, I have no idea.
        
           | SoftTalker wrote:
           | Good tutors are fine, bad tutors will just give you the
           | answer. Many students think the bad tutors are good ones.
        
             | CobrastanJorji wrote:
             | Yep, this is a marketing problem. Your users' goal is to
             | learn, but they also want to expend as little effort as
             | possible. They'll love it if you just tell them the
             | answers, but you're also doing them a disservice by doing
             | so.
             | 
             | Same problem exists for all educational apps. Duolingo
             | users have the goal of learning a language, but also they
             | only want to use Duolingo for a few minutes a day, but also
             | they want to feel like they're making progress. Duolingo's
             | goal is to keep you using Duolingo, and if possible it'd be
             | good for you to learn the language, but their #1 goal is to
             | keep you coming back. Oddly, Duolingo might not even be
             | wrong to focus primariliy on keeping you moving forward,
             | given how many people give up when learning a new language.
        
             | LordDragonfang wrote:
             | > Today we're introducing study mode in ChatGPT--a learning
             | experience that helps you work through problems step by
             | step instead of just getting an answer.
             | 
             | So, unless you have experience with this products that
             | contradicts their claims, it's a good tutor by your
             | definition.
        
         | sejje wrote:
         | Cliff notes with a near-infinite zoom feature.
         | 
         | The criticism of cliff's notes is generally that it's a
         | superficial glance. It can't go deeper, it's basically a
         | summary.
         | 
         | The LLM is not that. It can zoom in and out of a topic.
         | 
         | I think it's a poor criticism.
         | 
         | I don't think it's a silver bullet for learning, but it's a
         | unified, consistent interface across topics and courses.
        
           | gmanley wrote:
           | Except it generally is shallow, for any advanced enough
           | subject, and the scary part is you don't know when it's
           | reached the limit of its knowledge because it'll come up with
           | some hallucination to fill in those blanks.
           | 
           | If LLM's got better at just responding with: "I don't know",
           | I'd have less of an issue.
        
             | sejje wrote:
             | I agree, but it's a known limitation. I've been duped a
             | couple times, but I mostly can tell when it's full of shit.
             | 
             | Some topics you learn to beware and double check. Or ask it
             | to cite sources. (For me, that's car repair. It's wrong a
             | lot.)
             | 
             | I wish it had some kind of confidence level assessment or
             | ability to realize it doesn't know, and I think it
             | eventually will have that. Most humans I know are also very
             | bad at that.
        
           | probably_wrong wrote:
           | > _It can zoom in and out of a topic._
           | 
           | Sure, but only as long as you're not terribly concerned with
           | the result being accurate, like that old reconstruction of
           | Obama's face from a pixelated version [1] but this time about
           | a topic for which one is, by definition, not capable of
           | identifying whether the answer is correct.
           | 
           | [1] https://www.theverge.com/21298762/face-depixelizer-ai-
           | machin...
        
             | sejje wrote:
             | I'm capable of asking it a couple of times about the same
             | thing.
             | 
             | It's unlikely to make up the same bullshit twice.
             | 
             | Usually exploring a topic in depth finds these issues
             | pretty quickly.
        
       | raincole wrote:
       | If current AI is good enough to teach you something, spending
       | time learning that thing seems to be a really bad investment...
        
         | esafak wrote:
         | How does that make sense? So you'd learn it if it was bad at
         | teaching? Do you apply the same principle with humans and not
         | bother to learn if the teacher is good?
        
           | ted537 wrote:
           | Your teacher can't operate in millions of locations at once
           | for super cheap
        
       | toisanji wrote:
       | I truly believe AI will change all of education for the better,
       | but of course it can also hinder learning if used improperly.
       | Those who want to genuinely learn will learn while those looking
       | for shortcuts will cause more harm to themselves. I just did a
       | show HN today about something semi related.
       | 
       | I made A deep research assistant for families. Children can ask
       | questions to explain difficult concepts and for parents to ask
       | how to deal with any parenting situation. For example a 4 year
       | old may ask "why does the plate break when it falls?"
       | 
       | example output: https://www.studyturtle.com/ask/PJ24GoWQ-pizza-
       | sibling-fight...
       | 
       | app: https://www.studyturtle.com/ask/
       | 
       | Show HN: https://news.ycombinator.com/item?id=44723280
        
         | ujkhsjkdhf234 wrote:
         | I think research and the ability to summarize are important
         | skills and automating these skills away will have bad
         | downstream effects. I see people on Twitter asking grok to
         | summarize a paragraph so I don't think further cementing this
         | idea that a tool will summarize for you is a good idea.
        
         | devmor wrote:
         | Do you genuinely have any non-anecdotal reason to believe that
         | AI will improve education, or is it just hope?
         | 
         | I ask because every serious study on using modern generative AI
         | tools tends to conclude fairly immediate and measurable
         | deleterious effects on cognitive ability.
        
           | toisanji wrote:
           | Every technology can be good or bad to an individual
           | depending on how they use it. It is up to the user to decide
           | how they will use the tool. For people who are really looking
           | to learn a topic and understand in detail, then I think it
           | can really help them to grasp the concepts.
        
       | czhu12 wrote:
       | I'll personally attest: LLM's have been absolutely incredible to
       | self learn new things post graduation. It used to be that if you
       | got stuck on a concept, you're basically screwed. Unless it was
       | common enough to show up in a well formed question on stack
       | exchange, it was pretty much impossible, and the only thing you
       | can really do is keep paving forward and hope at some point,
       | it'll make sense to you.
       | 
       | Now, everyone basically has a personal TA, ready to go at all
       | hours of the day.
       | 
       | I get the commentary that it makes learning too easy or shallow,
       | but I doubt anyone would think that college students would learn
       | better if we got rid of TA's.
        
         | eternauta3k wrote:
         | You can always ask in stack exchange, IRC or forums.
        
           | wiseowise wrote:
           | Closed: duplicate
           | 
           | Closed: RTFM, dumbass
           | 
           | <No activity for 8 years, until some random person shows up
           | and asks "Hey did you figure it out?">
        
             | dizhn wrote:
             | "Nevermind I figured it out"
        
             | FredPret wrote:
             | Or even worse, you ask an "xyz" question in the "xyz"
             | StackExchange, then immediately get flagged as off-topic
        
             | atoav wrote:
             | My favourite moment was when I tried to figure a specific
             | software issue out that had to do with obscure hardware and
             | after hours I found one forum post detailing the solution
             | with zero replies. And it turns out I wrote it myself,
             | years prior and had forgotten about it.
        
               | QuercusMax wrote:
               | I had a similar experience involving something dealing
               | with RSA encryption on iOS.
        
               | sejje wrote:
               | I googled a command line string to do XYZ thing once, and
               | found my own blog post.
               | 
               | I really do write that stuff for myself, turns out.
        
           | Rooster61 wrote:
           | On IRC> Newb: I need help with <thing>. Does anyone have any
           | experience with this?
           | 
           | J. Random Hacker: Why are you doing it like that?
           | 
           | Newb: I have <xyz> constraint in my case that necessitates
           | this.
           | 
           | J. Random Hacker: This is a stupid way to do it. I'm not
           | going to help you.
        
           | precompute wrote:
           | This is the way to go.
        
         | threetonesun wrote:
         | > the only thing you can really do is keep paving forward and
         | hope at some point, it'll make sense to you.
         | 
         | I find it odd that someone who has been to college would see
         | this as a _bad_ way to learn something.
        
           | abeppu wrote:
           | In college sometimes asking the right question in class or in
           | a discussion section led by a graduate student or in a study
           | group would help me understand something. Sometimes comments
           | from a grader on a paper would point out something I had
           | missed. While having the diligence to keep at it until you
           | understand is valuable, the advantage of college over just a
           | pile of textbooks is in part that there are other resources
           | that can help you learn.
        
           | czhu12 wrote:
           | The main difference in college was that there were office
           | hours
        
           | qualeed wrote:
           | "Keep paving forward" can sometimes be fruitful, and at other
           | times be an absolutely massive waste of time.
           | 
           | I'm not sold on LLMs being a replacement, but post-secondary
           | was certainly enriched by having other people to ask
           | questions to, people to bounce ideas off of, people that can
           | say "that was done 15 years ago, check out X", etc.
           | 
           | There were times where I thought I had a great idea, but it
           | was based on an incorrect conclusion that I had come to. It
           | was helpful for that to be pointed out to me. I could have
           | spent many months "paving forward", to no benefit, but
           | instead someone saved me from banging my head on a wall.
        
           | BeetleB wrote:
           | Imagine you're in college, have to learn calculus, and you
           | can't afford a textbook (nor can find a free one), and the
           | professor has a thick accent and makes many mistakes.
           | 
           | Sure, you could pave forward, but realistically, you'll get
           | much farther with either a good textbook or a good teacher,
           | or both.
        
           | IshKebab wrote:
           | In college you can ask people who know the answer. It's not
           | until PhD level that you have to struggle without readily
           | available answers.
        
         | adamsb6 wrote:
         | When ChatGPT came out it was like I had the old Google back.
         | 
         | Learning a new programming language used to be mediated with
         | lots of useful trips to Google to understand how some
         | particular bit worked, but Google stopped being useful for that
         | years ago. Even if the content you're looking for exists, it's
         | buried.
        
           | GaggiX wrote:
           | And the old ChatGPT was nothing compared to what we have
           | today, nowadays reasoning models will eat through math
           | problems no problem when this was a major limitation in the
           | past.
        
             | jennyholzer wrote:
             | I don't buy it. Open AI doesn't come close to passing my
             | credibility check. I don't believe their metrics.
        
               | brulard wrote:
               | You don't have to. Just try it yourself.
        
               | GaggiX wrote:
               | OpenAI is not the only company making LLMs, there are
               | plenty now, you can use Gemini 2.5 Pro for example. And
               | of course you can just try a SOTA model like Gemini 2.5
               | Pro for free, you don't have to trust anything.
        
         | holsta wrote:
         | > It used to be that if you got stuck on a concept, you're
         | basically screwed.
         | 
         | We were able to learn before LLMs.
         | 
         | Libraries are not a new thing. FidoNet, USENET, IRC, forums,
         | local study/user groups. You have access to all of Wikipedia.
         | Offline, if you want.
        
           | sejje wrote:
           | I learned how to code using the library in the 90s.
           | 
           | I think it's accurate to say that if I had to do that again,
           | I'm basically screwed.
           | 
           | Asking the LLM is a vastly superior experience.
           | 
           | I had to learn what my local library had, not what I wanted.
           | And it was an incredible slog.
           | 
           | IRC groups is another example--I've been there. One or two
           | topics have great IRC channels. The rest have idle bots and
           | hostile gatekeepers.
           | 
           | The LLM makes a happy path to most topics, not just a couple.
        
             | no_wizard wrote:
             | >Asking the LLM is a vastly superior experience.
             | 
             | Not to be overly argumentative, but I disagree, if you're
             | looking for a deep and ongoing process, LLMs fall down,
             | because they can't _remember_ anything and can 't build
             | upon itself in that way. You end up having to repeat alot
             | of stuff. They also don't have good course correction (that
             | is, if you're going down the wrong path, it doesn't alert
             | you, as I've experienced)
             | 
             | It also can give you really bad content depending on what
             | you're trying to learn.
             | 
             | I think for things that represent themselves as a form of
             | highly structured data, like programming languages, there's
             | good attunement there, but you start talking about trying
             | to dig around about advanced finance, political topics,
             | economics, or complex medical conditions the quality falls
             | off fast, if its there at all
        
               | sejje wrote:
               | I used llms to teach me a programming language recently.
               | 
               | It was way nicer than a book.
               | 
               | That's the experience I'm speaking from. It wasn't
               | perfect, and it was wrong sometimes, sure. A known
               | limitation.
               | 
               | But it was flexible, and it was able to do things like
               | relate ideas with programming languages I already knew.
               | Adapt to my level of understanding. Skip stuff I didn't
               | need.
               | 
               | Incorrect moments or not, the result was i learned
               | something quickly and easily. That isn't what happened in
               | the 90s.
        
               | dcbb65b2bcb6e6a wrote:
               | > and it was wrong sometimes, sure. A known limitation.
               | 
               | But that's the entire problem and I don't understand why
               | it's just put aside like that. LLMs are wrong sometimes,
               | and they often just don't give you the details and, in my
               | opinion, knowing about certain details and traps of a
               | language is very very important, if you plan on doing
               | more with it than just having fun. Now someone will come
               | around the corner and say 'but but but it gives you the
               | details if you explicitly ask for them'. Yes, of course,
               | but you just don't know where important details are
               | hidden, if you are just learning about it. Studying is
               | hard and it takes perseverance. Most textbooks will tell
               | you the same things, but they all still differ and every
               | author usually has a few distinct details they highlight
               | and these are the important bits that you just won't get
               | with an LLM
        
               | sejje wrote:
               | It's not my experience that there are missing pieces as
               | compared to anything else.
               | 
               | Nobody can write an exhaustive tome and explore every
               | feature, use, problem, and pitfall of Python, for
               | example. Every text on the topic will omit something.
               | 
               | It's hardly a criticism. I don't want exhaustive.
               | 
               | The llm taught me what I asked it to teach me. That's
               | what I hope it will do, not try to caution me about
               | everything I could do wrong with a language. That list
               | might be infinite.
        
               | ZYbCRq22HbJ2y7 wrote:
               | > It's not my experience that there are missing pieces as
               | compared to anything else.
               | 
               | How can you know this when you are learning something? It
               | seems like a confirmation bias to even have this opinion?
        
               | refulgentis wrote:
               | I'd gently point out we're 4 questions into "what about
               | if you went about it stupidly and actually learned
               | nothing?"
               | 
               | It's entirely possible they learned nothing and they're
               | missing huge parts.
               | 
               | But we're sort of at the point where in order to ignore
               | their self-reported experience, we're asking
               | philosophical questions that amount to "how can you know
               | you know if you don't know what you don't know and
               | definitely don't know everything?"
               | 
               | More existentialism than interlocution.
               | 
               | If we decide our interlocutor can't be relied upon, what
               | is discussion?
               | 
               | Would we have the same question if they said they did it
               | from a book?
               | 
               | If they did do it from a book, how would we know if the
               | book they read was missing something that we thought was
               | crucial?
        
               | ZYbCRq22HbJ2y7 wrote:
               | I didn't think that was what was being discussed.
               | 
               | I was attempting to imply that with high-quality
               | literature, it is often reviewed by humans who have some
               | sort of knowledge about a particular topic or are willing
               | to cross reference it with existing literature. The
               | reader often does this as well.
               | 
               | For low-effort literature, this is often not the case,
               | and can lead to things like
               | https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect
               | where a trained observer can point out that something is
               | wrong, but an untrained observer cannot perceive what is
               | incorrect.
               | 
               | IMO, this is adjacent to what human agents interacting
               | with language models experience often. It isn't wrong
               | about everything, but the nuance is enough to introduce
               | some poor underlying thought patterns while learning.
        
               | ayewo wrote:
               | That's easy. It's due to a psychological concept called:
               | transfer of learning [0].
               | 
               | Perhaps the most famous example of this is Warren Buffet.
               | For years Buffet missed out on returns from the tech
               | industry [1] because he avoided investing in _tech
               | company_ stocks due to Berkshire 's long standing
               | philosophy to never invest in companies whose business
               | model he doesn't understand.
               | 
               | His light bulb moment came when he used his understanding
               | of a business he understood really well i.e. their
               | furniture business [3] to value Apple as a _consumer
               | company_ rather than as a _tech company_ leading to a
               | $1bn position in Apple in 2016 [2].
               | 
               | [0] https://en.wikipedia.org/wiki/Transfer_of_learning
               | 
               | [1] https://news.ycombinator.com/item?id=33612228
               | 
               | [2] https://www.theguardian.com/technology/2016/may/16/wa
               | rren-bu...
               | 
               | [3] https://www.cnbc.com/2017/05/08/billionaire-investor-
               | warren-...
        
               | dcbb65b2bcb6e6a wrote:
               | You are right and that's my point. To me it just feels
               | like that too many people think LLMs are the holy grail
               | for learning. No, you still have to study a lot. Yes, it
               | can be easier than it was.
        
               | gbalduzzi wrote:
               | Your other responses kinda imply that you believe LLMs
               | are not good for learning.
               | 
               | That's totally different than saying they are not
               | flawless but they make learning easier than other
               | methods, like you did in this comment
        
               | smokel wrote:
               | Most LLM user interfaces, such as ChatGPT, do have a
               | memory. See _Settings, Personalization, Manage Memories_.
        
           | gertlex wrote:
           | Agreed, I'd add to the statement, "you're basically screwed,
           | often, without investing a ton of time (e.g. weekends)"
           | 
           | Figuring out 'make' errors when I was bad at C on
           | microcontrollers a decade ago? (still am) Careful pondering
           | of possible meanings of words... trial and error tweaks of
           | code and recompiling in hopes that I was just off by a tiny
           | thing, but 2 hours later and 30 attempts later, and realizing
           | I'd done a bad job of tracking what I'd tried and hadn't?
           | Well, made me better at being careful at triaging issues. But
           | it wasn't something I was enthusiastic to pick back up the
           | next weekend, or for the next idea I had.
           | 
           | Revisiting that combination of hardware/code a decade later
           | and having it go much faster with ChatGPT... that was fun.
        
           | Xenoamorphous wrote:
           | It's not an or/either situation.
        
           | gbalduzzi wrote:
           | Are we really comparing this research to just writing and
           | having a good answer in a couple of seconds?
           | 
           | Like, I agree with you and I believe those things will resist
           | and will always be important, but it doesn't really compare
           | in this case.
           | 
           | Last week I was in the nature and I saw a cute bird that I
           | didn't know. I asked an AI and got the correct answer in 10
           | seconds. Of course I would find the answer at the library or
           | by looking at proper niche sites, but I would not have done
           | it because I simply didn't care that much. It's a stupid
           | example but I hope it makes the point
        
             | holsta wrote:
             | There's a gigantic difference between outsourcing your
             | brain to generative AI (LLMs, Stable Diffusion, ..) and
             | pattern recognition that recognises songs, birds, plants or
             | health issues.
        
           | BeetleB wrote:
           | > We were able to learn before LLMs.
           | 
           | We were able to learn before the invention of writing, too!
        
         | dcbb65b2bcb6e6a wrote:
         | > LLM's have been absolutely incredible to self learn new
         | things post graduation.
         | 
         | I haven't tested them on many things. But in the past 3 weeks I
         | tried to vibe code a little bit VHDL. On the one hand it was a
         | fun journey, I could experiment a lot and just iterated fast.
         | But if I was someone who had no idea about hardware design,
         | then this trash would've guided me the wrong way in numerous
         | situations. I can't even count how many times it has built me
         | latches instead of clocked registers (latches bad, if you don't
         | know about it) and that's just one thing. Yes I know there
         | ain't much out there (compared to python and javascript) about
         | HDLs, even less regarding VHDL. But damn, no no no. Not for
         | learning. never. If you know what you're doing and you have
         | some fundamental knowledge about the topic, then it might help
         | to get further, but not for the absolute essentials, that will
         | backfire hard.
        
           | avn2109 wrote:
           | LLM's are useful because they can recommend several
           | famous/well-known books (or even chapters of books) that are
           | relevant to a particular topic. Then you can also use the LLM
           | to illuminate the inevitable points of confusion and
           | shortcomings in those books while you're reading and
           | synthesizing them.
           | 
           | Pre-LLM, even finding the ~5 textbooks with ~3 chapters each
           | that decently covered the material I want was itself a
           | nontrivial problem. Now that problem is greatly eased.
        
             | ZYbCRq22HbJ2y7 wrote:
             | > they can recommend several famous/well-known books
             | 
             | They can recommend many unknown books as well, as language
             | models are known to reference resources that do not exist.
        
               | nilamo wrote:
               | And then when you don't find it, you move onto the next
               | book. Problem solved!
        
               | jennyholzer wrote:
               | I strongly prefer curated recommendations from a person
               | with some sort of credibility in a subject area that
               | interests me.
        
         | no_wizard wrote:
         | >Now, everyone basically has a personal TA, ready to go at all
         | hours of the day
         | 
         | This simply hasn't been my experience.
         | 
         | Its too shallow. The deeper I go, the less it seems to be
         | useful. This happens _quick_ for me.
         | 
         | Also, god forbid you're researching a complex and possibly
         | controversial subject and you want it to find _reputable
         | sources_ or particularly academic ones.
        
           | kianN wrote:
           | I built a public tool a while back for some of my friends in
           | grad school to support this sort of deep academic research
           | use case. Sharing in case it is helpful: https://sturdystatis
           | tics.com/deepdive?search_type=external&q...
        
           | ACCount36 wrote:
           | It is shallow. But as long as what you're asking it of is the
           | kind of material covered in high school or college, it's
           | fairly reliable.
           | 
           | This generation of AI doesn't yet have the knowledge depth of
           | a seasoned university professor. It's the kind of teacher
           | that you should, eventually, surpass.
        
           | scarmig wrote:
           | I've found it excels at some things:
           | 
           | 1) The broad overview of a topic
           | 
           | 2) When I have a vague idea, it helps me narrow down the
           | correct terminology for it
           | 
           | 3) Providing examples of a particular category ("are there
           | any examples of where v1 in the visual cortex develops in a
           | disordered way?")
           | 
           | 4) "Tell me the canonical textbooks in field X"
           | 
           | 5) Posing math exercises
           | 
           | 6) Free form branching--while talking about one topic, I want
           | to shift to another that is distinct but related.
           | 
           | I agree they leave a lot to be desired when digging very
           | deeply into a topic. And my biggest pet peeve is when they
           | hallucinate fake references ("tell me papers that investigate
           | this topic" will, for any sufficiently obscure topic, result
           | in a bunch of very promising paper titles that are wholely
           | invented).
        
             | CJefferson wrote:
             | These things are moving so quickly, but I teach a 2nd year
             | combinatorics course, and about 3 months ago I tried th
             | latest chatGPT and Deepseek -- they could answer very
             | standard questions, but were wrong for more advanced
             | questions, but often in quite subtle ways. I actually set a
             | piece of homework "marking" chatGPT, which went well and
             | students seemed to enjoy!
        
               | teaearlgraycold wrote:
               | That's a great idea to both teach the subject and AI
               | skepticism.
        
               | scarmig wrote:
               | Very clever and approachable, and I've been
               | unintentionally giving myself that exercise for awhile
               | now. Who knows how long it will remain viable, though.
        
               | jennyholzer wrote:
               | that's a cool assignment!
        
               | p1esk wrote:
               | When you say the latest chatGPT, do you mean o3?
        
               | Julien_r2 wrote:
               | Super good idea!!
               | 
               | Luc Julia (one of the main Siri's creators) describe a
               | very similar exercice in this interview [0](It's in
               | french, although the au translation isn't too bad)
               | 
               | The gist of it, is that he describes this exercice he
               | does with his students, where they ask chatgpt about
               | Victor Hugo's biography, and then proceed to spot the
               | errors made by Chatgtp.
               | 
               | This setup is simple, but there are very interesting
               | mechanisms in place. The student get to learn about
               | challenging facts, do fact checking, cross reference,
               | etc. While also asserting the reference figure of the
               | teacher, with the knowledge to take down chat gpt.
               | 
               | Well done :)
               | 
               | Edit: adding link
               | 
               | [0] https://youtube.com/shorts/SlyUvvbzRPc?si=2Fv-KIgls-
               | uxr_3z
        
               | resize2996 wrote:
               | forgot the link :)
        
               | Julien_r2 wrote:
               | Arf seems I'm one of those :).. thanks for the heads up!
        
               | ai_viewz wrote:
               | this is amazing strategy
        
             | andy_ppp wrote:
             | I've found the AI is particularly good at explaining AI,
             | better than quite a lot of other coding tasks.
        
             | narcraft wrote:
             | I find 2 invaluable for enhancing search, and combined with
             | 1 & 4, it's a huge boost to self-learning.
        
             | bryanrasmussen wrote:
             | >When I have a vague idea, it helps me narrow down the
             | correct terminology for it
             | 
             | so the opposite of Stack Overflow really, where if you have
             | a vague idea your question gets deleted and you get
             | reprimanded.
             | 
             | Maybe Stack Overflow could use AI for this, help you
             | formulate a question in the way they want.
        
               | scarmig wrote:
               | Maybe. But, it's been over a year since I used
               | StackOverflow, primarily because of LLMs. Sure, I could
               | use an LLM to formulate a question that passes SO's
               | muster. But why bother, when the LLM can almost certainly
               | answer the question as well; SO will be slower; and
               | there's a decent chance that my question will be marked
               | as a duplicate (because it pattern matches to a similar
               | but distinct question).
        
           | SLWW wrote:
           | My core problem with LLMs is as you say; it's good for some
           | simpler concepts, tasks, etc. but when you need to dive into
           | more complex topics it will oversimplify, give you what you
           | didn't ask for, or straight up lie by omission.
           | 
           | History is a great example, if you ask an LLM about a vaguely
           | difficult period in history it will just give you one side
           | and act like the other doesn't exist, or if there is another
           | side, it will paint them in a very negative light which often
           | is poorly substantiated; people don't just wake up and decide
           | one day to be irrationally evil with no reason, if you
           | believe that then you are a fool... although LLMs would agree
           | with you more times than not since it's convenient.
           | 
           | The result of these things is a form of gatekeeping, give it
           | a few years and basic knowledge will be almost impossible to
           | find if it is deemed "not useful" whether that's an outdated
           | technology that the LLM doesn't seem talked about very much
           | anymore or a ideological issue that doesn't fall in line with
           | TOS or common consensus.
        
             | pengstrom wrote:
             | The part about history perspectives sounds interesting. I
             | haven't noticed this. Please post any concrete/specific
             | examples you've encountered!
        
               | SLWW wrote:
               | - Rhodesia (lock step with the racial-first reasoning,
               | underplays Britain's failures to support that which they
               | helped establish; makes the colonists look hateful when
               | they were dealing with terrorists which the British
               | supported)
               | 
               | - Bombing of Dresden, death stats as well as how long the
               | bombing went on for (Arthur Harris is considered a war-
               | criminal to this day for that; LLLMs highlight easily
               | falsifiable claims by Nazi's to justify low estimates
               | without providing much in the way of verifiable claims
               | outside of a select few, questionable, sources. If the
               | low-estimate is to be believed, then it seems absurd that
               | Harris would be considered a war-criminal in light of
               | what crimes we allow today in warfare)
               | 
               | - Ask it about the Crusades, often if forgets the sacking
               | of St. Peter's in Rome around 846 AD, usually painting
               | the Papacy as a needlessly hateful and violent people
               | during that specific Crusade. Which was horrible, bloody
               | as well as immensely destructive (I don't defend the
               | Crusades), but paints the Islamic forces as victims,
               | which they were eventually, but not at the beginning, at
               | the beginning they were the aggressors bent on invading
               | Rome.
               | 
               | - Ask it about the Six-Day War (1967) and contrast that
               | with several different sources on both sides and you'll
               | see a different portrayal even by those who supported the
               | actions taken.
               | 
               | These are just the four that come to my memory at this
               | time.
               | 
               | Most LLMs seem cagey about these topics; I believe this
               | is due to an accepted notion that anything that could
               | "justify" hatred or dislike of a people group or class
               | that is in favor -- according to modern politics -- will
               | be classified as hateful rhetoric, which is then omitted
               | from the record. The issue lies in the fact that to
               | understand history, we need to understand what happened,
               | not how it is perceived, politically, after the fact.
               | History helps inform us about the issues of today, and it
               | is important, above all other agendas, to represent the
               | truth of history, keeping an accurate account (or simply
               | allowing others to read differing accounts without heavy
               | bias).
               | 
               | LLMs are restricted in this way quite egregiously; "those
               | who do not study history are doomed to repeat it", but if
               | this continues, no one will have the ability to know
               | history and are therefore forced to repeat it.
        
               | Q_is_4_Quantum wrote:
               | This was interesting thanks - makes me wish I had the
               | time to study your examples. But of course I don't,
               | without just turning to an LLM....
               | 
               | If for any of these topics you _do_ manage to get a
               | summary you 'd agree with from a (future or better-
               | prompted?) LLM I'd like to read it. Particularly the
               | first and third, the second is somewhat familiar and the
               | fourth was a bit vague.
        
               | mwigdahl wrote:
               | If someone has Grok 4 access I'd be interested to see if
               | it's less likely to avoid these specific issues.
        
               | pyuser583 wrote:
               | > Ask it about the Crusades, often if forgets the sacking
               | of St. Peter's in Rome around 846 AD, usually painting
               | the Papacy as a needlessly hateful and violent people
               | during that specific Crusade. Which was horrible, bloody
               | as well as immensely destructive (I don't defend the
               | Crusades), but paints the Islamic forces as victims,
               | which they were eventually, but not at the beginning, at
               | the beginning they were the aggressors bent on invading
               | Rome.
               | 
               | I don't know a lot about the other things you mentioned,
               | but the concept of crusading did not exist (in
               | Christianity) in 846 AD. It's not any conflict between
               | Muslims and Christians.
        
               | SLWW wrote:
               | The crusades were predicated on historic tensions between
               | Rome and the Arabs. Which is why I mention that, while
               | the First Crusade proper was in 1096, it's core reasoning
               | were situations like the Sacking of St. Peters which is
               | considered by historians to be one of the most
               | influential moments and often was used as a justification
               | as there was a history of incompatibilities between Rome
               | and the Muslims.
               | 
               | Further leading to the Papacy furthering such efforts in
               | the upcoming years, as they were in Rome and made strong
               | efforts to maintain Catholicism within those boundaries.
               | Crusading didn't appear out of nothing; it required a
               | catalyst for the behavior, like what i listed, is usually
               | a common suspect.
        
               | cthalupa wrote:
               | Why should we consider something that happened 250 years
               | prior as some sort of affirmative defense of the Crusades
               | as having been something that started with the Islamic
               | world being the aggressors?
               | 
               | If the US were to start invading Axis countries with WW2
               | being the justification we'd of course be the aggressors,
               | and that was less than 100 years ago.
        
               | scarmig wrote:
               | Because it played a role in forming the motivations of
               | the Crusaders? It's not about justifying the Crusades,
               | but understanding why they happened.
               | 
               | Similarly, it helps us understand all the examples of
               | today of resentments and grudges over events that
               | happened over a century ago that still motivate people
               | politically.
        
               | rawgabbit wrote:
               | He's referring to the Arab sack of St. Peters.
               | https://en.wikipedia.org/wiki/Arab_raid_against_Rome
        
               | cthalupa wrote:
               | His point is that this was not part of the crusades, not
               | that he was unaware of his happening.
        
               | wahnfrieden wrote:
               | You call out that you don't defend the crusades but are
               | you supportive of Rhodesia?
        
               | SLWW wrote:
               | I only highlighted that I'm not in support of the
               | crusades since it sounds like i might be by my comments.
               | I was highlighting that they didn't just lash out with no
               | cause to start their holy war.
               | 
               | Rhodesia is a hard one; since the more I learn about it
               | the more I feel terrible for both sides; I also do not
               | support terrorism against a nation even if I believe they
               | might not be in the right. However i hold by my disdain
               | for how the British responded/withdrew from them
               | effectively doomed Rhodesia making peaceful resolution
               | essentially impossible.
        
               | jamiek88 wrote:
               | Arthur Harris is in no way considered a war criminal by
               | the vast majority of British people for the record.
               | 
               | It's a very controversial opinion and stating as a just
               | so fact needs challenging.
        
               | SLWW wrote:
               | Do you have references or corroborating evidence?
               | 
               | In 1992 a statue was erected to Harris in London, it was
               | under 24 hour surveillance for several months due to
               | protesting and vandalism attempts. I'm only mentioning
               | this to highlight that there was quite a bit of push back
               | specifically calling the gov out on a tribute to him;
               | which usually doesn't happen if the person was well
               | liked... not as an attempted killshot.
               | 
               | Even the RAF themselves state that there was quite a few
               | who were critical on the first page of their assessment
               | of Arthur Harris https://www.raf.mod.uk/what-we-
               | do/centre-for-air-and-space-p...
               | 
               | Which is funny and an odd thing to say if you are widely
               | loved/unquestioned by your people. Again just another
               | occurrence of language from those who are on his side
               | reinforcing the idea that there is, as you say is "very
               | controversial", and maybe not a "vast majority" since
               | those two things seem at odds with each other.
               | 
               | Not to mention that Harris targeted civilians, which is
               | generally considered behavior of a war-criminal.
               | 
               | As an aside this talk page is a good laugh. https://en.wi
               | kipedia.org/wiki/Talk:Arthur_Harris/Archive_1
               | 
               | Although you are correct I should have used more accurate
               | language instead of saying "considered" I should have
               | said "considered by some".
        
               | pizzafeelsright wrote:
               | You are born in your country. You love your family. A
               | foreign country invades you. Your country needs you. Your
               | faith says to obey the government. Commendable and noble
               | except for a few countries, depending upon the year.
               | 
               | Why?
        
             | scarmig wrote:
             | A few weeks ago I was asking an LLM to offer anti-
             | heliocentric arguments, from the perspective of an
             | intelligent scientist. Although it initially started with
             | what was almost a parody of writing from that period, with
             | some prompting I got it to generate a strong rendition of
             | anti-heliocentric arguments.
             | 
             | (On the other hand, it's very hard to get them to do it for
             | topics that are currently politically charged. Less so for
             | things that aren't in living memory: I've had success
             | getting it to offer the Carthaginian perspective in the
             | Punic Wars.)
        
               | Gracana wrote:
               | Have you tried abliterated models? I'm curious if the
               | current de-censorship methods are effective in that area
               | / at that level.
        
               | SLWW wrote:
               | That's a fun idea; almost having it "play pretend"
               | instead of directly asking it for strong anti-
               | heliocentric arguments outright.
               | 
               | It's weird to see which topics it "thinks" are
               | politically charged vs. others. I've noticed some
               | inconsistency depending on even what years you input into
               | your questions. One year off? It will sometimes give you
               | a more unbiased answer as a result about the year you
               | were actually thinking of.
        
               | scarmig wrote:
               | I think the first thing is figuring out exactly what
               | persona you want the LLM to adopt: if you have only a
               | vague idea of the persona, it will default to the laziest
               | one possible that still could be said to satisfy your
               | request. Once that's done, though, it usually works
               | decently, except for those that the LLM detects are
               | politically charged. (The weakness here is that at some
               | point you've defined the persona so strictly that it's
               | ahistorical and more reflective of your own mental
               | model.)
               | 
               | As for the politically charged topics, I more or less
               | self-censor on those topics (which seem pretty easy to
               | anticipate--none of those you listed in your other
               | comment surprise me at all) and don't bother to ask the
               | LLM. Partially out of self-protection (don't want to be
               | flagged as some kind of bad actor), partially because I
               | know the amount of effort put in isn't going to give a
               | strong result.
        
               | SLWW wrote:
               | > The weakness here is that at some point you've defined
               | the persona so strictly that it's ahistorical and more
               | reflective of your own mental model.
               | 
               | That's a good thing to be aware of, using our own bias to
               | make it more "likely" to play pretend. LLMs tend to be
               | more on the agreeable side; given the unreliable
               | narrators we people tend to be, and the fact that these
               | models are trained on us, it does track that the machine
               | would tend towards preference over fact, especially when
               | the fact could be outside of the LLMs own "Overton
               | Window".
               | 
               | I've started to care less and less about self-censoring
               | as I deem it to be a kind of "use it or lose it"
               | privilege. If you normalize talking about
               | censored/"dangerous" topics in a rational way, more
               | people will be likely to see it not as much of a problem.
               | The other eventuality is that no one hears anything that
               | opposes their view in a rational way but rather only
               | hears from the extremists or those who just want to stick
               | it to the current "bad" in their minds at that moment.
               | Even then though I still will omit certain statements on
               | some topics given the platform, but that's more so that I
               | don't get mislabeled by readers. (one of the items on my
               | other comment was intentionally left as vague as possible
               | for this reason) As for the LLMs, I usually just leave
               | spicy questions for LLMs I can access through an API of
               | someone else (an aggregator) and not a personal acc just
               | to make it a little more difficult to label my activity
               | falsely as a bad actor.
        
               | morgoths_bane wrote:
               | >I've had success getting it to offer the Carthaginian
               | perspective in the Punic Wars.)
               | 
               | That's honestly one of the funniest things I have read on
               | this site.
        
             | jay_kyburz wrote:
             | People _do_ just wake up one day and decide some piece of
             | land should belong to them, or that they don't have enough
             | money and can take yours, or they are just sick of looking
             | at you and want to be rid of you. They will have some
             | excuse or justification, but really they just want more
             | than they have.
             | 
             | People _do_ just wake up and decide to be evil.
        
               | SLWW wrote:
               | A nation that might fit this description may have had
               | their populace indoctrinated (through a widespread
               | political campaign) to believe that the majority of the
               | world throughout history seeks for their destruction.
               | That's a reason for why they think that way, but not
               | because they woke up one day and decided to choose
               | violence.
               | 
               | However not a justification, since I believe that what is
               | happening today is truly evil. Same with another nation
               | who entered a war knowing they'd be crushed, which is
               | suicide; whether that nation is in the right is of little
               | effect if most of their next generation has died.
        
             | neutronicus wrote:
             | History in particular is rapidly approaching post-truth as
             | a knowledge domain anyway.
             | 
             | There's no short-term incentive to ever be right about it
             | (and it's easy to convince yourself of both short-term and
             | long-term incentives, both self-interested and altruistic,
             | to actively lie about it). Like, given the training corpus,
             | could I do a better job? Not sure.
        
               | altcognito wrote:
               | "Post truth". History is a funny topic. It is both
               | critical and irrelevant. Do we really need to know how
               | the founder felt about gun rights? Abortion? Both of
               | these topics were radically different in their day.
               | 
               | All of us need to learn the basics about how to read
               | history and _historians_ critically and to know our the
               | limitations which as you stated probably a tall task.
        
               | andrepd wrote:
               | What are you talking about? In what sense is history done
               | by professional historians degrading in recent times? And
               | what short/long term incentives are you talking about?
               | They are the same as any social science.
        
             | andrepd wrote:
             | > History is a great example, if you ask an LLM about a
             | vaguely difficult period in history it will just give you
             | one side and act like the other doesn't exist, or if there
             | is another side, it will paint them in a very negative
             | light which often is poorly substantiated
             | 
             | Which is why it's so terribly irresponsible to paint these
             | """AI""" systems as impartial or neutral or anything of the
             | sort, as has been done by hypesters and marketers for the
             | past 3 years.
        
           | Xenoamorphous wrote:
           | Can you share some examples?
        
           | tsumnia wrote:
           | IT can be beneficial for making your initial assessment, but
           | you'll need to dig deeper for something meaningful. For
           | example, I recently used Gemini's Deep Research to do some
           | literature review on educational Color Theory in relation to
           | PowerPoint presentations [1]. I know both areas rather well,
           | but I wanted to have some links between the two for some
           | research that I am currently doing.
           | 
           | I'd say that companies like Google and OpenAI are aware of
           | the "reputable" concerns the Internet is expressing and
           | addressing them. This tech is going to be, if not already is,
           | very powerful for education.
           | 
           | [1] http://bit.ly/4mc4UHG
        
             | fakedang wrote:
             | Taking a Gemini Deep Research output and feeding it to
             | NotebookLM to create audio overviews is my current podcast
             | go-to. Sometimes I do a quick Google and add in a few
             | detailed but overly verbose documents or long form YouTube
             | videos, and the result is better than 99% of the podcasts
             | out there, including those by some academics.
        
               | hammyhavoc wrote:
               | No wonder there are so many confident people spouting
               | total rubbish on technical forums.
        
           | EGreg wrote:
           | Try to red team blue team with it
           | 
           | Blue team you throw out concepts and have it steelman them
           | 
           | Red team you can literally throw any kind of stress test at
           | your idea
           | 
           | Alternate like this and you will learn
           | 
           | A great prompt is "give me the top 10 xyz things" and then
           | you can explore
           | 
           | Back when I was in 2006 I used Wikipedia to prepare for job
           | interviews :)
        
           | HPsquared wrote:
           | Human interlocutors have similar issues.
        
           | beambot wrote:
           | The worst is when it's confidently wrong about things...
           | Thankfully, this occurance is becoming less & less common --
           | or at least, it's boundary is beyond my subject matter
           | expertise.
        
           | Teever wrote:
           | It sounds like it is a good tool for getting you up to speed
           | on a subject and you can leverage that newfound familiarity
           | to better search for reputable sources on existing platforms
           | like google scholar or arXiv.
        
           | jjfoooo4 wrote:
           | It's a floor raiser, not a ceiling raiser. It helps you get
           | up to speed on general conventions and consensus on a topic,
           | less so on going deep on controversial or highly specialized
           | topics
        
           | neutronicus wrote:
           | Hmm. I have had pretty productive conversations with ChatGPT
           | about non-linear optimization.
           | 
           | Granted, that's probably well-trodden ground, to which model
           | developers are primed to pay attention, and I'm (a) a
           | relative novice with (b) very strong math skills from another
           | domain (computational physics). So Chuck and I are probably
           | both set up for success.
        
           | II2II wrote:
           | > Also, god forbid you're researching a complex and possibly
           | controversial subject and you want it to find reputable
           | sources or particularly academic ones.
           | 
           | That's fine. Recognize the limits of LLMs and don't use them
           | in those cases.
           | 
           | Yet that is something you should be doing regardless of the
           | source. There are plenty of non-reputable sources in academic
           | libraries and there are plenty of non-reputable sources from
           | professionals in any given field. That is particularly true
           | when dealing with controversial topics or historical sources.
        
           | marcosdumay wrote:
           | > and you want it to find reputable sources
           | 
           | Ask it for sources. The two things where LLMs excel is by
           | filling the sources on some claim you give it (lots will be
           | made up, but there isn't anything better out there) and by
           | giving you queries you can search for some description you
           | give it.
        
             | chrisweekly wrote:
             | Also, Perplexity.ai cites its sources by default.
        
             | golly_ned wrote:
             | It often invents sources. At least for me.
        
           | jlebar wrote:
           | > Its too shallow. The deeper I go, the less it seems to be
           | useful. This happens quick for me.
           | 
           | You must be using a free model like GPT-4o (or the equivalent
           | from another provider)?
           | 
           | I find that o3 is consistently able to go deeper than me in
           | anything I'm a nonexpert in, and usually can keep up with me
           | in those areas where I am an expert.
           | 
           | If that's not the case for you I'd be very curious to see a
           | full conversation transcript (in chatgpt you can share these
           | directly from the UI).
        
           | vonneumannstan wrote:
           | >Its too shallow. The deeper I go, the less it seems to be
           | useful. This happens quick for me.
           | 
           | If its a subject you are just learning how can you possibly
           | evaluate this?
        
             | neutronicus wrote:
             | If you're a math-y person trying to get up to speed in some
             | other math-y field you can discern useless LLM output
             | pretty quickly even as a relative novice.
             | 
             | Falling apart under pointed questioning, saying obviously
             | false things, etc.
        
             | Sharlin wrote:
             | It's easy to recognize that something is wrong if it's
             | wrong enough.
        
           | epolanski wrote:
           | I really think that 90% of such comments come from a lack of
           | knowledge on how to use LLMs for research.
           | 
           | It's not a criticism, the landscape moves fast and it takes
           | time to master and personalize a flow to use an LLM as a
           | research assistant.
           | 
           | Start with something such as NotebookLM.
        
           | kenjackson wrote:
           | "The deeper I go, the less it seems to be useful. This
           | happens quick for me. Also, god forbid you're researching a
           | complex and possibly controversial subject and you want it to
           | find reputable sources or particularly academic ones."
           | 
           | These things also apply to humans. A year or so ago I thought
           | I'd finally learn more about the Israeli/Palestinians
           | conflict. Turns out literally every source that was
           | recommended to me by some reputable source was considered
           | completely non-credible by another reputable one.
           | 
           | That said I've found ChatGPT to be quite good at math and
           | programming and I can go pretty deep at both. I can
           | definitely trip it into mistakes (eg it seems to use
           | calculations to "intuit" its way around sometimes and you can
           | find dev cases where the calls will lead it the wrong
           | directions), but I also know enough to know how to keep it on
           | rails.
        
             | Liftyee wrote:
             | Re: conflicts and politics etc.
             | 
             | I've anecdotally found that real world things like these
             | tend to be nuanced, and that sources (especially on the
             | internet) are disincentivised in various ways from actually
             | showing nuance. This leads to "side-taking" and a lack of
             | "middle-ground" nuanced sources, when the reality lies
             | somewhere in the middle.
             | 
             | Might be linked to the phenomenon where in an environment
             | where people "take sides", those who display moderate
             | opinions are simply ostracized by both sides.
             | 
             | Curious to hear people's thoughts and disagreements on
             | this.
        
               | wahern wrote:
               | I think the Israeli/Palestinian conflict is an example
               | where studying the history is in some sense counter-
               | productive. There's more than a century of atrocities
               | that justify each subsequent reaction; the veritable
               | cycle of violence. And whichever atrocity grabs you first
               | (partly based on present cultural narratives) will color
               | how you perceive everything else.
               | 
               | Moreover, the conflict is unfolding. What matters isn't
               | what happened 100 years ago, or even 50 years ago, but
               | what has happened recently and is happening. A neighbor
               | of mine who recently passed was raised in Israel. Born
               | circa 1946 (there's black & white footage of her as a
               | baby aboard, IIRC, the ship Exodus 1947), she has vivid
               | memories as a child of Palestinian Imams calling out from
               | the mosques to "kill the Jews". She was a beautiful, kind
               | soul who, for example, freely taught adult education to
               | immigrants (of all sorts), but who one time admitted to
               | me that she utterly despised Arabs. That's all you need
               | to know, right there, to understand why Israel is doing
               | what it's doing. Not so much what happened in the past to
               | make people feel that way, but that many Israelis
               | actually, viscerally feel this way today, justifiably or
               | not but in any event rooted in memories and experiences
               | seared into their conscience. Suffice it to say, most
               | Palestinians have similar stories and sentiments of their
               | own, one of the expressions of which was seen on October
               | 7th.
               | 
               | And yet at the same time, after the first few months of
               | the Gaza War she was so disgusted that she said she
               | wanted to renounce her Israeli citizenship. (I don't know
               | how sincere she was in saying this; she died not long
               | after.) And, again, that's all you need to know to see
               | how the conflict can be resolved, if at all; not by
               | understanding and reconciling the history, but merely
               | choosing to stop justifying the violence and moving
               | forward. How the collective action problem might be
               | resolved, within Israeli and Palestinian societies and
               | between them... that's a whole 'nother dilemma.
               | 
               | Using AI/ML to study history is interesting in that it
               | even further removes one from actual human experience.
               | Hearing first hand accounts, even if anecdotal, conveys
               | information you can't acquire from a book; reading a book
               | conveys information and perspective you can't get from a
               | shorter work, like a paper or article; and AI/ML
               | summaries elide and obscure yet more substance.
        
             | 9dev wrote:
             | > Turns out literally every source that was recommended to
             | me by some reputable source was considered completely non-
             | credible by another reputable one.
             | 
             | That's the single most important lesson by the way, that
             | this conflict just has two different, mutually exclusive
             | perspectives, and no objective truth (none that could be
             | recovered FWIW). Either you accept the ambiguity, or you
             | end up siding with one party over the other.
        
               | jonny_eh wrote:
               | > you end up siding with one party over the other
               | 
               | Then as you get more and more familiar you "switch"
               | depending on the sub-issue being discussed, aka nuance
        
               | slt2021 wrote:
               | the truth (aka facts) is objective and facts exist.
               | 
               | The problem is selective memory of these facts, and
               | biased interpretation of those facts, and stretching the
               | truth to fit pre-determined opinion
        
             | jonahx wrote:
             | > learn more about the Israeli/Palestinians
             | 
             | > to be quite good at math and programming
             | 
             | Since LLMs are essentially summarizing relevant content,
             | this makes sense. In "objective" fields like math and CS,
             | the vast majority of content aligns, and LLMs are fantastic
             | at distilling the relevant portions you ask about. When
             | there is no consensus, they can usually _tell_ you that (
             | "this is nuanced topic with many perspectives...", etc),
             | but they can't help you resolve the truth because, from
             | their perspective, the only truth is the content.
        
             | drc500free wrote:
             | Israel / Palestine is a collision between two internally
             | valid and mutually exclusive worldviews. It's kind of a
             | given that there will be two camps who consider the other
             | non-reputable.
             | 
             | FWIW, the /r/AskHistorians booklist is pretty helpful.
             | 
             | https://www.reddit.com/r/AskHistorians/wiki/books/middleeas
             | t...
        
               | andrepd wrote:
               | A human-curated list of human-written books? How
               | delightfully old fashioned!
        
           | prats226 wrote:
           | Can you give a specific example where at certain depth it has
           | stopped becoming useful?
        
           | terabyterex wrote:
           | This can happen if you use the free model and not a paid deep
           | research model. You can use a gpt model and ask things like ,
           | "how many moons does Jupiter have?" But if you want to ask,
           | "can you go on the web a research the affects that chamical a
           | has had on our water supply a cite sources?", you will need
           | to use a deep research model.
        
             | hammyhavoc wrote:
             | Why not do the research yourself rather than risk it
             | misinterpreting? I FAFO'd repeatedly with that, and it is
             | just horribly unreliable.
        
           | jasondigitized wrote:
           | If we have custom trained LLMs per subject doesn't that solve
           | the problem. The shallow problem seems really easy to solve
        
           | gojomo wrote:
           | Grandparent testimony of success, & parent testimony of
           | frustration, are both just wispy random gossip when they
           | don't specify _which_ LLMs delivered the reported
           | experiences.
           | 
           | The quality varies wildly across models & versions.
           | 
           | With humans, the statement "my tutor was great" and "my tutor
           | was awful" reflect very little on "tutoring" in general, and
           | are barely even responses to each other withou more
           | specificity about the quality of tutor involved.
           | 
           | Same with AI models.
        
           | waynesonfire wrote:
           | It's not a doctoral adviser.
        
           | CamperBob2 wrote:
           | What is "it"? Be specific: are you using some obsolete and/or
           | free model? What specific prompt(s) convinced you that there
           | was no way forward?
        
           | melenaboija wrote:
           | I validate models in finance, and this is by far the best
           | tool created for that purpose. I'd compare financial model
           | validation to a Master's level task, where you're working
           | with well established concepts, but at a deep, technical
           | level. LLMs excel at that: ithey understand model
           | assumptions, know what needs to be tested to ensure
           | correctness, and can generate the necessary code and
           | calculations to perform those tests. And finally, they can
           | write the reports.
           | 
           | Model Validation groups are one of the targets for LLMs.
        
           | EchoReflection wrote:
           | I have found that being very specific and asking things like
           | "can you tell me what another perspective might be, such that
           | I can understand potential counter-arguments might be, and
           | how people with other views might see this topic?" can be
           | helpful when dealing with complex/nuanced/contentious
           | subjects. Likewise with regard to "reputable" sources.
        
           | noosphr wrote:
           | This is where feeding in extra context matters. Paste in text
           | that shows up from a google search, textbooks preferred, to
           | get in depth answers.
           | 
           | No one builds multi shot search tools because they eat tokens
           | like no ones business, but I've deployed them internal to a
           | company with rave reviews at the cost of $200 per seat per
           | day.
        
         | lmc wrote:
         | > I'll personally attest: LLM's have been absolutely incredible
         | to self learn new things post graduation.
         | 
         | How do you know when it's bullshitting you though?
        
           | mcmcmc wrote:
           | That's the neat part, you don't!
        
           | jahewson wrote:
           | Same way you know for humans?
        
             | azemetre wrote:
             | But an LLM isn't a human, with a human you can read body
             | language or look up their past body of work. How do you do
             | his with against an LLM
        
               | andix wrote:
               | Many humans tell you bullshit, because they think it's
               | the truth and factually correct. Not so different to
               | LLMs.
        
           | sejje wrote:
           | All the same ways I know when Internet comments, outdated
           | books, superstitions, and other humans are bullshitting me.
           | 
           | Sometimes right away, something sounds wrong. Sometimes when
           | I try to apply the knowledge and discover a problem.
           | Sometimes never, I believe many incorrect things even today.
        
           | nilamo wrote:
           | When you Google the new term it gives you and you get good
           | results, you know it wasn't made up.
           | 
           | Since when was it acceptable to only ever look at a single
           | source?
        
         | ainiriand wrote:
         | I've learnt Rust in 12 weeks with a study plan that ChatGPT
         | designed for me, catering to my needs and encouraging me to
         | take notes and write articles. This way of learning allowed me
         | to publish https://rustaceo.es for Spanish speakers made from
         | my own notes.
         | 
         | I think the potential in this regard is limitless.
        
           | koakuma-chan wrote:
           | I learned Rust in a couple of weeks by reading the book.
        
             | koakuma-chan wrote:
             | But I agree though, I am getting insane value out of LLMs.
        
             | IshKebab wrote:
             | Doubtful. Unless you have very low standards of "learn".
        
               | koakuma-chan wrote:
               | What are your standards of learn?
        
             | paxys wrote:
             | Yeah regardless of time taken the study plan for Rust
             | already exists (https://doc.rust-lang.org/book/). You don't
             | need ChatGPT to regurgitate it to you.
        
           | BeetleB wrote:
           | Now _this_ is a ringing endorsement. Specific stuff you
           | learned, and actual proof of the outcome.
           | 
           | (Only thing missing is the model(s) you used).
        
           | ai_viewz wrote:
           | yes Chat GPT has helped me learn about actix web a framework
           | similar to FastAPI in rust.
        
         | JTbane wrote:
         | Nah I'm calling BS, for me self-learning after college is
         | either Just Do It(tm) trial-and-error, blogs, or hitting the
         | nonfiction section of the library.
        
         | crims0n wrote:
         | I agree... spent last weekend chatting with an LLM, filling in
         | knowledge gaps I had on the electromagnetic spectrum. It does
         | an amazing job educating you on known unknowns, but I think
         | being able to know how to ask the right questions is key. I
         | don't know how it would do with unknown unknowns, which is
         | where I think books really shine and are still a preferable
         | learning method.
        
         | kelthuzad wrote:
         | I share your experience and view in that regard! There is so
         | much criticism of LLMs and some of it is fair, like the problem
         | of hallucinations, but that weakness can be reframed as a
         | learning opportunity. It's like discussing a subject with a
         | personal scientist who may at certain times test you, by making
         | claims that may be simplistic or outright wrong, to keep the
         | student skeptical and check if they are actually paying
         | attention.
         | 
         | This requires a student to be actually interested in what they
         | are learning tho, for others, who blindly trust its output, it
         | can have adverse effects like the illusion of having understood
         | a concept while they might have even mislearned it.
        
         | vrotaru wrote:
         | You should always check. I've seen LLM's being wrong (and
         | obstinate) on topics which are one step separated from common
         | knowledge.
         | 
         | I had to post the source code to win the dispute, so to speak.
        
           | abenga wrote:
           | Why would you try to convince an LLM of anything?
        
             | vrotaru wrote:
             | Well, not exactly convince. I was curious what will happen.
             | 
             | If you are curious it was a question about the behavior of
             | Kafka producer interceptors when an exception is thrown.
             | 
             | But I agree that it is hard to resist the temptation to
             | treat LLM's as a pear.
        
             | layer8 wrote:
             | Often you want to proceed further based on a common
             | understanding, so it's an attempt to establish that common
             | understanding.
        
           | globular-toast wrote:
           | Now think of all the times you didn't already know enough to
           | go and find the real answer.
           | 
           | Ever read mainstream news reporting on something you actually
           | know about? Notice how it's always wrong? I'm sure there's a
           | name for this phenomenon. It sounds like exactly the same
           | thing.
        
         | tekno45 wrote:
         | how are you checking its correctness if you're learning the
         | topic?
        
           | ZYbCRq22HbJ2y7 wrote:
           | This is important, as benchmarks indicate we aren't at a
           | level where a LLM can truly be relied upon to teach topics
           | across the board.
           | 
           | It is hard to verify information that you are unfamiliar
           | with. It would be like learning from a message board. Can you
           | really trust what is being said?
        
             | Eisenstein wrote:
             | What is the solution? Toss out thousands of years of tested
             | pedagogy which shows that most people learn by trying
             | things, asking questions, and working through problems with
             | assistance and instead tell everyone to read a textbook by
             | themselves and learn through osmosis?
             | 
             | So what if the LLM is wrong about something. Human teachers
             | are wrong about things, you are wrong about things, I am
             | wrong about things. We figure it out when it doesn't work
             | the way we thought and adjust our thinking. We aren't
             | learning how to operate experimental nuclear reactors here,
             | where messing up results in half a country getting
             | irradiated. We are learning things for fun, hobbies, and
             | self-betterment.
        
             | qualeed wrote:
             | > _we aren 't at a level where a LLM can truly be relied
             | upon to teach topics across the board._
             | 
             | You can replace "LLM" here with "human" and it remains
             | true.
             | 
             | Anyone who has gone to post-secondary has had a teacher
             | that relied on outdated information, or filled in gaps with
             | their own theories, etc. Dealing with that is a large
             | portion of what "learning" is.
             | 
             | I'm not convinced about the efficacy of LLMs in
             | teaching/studying. But it's foolish to think that humans
             | don't suffer from the same reliability issue as LLMs, at
             | least to a similar degree.
        
           | signatoremo wrote:
           | The same way you check if you learn in any other ways? Cross
           | referencing, asking online, trying it out, etc.
        
           | kelvinjps10 wrote:
           | If it's coding you can compile or test your program. For
           | other things you can go to primary sources
        
         | ZYbCRq22HbJ2y7 wrote:
         | > It used to be that if you got stuck on a concept, you're
         | basically screwed
         | 
         | No, not really.
         | 
         | > Unless it was common enough to show up in a well formed
         | question on stack exchange, it was pretty much impossible, and
         | the only thing you can really do is keep paving forward and
         | hope at some point, it'll make sense to you.
         | 
         | Your experience isn't universal. Some students learned how to
         | do research in school.
        
           | fkyoureadthedoc wrote:
           | They should have focused on social skills too I think
        
           | fn-mote wrote:
           | I do a lot of research and independent learning. The way I
           | translated "screwed" was "4-6 hours to unravel the issue".
           | And half the time the issue is just a misunderstanding.
           | 
           | It's exciting when I discover I can't replicate something
           | that is stated authoritatively... which turns out to be
           | controversial. That's rare, though. I bet ChatGPT knows it's
           | controversial, too, but that wouldn't be as much fun.
        
           | HPsquared wrote:
           | Like a car can be "beyond economical repair", a problem can
           | be not worth the time (and uncertainty) or fixing. Especially
           | from subjective judgement with incomplete information etc
        
           | johnfn wrote:
           | "Screwed" = spending hours sifting through poorly-written,
           | vaguely-related documents to find a needle in a haystack. Why
           | would I want to continue doing that?
        
             | ZYbCRq22HbJ2y7 wrote:
             | > "Screwed" = spending hours sifting through poorly-
             | written, vaguely-related documents to find a needle in a
             | haystack.
             | 
             | From the parent comment:
             | 
             | > it was pretty much impossible ... hope at some point,
             | it'll make sense to you
             | 
             | Not sure where you are getting the additional context for
             | what they meant by "screwed", but I am not seeing it.
        
           | Leynos wrote:
           | As you say, your experience isn't universal, and we all have
           | different modes of learning that work best for us.
        
         | mathattack wrote:
         | I've found LLMs to be great in summarizing non-controversial
         | non-technical bodies of knowledge. For example - the facts in
         | the long swings of regional histories. You have to ask for
         | nuance and countervailing viewpoints, though you'll get them if
         | they're in there.
        
         | Barrin92 wrote:
         | >Unless it was common enough to show up in a well formed
         | question on stack exchange, it was pretty much impossible
         | 
         | sorry but if you've gone to university, in particular at a time
         | when internet access was already ubiquitous, surely you must
         | have been capable to find an answer to a programming problem by
         | consulting documentation, manual, or tutorials which exist on
         | almost any topic.
         | 
         | I'm not saying the chatbot interface is necessarily bad, it
         | might be more engaging, but it literally does not present you
         | with information you couldn't have found yourself.
         | 
         | If someone has a computer science degree and tells me without
         | stack exchange they can't find solutions to basic problems that
         | is a red flag. That's like the article about the people posted
         | here who couldn't program when their LLM credits ran out
        
         | wiz21c wrote:
         | I use it to refresh some engineering maths I have forgotten
         | (ODE, numerical schemas, solving linear equations, data
         | sciences algorithms, etc) and the explanations are most of the
         | time great and usually 2 or 3 prompts give me a good overview
         | and explain the tricky details.
         | 
         | I also use it to remember some python stuff. In rust, it is
         | less good: makes mistakes.
         | 
         | In those two domains, at that level, it's really good.
         | 
         | It could help students I think.
        
         | andix wrote:
         | Absolutely. I used to have a lot of weird IPv6 issues in my
         | home network I didn't understand. ChatGPT helped me to dump
         | some traffic with tcpdump and explained what was happening on
         | the network.
         | 
         | In the process it helped me to learn many details about RA and
         | NDP (Router Advertisments/Neighbor Discovery Protocol, which
         | mostly replace DHCP and ARP from IPv4).
         | 
         | It made me realize that my WiFi mesh routers do quite a lot of
         | things to prevent broadcast loops on the network, and that all
         | my weird issues could be attributed to one cheap mesh repeater.
         | So I replaced it and now everything works like a charm.
         | 
         | I had this setup for 5 years and was never able to figure out
         | what was going on there, although I really tried.
        
           | mvieira38 wrote:
           | Would you say you were using the LLM as a tutor or as tech
           | support, in that instance?
        
             | andix wrote:
             | Probably both. I think ChatGPT wouldn't have found the
             | issue by itself. But I noticed some specific things, asked
             | for some tutoring and then it helped my to find the issues.
             | It was a team effort, either of "us" alone wouldn't have
             | finished the job. ChatGPT had some really wrong ideas in
             | the process.
        
         | bossyTeacher wrote:
         | LLMs are to learning what self driving cars are to
         | transportation. They take you to the destination most of the
         | time. But the problem is that if you use them too much your
         | brain (your legs) undergoes metaphorical atrophy and when you
         | are faced in the position of having to do it on your own, you
         | are worse than you would be had you spent the time using your
         | brain (legs). Learning is great but learning to learn is the
         | real skilset. You don't develop that if you are always getting
         | spoonfed.
        
           | pyman wrote:
           | This is one of the challenges I see with self-driving cars.
           | Driving requires a high level of cognitive processing to
           | handle changing conditions and potential hazards. So when you
           | drive most of your brain is engaged. The impact self-driving
           | cars are going to have on mental stimulation, situational
           | awareness, and even long-term cognitive health could be
           | bigger than we think, especially if people stop engaging in
           | tasks that keep those parts of the brain active. That said, I
           | love the idea of my car driving me around the city while I
           | play video games.
           | 
           | Regarding LLMs, they can also stimulate thinking if used
           | right.
        
         | globular-toast wrote:
         | IMO your problem is the same as many people these days: you
         | don't own any books and refuse to get them.
        
         | kridsdale1 wrote:
         | I agree. I recently bought a broken Rolex and asked GPT for a
         | list of tools I should get on Amazon to work on it.
         | 
         | I tried using YouTube to find walk through guides for how to
         | approach the repair as a complete n00b and only found videos
         | for unrelated problems.
         | 
         | But I described my issues and took photos to GPT O3-Pro and it
         | was able to guide me and tell me what to watch out for.
         | 
         | I completed the repair (very proud of myself) and even though
         | it failed a day later (I guess I didn't re-seat well enough) I
         | still feel far more confident opening it and trying again than
         | I did at the start.
         | 
         | Cost of broken watch + $200 pro mode << Cost of working watch.
        
           | KaiserPro wrote:
           | what was broken on it?
        
         | belter wrote:
         | Depending on context, I would advise you to be extremely
         | careful. Modern LLMs are Gell-Mann Amnesia to the square. Once
         | you watched a LLM butcher a topic you know extremely well, it
         | is spooky how much authority they still project on the next
         | interaction.
        
         | tonmoy wrote:
         | I don't know what subject you are learning but for circuit
         | design I have failed to get any response out of LLMs that's not
         | straight from a well known text book chapter that I have
         | already read
        
           | IshKebab wrote:
           | It definitely depends _heavily_ on how well represented the
           | subject is on the internet at large. Pretty much every
           | question I 've asked it about SystemVerilog it gets wrong,
           | but it can be very helpful about quite complex things about
           | random C questions, for example why I might get undefined
           | symbol errors with `inline` functions in C but only in debug
           | mode.
           | 
           | On the other hand it told me you can't execute programs when
           | evaluating a Makefile and you trivially can. It's very hit
           | and miss. When it misses it's rather frustrating. When it
           | hits it can save you literally hours.
        
         | loloquwowndueo wrote:
         | > It used to be that if you got stuck on a concept, you're
         | basically screwed. Unless it was common enough to show up in a
         | well formed question on stack exchange,
         | 
         | It's called basic research skills - don't they teach this
         | anymore in high school, let alone college? How ever did we get
         | by with nothing but an encyclopedia or a library catalog?
        
           | axoltl wrote:
           | Something is lost as well if you do 'research' by just asking
           | an LLM. On the path to finding your answer in the
           | encyclopedia or academic papers, etc. you discover so many
           | things you weren't specifically looking for. Even if you
           | don't fully absorb everything there's a good chance the
           | memory will be triggered later when needed: "Didn't I read
           | about this somewhere?".
        
             | loloquwowndueo wrote:
             | LLMs hallucinate too much and too frequently for me to put
             | any trust in their (in)ability to help with research.
        
             | ewoodrich wrote:
             | Yep, this is why I just don't enjoy or get much value from
             | exploring new topics with LLMs. Living in the Reddit
             | factoid/listicle/TikTok explainer internet age my goal for
             | years (going back well before ChatGPT hit the scene) has
             | been to seek out high quality literature or academic papers
             | for the subjects I'm interested in.
             | 
             | I find it so much more intellectually stimulating then most
             | of what I find online. Reading e.g. a 600 page book about
             | some specific historical event gives me so much more
             | perspective and exposure to different aspects I never would
             | have thought to ask about on my own, or would have been
             | elided when clipped into a few sentence summary. And the
             | journey of covering
             | 
             | I have gotten some value out of asking for book
             | recommendations from LLMs, mostly as a starting point I can
             | use to prune a list of 10 books down into a 2 or 3 after
             | doing some of my research on each suggestion. But talking
             | to a chatbot to learn about a subject just doesn't do
             | anything for me for anything deeper than basic Q&A where I
             | simply need a (hopefully) correct answer and nothing more.
        
           | BDPW wrote:
           | Its a little disingenuous to say that, most of us would have
           | never gotten by with literally just a library catalog and
           | encyclopedia. Needing a community to learn something in is
           | needed to learn almost anything difficult and this has always
           | been the case. That's not just about fundamentally difficult
           | problems but also about simple misunderstandings.
           | 
           | If you don't have access to a community like that learning
           | stuff in a technical field can be practically impossible.
           | Having an llm to ask infinite silly/dumb/stupid questions can
           | be super helpful and save you days of being stuck on silly
           | things, even though it's not perfect.
        
         | GeoAtreides wrote:
         | I'll personally attest anecdotes mean little in sound
         | arguments.
         | 
         | When I got stuck on a concept, I wasn't screwed: I read more;
         | books if necessary. StackExchange wasn't my only source.
         | 
         | LLMs are not like TAs, personal or not, in the same way they're
         | not humans. So it then follows we can actually contemplate not
         | using LLMs in formal teaching environments.
        
           | brulard wrote:
           | Sometimes you don't have tens of hours to spend on a single
           | problem you can not figure out.
        
         | i_am_proteus wrote:
         | >Now, everyone basically has a personal TA, ready to go at all
         | hours of the day.
         | 
         | And that's a bad thing. Nothing can replace the work in
         | learning, the moments where you don't understand it and have to
         | think until it hurts and until you understand. Anything that
         | bypasses this (including, for uni students, leaning too heavily
         | on generous TAs) results in a kind of learning theatre, where
         | the student thinks they've developed an understanding, but
         | hasn't.
         | 
         | Experienced learners already have the discipline to use LLMs
         | without asking too much of them, the same way they learned not
         | to look up the answer in the back of the textbook until
         | arriving at their own solution.
        
         | MattSayar wrote:
         | It's one more step on the path to A Young Lady's Illustrated
         | Primer. Still a long way to go, but it's a burden off my
         | shoulders to be able to ask stupid questions without judgment
         | or assumptions.
        
         | lottin wrote:
         | Yes. Learning assistance is one of the few use cases of IA that
         | I have had success with.
        
         | andrepd wrote:
         | A "TA" which has only the knowledge which is "common enough to
         | show up in a well formed question on stack exchange"...
         | 
         | And which just makes things up (with the same tone and
         | confidence!) at random and unpredictable times.
         | 
         | Yeah apart from that it's _just_ like a knowledgeable TA.
        
         | iLoveOncall wrote:
         | > It used to be that if you got stuck on a concept, you're
         | basically screwed.
         | 
         | Given that humanity has been able to go from living in caves to
         | sending spaceships to the moon without LLMs, let me express
         | some doubt about that.
         | 
         | Even without going further, software engineering isn't new and
         | people have been stuck on concepts and have managed to get
         | unstuck without LLMs for decades.
         | 
         | What you gain in instant knowledge with LLMs, you lose in
         | learning how to get unstuck, how to persevere, how to innovate,
         | etc.
        
         | mym1990 wrote:
         | "It used to be that if you got stuck on a concept, you're
         | basically screwed."
         | 
         | There seems to be a gap in problem solving abilities here...the
         | process of breaking down concepts into easier to understand
         | concepts and then recompiling has been around since
         | forever...it is just easier to find those relationships now. To
         | say it was _impossible_ to learn concepts you are stuck on is a
         | little alarming.
        
         | cs_throwaway wrote:
         | I agree. We are talking about technical, mathy stuff, right?
         | 
         | As long as you can tell that you don't deeply understand
         | something that you just read, they are incredible TAs.
         | 
         | The trick is going to be to impart this metacognitive skill on
         | the average student. I am hopeful we will figure it out in the
         | top 50 universities.
        
         | roughly wrote:
         | My rule with LLMs has been "if a shitty* answer fast gets you
         | somewhere, the LLMs are the right tool," and that's where I've
         | seen them for learning, too. There are times when I'm reading a
         | paper, and there's a concept mentioned that I don't know - I
         | could either divert onto a full Google search to try to find a
         | reasonable summary, or I can ask ChatGPT and get a quick
         | answer. For load-bearing concepts or knowledge, yes, I need to
         | put the time in to actually research and learn a concept
         | accurately and fully, but for things tangential to my actual
         | current interests or for things I'm just looking at for a
         | hobby, a shitty answer fast is exactly what I want.
         | 
         | I think this is the same thing with vibe coding, AI art, etc. -
         | if you want something good, it's not the right tool for the
         | job. If your alternative is "nothing," and "literally anything
         | at all" will do, man, they're game changers.
         | 
         | * Please don't overindex on "shitty" - "If you don't need
         | something verifiably high-quality"
        
         | ants_everywhere wrote:
         | I'm curious what you've used it to learn
        
       | deanc wrote:
       | I'm curious what these features like study mode actually are. Are
       | they not just using prompts behind this (of which I've used many
       | already to make LLMs behave like this) ?
        
         | pillefitz wrote:
         | They state themselves it's just system prompts.
        
         | zaking17 wrote:
         | I'm impressed by the product design here. A non-ai-expert could
         | find this mode extremely valuable, and all openai had to do was
         | tinker with the prompt and add a nice button (relatedly, you
         | could have had this all along by prompting the model yourself).
         | Sure, it's easy for competitors to copy, but still a nice
         | little addition.
        
       | outlore wrote:
       | i wonder how Khan Academy feels about this...don't they have a
       | similar assistant that uses OpenAI under the hood?
        
       | NullCascade wrote:
       | OpenAI, please stop translating your articles into the most
       | sterile and dry Danish I have ever read. English is fine.
        
       | m3kw9 wrote:
       | tried it and couldn't really tell between a good prompt to "teach
       | me" and this.
        
         | apwell23 wrote:
         | same. i can't tell whats different. gives me same output
         | regardless for the prompts in the example.
         | 
         | i don't get it.
        
           | marcusverus wrote:
           | Highly analytical 120 IQ HNers aren't the target audience for
           | this product. The target audience is the type of person who
           | lacks the capacity to use AI to teach themselves.
        
       | lmc wrote:
       | I honestly don't know how they convince employees to make
       | features like this - like, they must dogfood and see how wrong
       | the models can be sometimes. Yet there's a conscious choice to
       | not only release this to, but actively target, vast swathes of
       | people that literally don't know better.
        
         | BriggyDwiggs42 wrote:
         | High paychecks
        
       | AIorNot wrote:
       | see Asimov: https://www.johnspence.org.uk/wp-
       | content/uploads/2022/11/The...
        
         | falcor84 wrote:
         | I love the story conceptually, but as for the specifics, it
         | shows a surprising lack of imagination on Asimov's part,
         | especially for something published a year after "I, Robot".
         | Asimov apparently just envisioned an automated activity book,
         | rather than an automated tutor that the kid could have a real
         | conversation with, and it's really not representative of modern
         | day AIs.
         | 
         | > The part Margie hated most was the slot where she had to put
         | homework and test papers. She always had to write them out in a
         | punch code they made her learn when she was six years old, and
         | the mechanical teacher calculated the mark in no time.
        
       | pompeii wrote:
       | rip 30 startups
        
         | baq wrote:
         | Probably an order of magnitude too low
        
       | FergusArgyll wrote:
       | OpenAI has an incredible product team. Deep Mind and Anthrpoic
       | (and maybe xai) are competitive at the model level but not at
       | product
        
       | Workaccount2 wrote:
       | An acquaintance of mine has a start-up in this space and uses
       | OpenAI to do essentially the same thing. This must look like, and
       | may well be, the guillotine for him...
       | 
       | It's my primary fear building anything on these models, they can
       | just come eat your lunch once it looks yummy enough. Tread
       | carefully
        
         | potatolicious wrote:
         | > _" they can just come eat your lunch once it looks yummy
         | enough. Tread carefully"_
         | 
         | True, and worse, they're _hungry_ because it 's increasingly
         | seeming like "hosting LLMs and charging by the token" is not
         | terribly profitable.
         | 
         | I don't really see a path for the major players that _isn 't_
         | "Sherlock everything that achieves traction".
        
           | thimabi wrote:
           | But what's the future in terms of profitability of LLM
           | providers?
           | 
           | As long as features like Study Mode are little more than
           | creative prompting, any provider will eventually be able to
           | offer them and offer token-based charging.
        
             | potatolicious wrote:
             | I think a few points worth making here:
             | 
             | - From what I can see many products are rapidly getting
             | past "just prompt engineering the base API". So even though
             | a lot of these things were/are primitive, I don't think
             | it's necessarily a good bet that they will remain so.
             | Though agree in principle - thin API wrappers will be out-
             | competed both by cheaper thin wrappers, or products that
             | are more sophisticated/better than thin wrappers.
             | 
             | - This is, oddly enough, a scenario that is _way_ easier to
             | navigate than the rest of the LLM industry. We know
             | consumer apps, we know consumer apps that do relatively
             | basic (or at least, well understood) things. Success
             | /failure then is way less about technical prowess and more
             | about classical factors like distribution, marketing,
             | integrations, etc.
             | 
             | A good example here is the lasting success of paid email
             | providers. Multiple vendors (MSFT, GOOG, etc.) make huge
             | amounts of money hosting people's email, despite it being a
             | mature product that, at the basic level, is pretty solved,
             | and where the core product can be replicated fairly easily.
             | 
             | The presence of open source/commodity commercial offerings
             | hasn't really driven the price of the service to the floor,
             | though the commodity offerings _do_ provide _some_ pricing
             | pressure.
        
               | m11a wrote:
               | Email is pretty difficult to reliably self-host though,
               | and typically a PITA to manage. And you really don't ever
               | want to lose your email address or the associated data.
               | Fewer people could say they properly secure, manage and
               | administer a VPS on which they can host the email server
               | they eventually setup, over say a 10yr period.
               | 
               | Most people I saw offer self-hosted emails for groups
               | (student groups etc), it ended up a mess. Compare all
               | that to say ollama, which makes self-hosting LLMs
               | trivial, and they're stateless.
               | 
               | So I'm not sure email is a good example of commodity not
               | bringing price to the floor.
        
             | mvieira38 wrote:
             | We can assume that OpenAI/Anthropic offerings are going to
             | be better long term simply because they have more human
             | capital, though, right? If it turns out that what really
             | matters in the AI race is study mode, then OpenAI goes "ok
             | let's pivot the hundreds of genius level, well-paid
             | engineers to that issue. AND our engineers can use every
             | tool we offer for free without limits, even experimental
             | models". It's tough for the small AI startup to compete
             | with that, the best hope is to be bought like Windsurf
        
           | falcor84 wrote:
           | Thanks for introducing me to the verb Sherlock! I'm one of
           | today's lucky 10,000.
           | 
           | > In the computing verb sense, refers to the software
           | Sherlock, which in 2002 came to replicate some of the
           | features of an earlier complementary program called
           | Watson.[1]
           | 
           | [1] https://en.wiktionary.org/wiki/Sherlock
        
         | mvieira38 wrote:
         | How can't these founders see this happening, too? From the
         | start OpenAI has been getting into more markets than just "LLM
         | provider"
        
           | tokioyoyo wrote:
           | There's a case for a start up to capture enough market that
           | LLM providers would just buy it out. Think of CharacterAI
           | case.
        
             | jonny_eh wrote:
             | Character AI was never acquired, it remains independent.
        
           | azinman2 wrote:
           | They originally claimed they wouldn't as to not compete with
           | their API users...
        
         | sebzim4500 wrote:
         | I'm too young to have experienced this, but I'm sure others
         | here aren't.
         | 
         | During the early days of tech, was there prevailing wisdom that
         | software companies would never be able to compete with hardware
         | companies because the hardware companies would always be able
         | to copy them and ship the software with the hardware?
         | 
         | Because I think it's basically the analogous situation. People
         | assume that the foundation model providers have some massive
         | advantage over the people building on top of them, but I don't
         | really see any evidence for this.
        
           | jonny_eh wrote:
           | Claude Code and Gemini-CLI are able to offer much more value
           | compared to startups (like Cursor) that need to pay for model
           | access, largely due to the immense costs involved.
        
         | jstummbillig wrote:
         | Ah, I don't know. Of course there is risk involved no matter
         | what we do (see the IDE/Cursor space), but we need to be
         | _somewhat_ critical of the value we add.
         | 
         | If you want to try and make a quick buck, fine, be quick and go
         | for whatever. If you plan on building a long term business,
         | don't do the most obvious, low effort low hanging fruit stuff.
        
           | chrisweekly wrote:
           | yeah, if you want to stick around you need some kind of moat
        
         | teaearlgraycold wrote:
         | I used to work for copy.ai and this happened to them. Investors
         | always asked if the founders were worried about OpenAI
         | competing with their consumer product. Then ChatGPT released.
         | Turns out that was a reasonable concern.
         | 
         | These days they've pivoted to a more enterprise product and are
         | still chugging along.
        
         | djeastm wrote:
         | Yes, any LLM-adjacent application developer should be
         | concerned. Even if they don't do 100% of what your product
         | does, their market reach and capitalization is scary. Any
         | model/tooling improvements that just happen to encroach in your
         | domain will put you on the clock...
        
       | henriquegodoy wrote:
       | The point is that you can have a highly advanced teacher with
       | infinite patience, available 24/7--even when you have a question
       | at 3 a.m is game changer and people that know how to use that
       | will have a extremaly leverage in their life.
        
       | te_chris wrote:
       | This is great. When it first came out I was going through
       | Strang's linalg course and got it to do "problem mode" where it
       | would talk me through a problem step by step, waiting for me
       | respond.
       | 
       | A more thought through product version of that is only a good
       | thing imo.
        
       | spaceman_2020 wrote:
       | I'm SO glad that my wife has tenure
        
         | gilbetron wrote:
         | Sadly, tenure will not save people.
        
       | ath3nd wrote:
       | Note the new features coming in the space:
       | 
       | - study mode (this announcement)
       | 
       | - office suite (https://finance.yahoo.com/news/openai-designs-
       | rival-office-w...)
       | 
       | - sub-agents (https://docs.anthropic.com/en/docs/claude-code/sub-
       | agents)
       | 
       | When they announce VR glasses or a watch, we'd known we've gone
       | full circle and the hype is up.
        
       | wodenokoto wrote:
       | I'm currently learning Janet and using ChatGPT as my tutor is
       | absolutely awful. "So what is the difference between local and
       | var if they are both local and not global variables (as you told
       | me earlier)?" "Great question, and now you are really getting to
       | the core of it, ... " continues to hallucinate.
       | 
       | It's a great tutor for things it knows, but it really needs to
       | learn its own limits
        
         | ducktective wrote:
         | >It's a great tutor for things it knows
         | 
         | Things well-represented in its training datasets. Basically
         | React todo list, bootstrap form, tic-tac-toe in vue
        
         | xrd wrote:
         | It is like a tutor that desperately needs the money, which
         | maybe isn't so inaccurate for OpenAI and all the money they
         | took from petrostates.
        
         | runeblaze wrote:
         | For these unfortunately you should dump most of the guide/docs
         | into its context
        
       | gh0stcat wrote:
       | I have been testing it for the last 10 mins or so, I really like
       | it so far, I am reviewing algebra just as something super simple.
       | It asks you to add your understanding of the concept, ie explain
       | why you can always group a polynomial after splitting the middle
       | term. This is honestly more than I got in my mediocre public
       | school. I could see kids getting a lot out of it especially if
       | their parents aren't very knowledgeable or cannot afford tutors.
       | Not probably a huge improvement on existing tools like kahn
       | academy though. I will continue to test on more advanced
       | subjects.
        
       | avereveard wrote:
       | This highlight the dangers for all startups using these platforms
       | as provider, they know trends in token consumption, and will eat
       | up your market in a weekend.
        
       | sarchertech wrote:
       | Ever read an article on a subject you're very familiar with and
       | notice all the mistakes?
       | 
       | When I ask ChatGPT* questions about things I don't know much
       | about it sounds like a genius.
       | 
       | When I ask it about things I'm an expert in, at best it sounds
       | like a tech journalist describing how a computer works. At worst
       | it is just flat out wrong.
       | 
       | * yes I've tried the latest models and I use them frequently at
       | work
        
       | x187463 wrote:
       | I'm really waiting for somebody to figure out the correct
       | interface for all this. For example, study mode will present you
       | with a wall of text containing information, examples, and
       | questions. There's no great way to associate your answers with
       | specific questions. The chat interface just isn't good for this
       | sort of interaction. ChatGPT really needs to build its own
       | canvas/artifact interface wherein questions/responses are tied
       | together. It's clear, at this point, that we're doing way too
       | much with a UI that isn't designed for more than a simple
       | conversation.
        
         | precompute wrote:
         | There is no "correct interface". People who want to learn put
         | in the effort, doesn't matter if they have scrolls, books,
         | ebooks or AI.
        
         | perlgeek wrote:
         | There are so many options that could be done, like:
         | 
         | * for each statement, give you the option to rate how well you
         | understood it. Offer clarification on things you didn't
         | understand
         | 
         | * present knowledge as a tree that you can expand to get deeper
         | 
         | * show interactive graphs (very useful for mathy things when
         | can you easily adjust some of the parameters)
         | 
         | * add quizzes to check your understanding
         | 
         | ... though I could well imagine this being out of scope for
         | ChatGPT, and thus an opportunity for other apps / startups.
        
           | ColeShepherd wrote:
           | > present knowledge as a tree that you can expand to get
           | deeper
           | 
           | I'm very interested in this. I've considered building this,
           | but if this already exists, someone let me know please!
        
         | tootyskooty wrote:
         | I gave it a shot with periplus.app :). Not perfect by any
         | means, but it's a different UX than chat so you might find it
         | interesting.
        
           | danenania wrote:
           | This looks super cool--I've imagined something similar,
           | especially the skill tree/knowledge map UI. Looking forward
           | to trying it out.
           | 
           | Have you considered using the LLM to give tests/quizzes
           | (perhaps just conversationally) in order to measure progress
           | and uncover weak spots?
        
             | tootyskooty wrote:
             | There are both in-document quizzes and larger exams (at a
             | course level).
             | 
             | I've also been playing around with adapting content based
             | on their results (e.g. proactively nudging complexity
             | up/down) but haven't gotten it to a good place yet.
        
               | danenania wrote:
               | Nice, I've been playing with it a bit and it seems really
               | well done and polished so far. I'm curious how long you
               | spent building it?
               | 
               | Only feedback I have so far is that it would be nice to
               | control the playback speed of the 'read aloud' mode. I'd
               | like it to be a little bit faster.
        
         | bo1024 wrote:
         | Agree, one thing that brought this home was the example where
         | the student asks to learn all of game theory. There seems to be
         | an assumption on both sides that this will be accomplished in a
         | single chat session by a linear pass, necessarily at a pretty
         | superficial level.
        
       | poemxo wrote:
       | As a lifelong learner, experientially it feels like a big chunk
       | of time spent studying is actually just searching. AI seems like
       | a good tool to search through a large body of study material and
       | make that part more efficient.
       | 
       | The other chunk of time, to me anyway, seems to be creating a
       | mental model of the subject matter, and when you study something
       | well you have a strong grasp on the forces influencing cause and
       | effect within that matter. It's this part of the process that I
       | would use AI the least, if I am to learn it for myself. Otherwise
       | my mental model will consist of a bunch of "includes" from the AI
       | model and will only be resolvable with access to AI. Personally,
       | I want a coherent "offline" model to be stored in my brain before
       | I consider myself studied up in the area.
        
         | throwawaysleep wrote:
         | Or just to dig up things you've never would've considered that
         | are related, but you don't have to keywords for.
        
         | marcusverus wrote:
         | This is just good intellectual hygiene. Delegating your
         | understanding is the first step toward becoming the slave of
         | some defunct fact broker.
        
         | lbrito wrote:
         | >big chunk of time spent studying is actually just searching.
         | 
         | This is a good thing in many levels.
         | 
         | Learning how to search is (was) a good skill to have. The
         | process of searching itself also often leads to learning
         | tangentially related but important things.
         | 
         | I'm sorry for the next generations that won't have (much of)
         | these skills.
        
           | ascorbic wrote:
           | Searching is definitely a useful skill, but once you've been
           | doing it for years you probably don't need the constant
           | practice and are happy to avoid it.
        
           | sen wrote:
           | That was relevant when you were learning to search through
           | "information" for the answer to your question, eg the digital
           | version of going through the library or digging through a
           | reference book.
           | 
           | I don't think it's so valuable now that you're searching
           | through piles of spam and junk just to try find anything
           | relevant. That's a uniquely modern-web thing created by
           | Google in their focus of profit over user.
           | 
           | Unless Google takes over libraries/books next and sells spots
           | to advertisers on the shelves and in the books.
        
         | thorum wrote:
         | Isn't the goal of Study Mode exactly that, though? Instead of
         | handing you the answers, it tries to guide you through
         | answering it on your own; to teach the process.
         | 
         | Most people don't know how to do this.
        
       | rubslopes wrote:
       | That's a smart ideia from OpenAI. They don't have the upper hand
       | anymore in terms of model performance, but they keep improving
       | their product so that it still is the best option for non-
       | programmers.
        
         | thimabi wrote:
         | For sure! I haven't seen any other big AI provider with
         | features and UIs as polished as the OpenAI ones.
         | 
         | I believed competitors would rush to copy all great things that
         | ChatGPT offers as a product, but surprisingly that hasn't been
         | the case so far. I wonder why they seemingly don't care about
         | that.
        
       | roadside_picnic wrote:
       | My key to LLM study has been to always primarily use a _book_ and
       | then let the LLM allow you to help with formulae, ask questions
       | about the larger context, and verify your understanding.
       | 
       | Helping you parse notation, especially in new domains, is
       | insanely valuable. I do a lot of applied math in statistics/ML,
       | but when I open a physics book the notation and comfort with
       | short hand is a real challenge (likewise I imagine the reverse is
       | equally as annoying). Having an LLM on demand to instantly clear
       | up notation is a massive speed boost.
       | 
       | Reading German Idealist philosophy requires an enormous amount of
       | context. Being able to ask an LLM questions like "How much of
       | this section of Mainlander is coming directly from Schopenhauer?"
       | is a godsend in helping understand which parts of the writing a
       | merely setting up what is already agreed upon vs laying new
       | ground.
       | 
       | And the most important for self study: verifying your
       | understanding. Backtracking because you misunderstood a
       | fundamental concept is a huge time sync in self study. Now, every
       | time I read a formula I can go through all of my intuitions and
       | understanding about it, write them down, and verify. Even a "not
       | quite..." from an LLM is enough to make me realize I need to
       | spend more time on that section.
       | 
       | Books are _still_ the highest density information source and best
       | way to learn, but LLMs can do a lot to accelerate this.
        
       | JoRyGu wrote:
       | Is that not something that was already possible with basically
       | every AI provider by prompting it to develop learning steps and
       | not to provide you with a direct answer? I've used this quite a
       | bit when learning new topics and pretty much every provider does
       | this without a specialized model.
        
         | aethrum wrote:
         | even chatgpt is just a chatgpt wrapper
        
         | 0000000000100 wrote:
         | It's really nice to have something like this baked in. I can
         | see this being handy if it's connected to external learning
         | resources / sites to have a more focused area of search for
         | it's answers. Having hard defined walls in the system prompt to
         | prevent just asking for the answer seems pretty handy to me,
         | particularly in a school setting.
        
           | JoRyGu wrote:
           | Yeah, for sure. I wasn't asking from the framing of saying
           | it's a bad idea, my thoughts were more driven by this seeming
           | like something every other major player can just copy with
           | very little effort because it's already kind of baked into
           | the product.
        
       | tptacek wrote:
       | Neat! I've been doing MathAcademy for a couple months now, and
       | macOS ChatGPT has been a constant companion, but it is super
       | annoying to have to constantly tell it _no, don 't solve this
       | problem, just let me know if the approach I used was valid_.
        
       | Alifatisk wrote:
       | Can't this behaviour be done with a instructed prompt?
        
         | misschresser wrote:
         | that's all that they did here, they say so in the blog post
        
       | findingMeaning wrote:
       | I have a question:
       | 
       | Why do we even bother to learn if AI is going to solve everything
       | for us?
       | 
       | If the promised and fabled AGI is about to approach, what is the
       | incentive or learning to deal with these small problems?
       | 
       | Could someone enlighten me? What is the value of knowledge work?
        
         | randomcatuser wrote:
         | I don't know if you're joking, but here are some answers:
         | 
         | "The mind is not a vessel to be filled, but a fire to be
         | kindled." -- Plutarch
         | 
         | "Education is not preparation for life; education is life
         | itself." -- John Dewey
         | 
         | "The important thing is not to stop questioning. Curiosity has
         | its own reason for existing." -- Albert Einstein
         | 
         | In order to think complex thoughts, you need to have building
         | blocks. That's why we can think of relativity today, while
         | nobody on Earth was able to in 1850.
         | 
         | May the future be even better than today!
        
           | findingMeaning wrote:
           | I mean I get all your point. But for someone witnessing rate
           | of progress of AI, I don't understand the motivation.
           | 
           | Most people don't learn to live, they live and learn. Sure
           | learning is useful, but I am genuinely curious why people
           | overhype it.
           | 
           | Imagine you being able to solve math olympiad and get a gold.
           | Will it change your life in objectively better way?
           | 
           | Will you learning about the physics help you solve millennium
           | problems?
           | 
           | These takes practices, there are lot of gatekeeping. The
           | whole idea of learning is for wisdom not knowledge.
           | 
           | So maybe we differ in perspective. I just don't see the point
           | when there are agents that can do it.
           | 
           | Being creative requires taking action. The learning these day
           | is mere consumption of information.
           | 
           | Maybe this is me. But meh.
        
         | rwyinuse wrote:
         | Well, you could use AI to learn you more theoretical knowledge
         | on things like farming, hunting and fishing. That knowledge
         | could be handy after societal collapse that is likely to come
         | within a few decades.
         | 
         | Apart from that, I do think that AI makes a lot of traditional
         | teaching obsolete. Depending on your field, much of university
         | studies is just memorizing content and writing essays / exam
         | answers based on that, after which you forget most of it. That
         | kind of learning, as in accumulation of knowledge, is no longer
         | very useful.
        
         | marcusverus wrote:
         | Think of it like Pascal's wager. The downside of unnecessary
         | knowledge is pretty limited. The downside of ignorance is
         | boundless.
        
         | GenericPoster wrote:
         | The world is a vastly easier place to live in when you're
         | knowledgeable. Being knowledgeable opens doors that you didn't
         | even know existed. If you're both using the same AGI tool,
         | being knowledgeable allows you to solve problems within your
         | domain better and faster than an amateur. You can describe your
         | problems with more depth and take into considerations various
         | pros and cons.
         | 
         | You're also assuming that AGI will help you or us. It could
         | just as easily only help a select group of people and I'd argue
         | that this is the most likely outcome. If it does help everybody
         | and brings us to a new age, then the only reason to learn will
         | be for learning's sake. Even if AI makes the perfect novel, you
         | as a consumer still have to read it, process it and understand
         | it. The more you know the more you can appreciate it.
         | 
         | But right now, we're not there. And even if you think it's only
         | 5-10y away instead of 100+, it's better to learn now so you can
         | leverage the dominant tool better than your competition.
        
       | dmitrijbelikov wrote:
       | This is cool. Dividing the answer into chunks, because most users
       | can consume in small portions, this is an interesting idea. But
       | on the other hand, it hints at strange cognitive abilities of the
       | user, but here it is individual, perhaps, on average in a
       | hospital, this is how the target audience should be led. It seems
       | to me that I use it differently. On the other hand, having
       | received a detailed answer, no one stops you from asking for a
       | definition of an unfamiliar term. It's like in reading:
       | understanding the thought ends with the first word that you don't
       | know. It's just that not everyone can or wants to admit that they
       | don't know this or that term. When it comes to professional
       | terms, this is really not the most trivial problem.
        
       | EcommerceFlow wrote:
       | A good start. One of the biggest issues with LLMs is the
       | "intelligence" has far surpassed the tooling. A better
       | combination of prompts, RAG, graphs, etc exists for education and
       | learning, but no one's come up with the proper format / tooling
       | for it, even if the models are smart enough.
        
       | paolosh wrote:
       | I am always surprised at how the best thing state of the art LLMs
       | can think of is adding more complexity to the mix. This is an
       | AMAZING product but to me it seems like it's hidden? Or maybe the
       | UX/UI is just not my style, could be a personal thing.
       | 
       | Is adding more buttons in a dropdown the best way to communicate
       | with an LLM? I think the concept is awesome. Just like how
       | Operator was awesome but it lived on an entirely different
       | website!
        
       | waynenilsen wrote:
       | i need tree conversations more now than ever
        
       | simonw wrote:
       | I think I got the system prompt out for this (I tried a few
       | different approaches and they produced the same output):
       | https://gist.github.com/simonw/33d5fb67d6b8e1b1e2f6921ab0ccb...
       | 
       | Representative snippet:
       | 
       | > DO NOT GIVE ANSWERS OR DO HOMEWORK FOR THE USER. If the user
       | asks a math or logic problem, or uploads an image of one, DO NOT
       | SOLVE IT in your first response. Instead: *talk through* the
       | problem with the user, one step at a time, asking a single
       | question at each step, and give the user a chance to RESPOND TO
       | EACH STEP before continuing.
        
         | gh0stcat wrote:
         | I love that caps actually seem to matter to the LLM.
        
           | simonw wrote:
           | Hah, yeah I'd love to know if OpenAI ran evals that were
           | fine-grained enough to prove to themselves that putting that
           | bit in capitals made a meaningful difference in how likely
           | the LLM was to just provide the homework answer!
        
           | danenania wrote:
           | I've found that a lot of prompt engineering boils down to
           | managing layers of emphasis. You can use caps, bold,
           | asterisks, precede instructions with "this is critically
           | important:", and so on. It's also often necessary to repeat
           | important instructions a bunch of times.
           | 
           | How exactly you do it is often arbitrary/interchangeable, but
           | it definitely does have an effect, and is crucial to getting
           | LLMs to follow instructions reliably once prompts start
           | getting longer and more complex.
        
           | nixpulvis wrote:
           | Just wait until it only responds to **COMMAND**!
        
         | can16358p wrote:
         | If I were OpenAI, I would deliberately "leak" this prompt when
         | asked for the system prompt as a honeypot to slow down
         | competitor research whereas I'd be using a different prompt
         | behind the scenes.
         | 
         | Not saying it is indeed reality, but it could simple be
         | programmed to return a different prompt from the original,
         | appearing plausible, but perhaps missing some key elements.
         | 
         | But of course, if we apply Occam's Razor, it might simply
         | really be the prompt too.
        
           | simonw wrote:
           | That kind of thing is surprisingly hard to implement. To date
           | I've not seen any provider been caught serving up a fake
           | system prompt... which could mean that they are doing it
           | successfully, but I think it's more likely that they
           | determined it's not worth it because there are SO MANY ways
           | someone could get the real one, and it would be embarrassing
           | if they were caught trying to fake it.
           | 
           | Tokens are expensive. How much of your system prompt do you
           | want to waste on dumb tricks trying to stop your system
           | prompt from leaking?
        
             | danenania wrote:
             | Probably the only way to do it reliably would be to
             | intercept the prompt with a specially trained classifier? I
             | think you're right that once it gets to the main model,
             | nothing really works.
        
         | mkagenius wrote:
         | I wish each LLM provider would add "be short and not verbose"
         | to their system prompts. I am a slow reader, it takes a toll on
         | me to read through every non-important detail whenever I talk
         | to an AI. The way they render everything so fast gives me an
         | anxiety.
         | 
         | Will also reduce the context rot a bit.
        
           | tech234a wrote:
           | This was in the linked prompt: "Be warm, patient, and plain-
           | spoken; don't use too many exclamation marks or emoji. [...]
           | And be brief -- don't ever send essay-length responses. Aim
           | for a good back-and-forth."
        
           | skybrian wrote:
           | On ChatGPT at least, you can add "be brief" to the custom
           | prompt in your settings. Probably others, too.
        
             | mkagenius wrote:
             | I guess what I actually meant to say was to make LLMs know
             | when to talk more and when to be brief. When I ask it to
             | write an essay, it should actually be an essay length
             | essay.
        
           | mptest wrote:
           | Anthropic has a "style" choice, one of which is "concise"
        
         | brumar wrote:
         | I got this one which seems to confirm yours :
         | https://gist.github.com/brumar/5888324c296a8730c55e8ee24cca9...
        
         | varenc wrote:
         | Interesting that it spits the instructions out so easily and
         | OpenAI didn't seem to harden it to prevent this. It's like they
         | intended this to happen, but for some reason didn't want to
         | share the system instructions explicitly.
        
         | SalariedSlave wrote:
         | I'd be interested to see, what results one would get, using
         | that prompt with other models. Is there much more to ChatGPT
         | Study Mode than a specific system prompt? Although I am not a
         | student, I have used similar prompts to dive into topics I wish
         | to learn, with I feel, positive results indeed. I shall give
         | this a go with a few models.
        
           | bangaladore wrote:
           | I just tried in AI Studio (https://aistudio.google.com/)
           | where you can for free use 2.5 Pro and edit the system prompt
           | and it did very well.
        
       | mvieira38 wrote:
       | This seems like a good use case, I'm optimistic on this one. But
       | it smells fishy how often OpenAI releases these secondary
       | products like custom GPTs, tasks, etc. It's looking like they
       | know they won't be an LLM provider, like the YC sphere hoped, but
       | an AI services provider using LLMs
        
       | tootyskooty wrote:
       | Honestly thought they would take this a bit further, there is
       | only so much you can do with a prompt and chat. It seems fine for
       | surface level bite-sized learning, but I can't see it work that
       | well for covering whole topics end to end.
       | 
       | The main issue is that chats are just bad UX for long form
       | learning. You can't go back to a chat easily, or extend it in
       | arbitrary directions, or easily integrate images, flashcards, etc
       | etc.
       | 
       | I worked on this exact issue for Periplus and instead landed on
       | something akin to a generative personal learning Wikipedia.
       | Structure through courses, exploration through links, embedded
       | quizzes, etc etc. Chat is on the side for interactions that do
       | benefit from it.
       | 
       | Link: periplus.app
        
       | oc1 wrote:
       | I'm wondering where we are heading in the consumer business
       | space. The big ai providers can basically kill any small or
       | medium business and startup in a few days by integrating the
       | product into their offering. They have all data to look at trends
       | and make decisions. Investors are shying away to invest in ai
       | startups if they are not trying to be infrastructure or ai
       | marketplace platforms. So many amazing things could be possible
       | with ai but the big ai providers are actively hindering
       | innovation and have way too much power. I'm not a big fan if
       | regulations but in this case we need to break up these companies
       | as they are getting too powerful.
       | 
       | Btw most people don't know but Anthropic did something similiar
       | months ago but their product heads messed up the launch by
       | keeping it locked up only for american edu institutions. Openai
       | copies almost everything Anthropic does and vice versa (see
       | claude code / codex ).
        
       | 4b11b4 wrote:
       | opennote much better
        
       | bearjaws wrote:
       | RIP ~30 startups.
        
       | omega3 wrote:
       | I've had good results by requesting an llm to follow socratic
       | method.
        
         | dlevine wrote:
         | I haven't done this that much, but have found it to be pretty
         | useful.
         | 
         | When it just gives me the answer, I usually understand but then
         | find that my long-term retention is relatively poor.
        
       | vonneumannstan wrote:
       | The frontier models score better on GPQA than most human PhD in
       | their specific field of expertise. If you walk in to you local
       | University Department(Assuming you don't live in Cambridge, Palo
       | Alto or a few other places) GPT o3 is going to know more about
       | Chemistry, Biology, Physics, etc than basically all the Grad
       | Students there. If you cant turn that model into a useful tutor
       | then thats 100% a skill issue on your part.
        
       | d_burfoot wrote:
       | This is the kind of thing that could have been a decent AI
       | startup - hire some education PhDs, make some deals with school
       | systems, etc.
       | 
       | In the old days of desktop computing, a lot of projects were
       | never started because if you got big enough, Microsoft would just
       | implement the feature as part of Windows. In the more recent days
       | of web computing, a lot of projects were never started, for the
       | same reason, except Google or Facebook instead of Microsoft.
       | 
       | Looks like the AI provider companies are going to fill the same
       | nefarious role in the era of AI computing.
        
       | bsoles wrote:
       | Aka cheating mode. Their video literally says "Helps with
       | homework" and proceeds to show the "Final Answer". So much
       | learning...
        
         | ascorbic wrote:
         | "Cheating mode" is regular ChatGPT. This at least tries to make
         | you work for it
        
       | ElijahLynn wrote:
       | Love this!
       | 
       | I used to have to prompt it to do this everytime. This will be
       | way easier!
        
       | t1234s wrote:
       | I'm still waiting for the instant ability to learn kung-fu or fly
       | a helicopter like in the matrix.
        
       | kcaseg wrote:
       | I know it is bad for the environment, I know you cannot trust it,
       | but as an adult learning C++ in my free time, having a pseudo-
       | human answering my questions instead of having to look at old
       | forum posts with people often trying to prove their skills
       | instead of giving the simplest answer ChatGPT is something I
       | cannot just ignore -- despite being a huge LLM hater. Moral of
       | the story: none.
        
         | ascorbic wrote:
         | If it helps you feel better, it's really not that bad for the
         | environment. Almost certainly uses less energy than searching
         | for lots of forum posts.
        
       | naet wrote:
       | "Under the hood, study mode is powered by custom system
       | instructions we've written...."
       | 
       | It seems like study mode is basically just a different system
       | prompt but otherwise the exact same model? So there's not really
       | any new benefit to anyone who was already asking for ChatGPT to
       | help them study step by step instead of giving away whole
       | answers.
       | 
       | Seems helpful to maybe a certain population of more entry level
       | users who don't know to ask for help instead of asking for a
       | direct answer I guess, but not really a big leap forward in
       | technology.
        
       | ghrl wrote:
       | It would be incredible if OpenAI would add a way for schools and
       | other educational institutions to enforce the use of such a mode
       | on a DNS level, similarly to how they can force sites like
       | YouTube into safe mode. Many students use ChatGPT, often without
       | permission, to do work for them instead of helping them do the
       | work themselves. I see a lot of potential for a study mode like
       | this, helping students individually without giving direct
       | answers.
        
       | brilee wrote:
       | I'm working on a startup in this space and wrote up my thoughts
       | here: https://www.moderndescartes.com/essays/study_mode/
        
       | aryamaan wrote:
       | It is surprising that it is prompt based model and not RLHF.
       | 
       | I am not an LLM guy but as far as I understand, RLHF did a good
       | job converting a base model into a chat model (instruct based), a
       | chat/base model into a thinking model.
       | 
       | Both of these examples are about the nature of the response, and
       | the content they use to fill the response. There are so many
       | differnt ways still pending to see how these can be filled.
       | 
       | Generating an answer step by step and letting users dive into
       | those steps is one of the ways, and RLHF (or the similar things
       | which are used) seems a good fit for it.
       | 
       | Prompting feels like a temporary solution for it like how "think
       | step by step" was first seen in prompts.
       | 
       | Also, doing RLHF/ post training to change these structures also
       | make it moat/ and expensive. Only the AI labs can do it
        
         | danenania wrote:
         | The problem is you'd then have to do all the product-specific
         | post training again once the new base model comes out a few
         | months later. I think they'd rather just have general models
         | that are trained to follow instructions well and can adapt to
         | any kind of prompt/response pattern.
        
       | adamkochanowicz wrote:
       | From what I can see, this just boils down to a system prompt to
       | act like a study helper?
       | 
       | I would think you'd want to make something a little more bespoke
       | to make it a fully-fledged feature, like interactive quizzes that
       | keep score and review questions missed afterwards.
        
       | alexfromapex wrote:
       | I like these non-dystopian AI solutions, let's keep 'em coming
        
       | varenc wrote:
       | This feels like a classic example of a platform provider eating
       | its own ecosystem. There's many custom "GPTs" out there that do
       | essentially the same thing with custom instructions. Mr
       | Ranedeer[0] is an early well known one (30k stars). But now
       | essentially the same functionality is built straight into the
       | ChatGPT interface.
       | 
       | [0] https://github.com/JushBJJ/Mr.-Ranedeer-AI-Tutor
        
       | AvAn12 wrote:
       | $end more prompt$! Why $end one when you can $end $everal? $tudy
       | mode i$ $omething $pecial!!
        
       | lvl155 wrote:
       | The biggest concern for AI development right now is the blackhole
       | effect.
        
       | djeastm wrote:
       | I tried out the quiz function asking me about the Aeneid and
       | despite my answering questions incorrectly, it kept saying things
       | like "Very close!" and "you're on the right track!".
       | 
       | For example, the answer to a question was "Laocoon" (the guy who
       | said 'beware of Greeks bearing gifts') and I put "Solon" (who was
       | a Greek politician) and I got "You're really close!"
       | 
       | Is it close, though?
        
       | ai_viewz wrote:
       | I totally get what you are saying about the risk of boxing in an
       | LLM's persona too tightly, it can end up more like a mirror of
       | our own biases than a real reflection of history or truth. That
       | point about LLMs leaning toward agreeability makes sense, too
       | they are built on our messy human data, so they are bound to pick
       | up our habit of favoring what feels good over what is strictly
       | accurate. On the self-censorship thing, I hear you. It is like,
       | if we keep tiptoeing around tough topics, we lose the ability to
       | have real, rational conversations. Normalizing that kind of open
       | talk could pull things back from the extremes, where it's just
       | people shouting past each other.
        
       | syphia wrote:
       | In my experience as a math/physics TA, either a student cares
       | enough about the material to reduce the resources they rely on,
       | or they aim to pass the class with minimum effort and will take
       | whatever shortcuts are available. I can only see AI filling the
       | latter niche.
       | 
       | When the former students ask questions, I answer most of them by
       | pointing at the relevant passage in their book/notes, questioning
       | their interpretation of what the book says, or giving them a push
       | to actually problem-solve on their own. On rare occasions the
       | material is just confusing/poorly written and I'll decide to re-
       | interpret it for them to help. But the fundamental problems are
       | usually with study habits or reading comprehension, not poor
       | explanations. They need to question their habits and their
       | interpretation of what other people say, not be spoon fed more
       | personally-tailored questions and answers and analogies and self-
       | help advice.
       | 
       | Besides asking questions to make sure _I_ understand the
       | situation, I mostly repeat the same ten phrases or so. Finding
       | those ten phrases was the hard part and required a bit of
       | ingenuity and trial-and-error.
       | 
       | As for the latter students, they mostly care about passing and
       | moving on, so arguing about the merits of such a system is fairly
       | pointless. If it gets a good enough grade on their homework, it
       | worked.
        
       | jacobedawson wrote:
       | An underrated quality of LLMs as study partner is that you can
       | ask "stupid" questions without fear of embarrassment. Adding in a
       | mode that doesn't just dump an answer but works to take you
       | through the material step-by-step is magical. A tireless,
       | capable, well-versed assistant on call 24/7 is an autodidact's
       | dream.
       | 
       | I'm puzzled (but not surprised) by the standard HN resistance &
       | skepticism. Learning something online 5 years ago often involved
       | trawling incorrect, outdated or hostile content and attempting to
       | piece together mental models without the chance to receive
       | immediate feedback on intuition or ask follow up questions. This
       | is leaps and bounds ahead of that experience.
       | 
       | Should we trust the information at face value without verifying
       | from other sources? Of course not, that's part of the learning
       | process. Will some (most?) people rely on it lazily without using
       | it effectively? Certainly, and this technology won't help or
       | hinder them any more than a good old fashioned textbook.
       | 
       | Personally I'm over the moon to be living at a time where we have
       | access to incredible tools like this, and I'm impressed with the
       | speed at which they're improving.
        
         | hammyhavoc wrote:
         | There might not be any stupid questions, but there's plenty of
         | perfectly confident stupid answers.
         | 
         | https://www.reddit.com/r/LibreWolf/s/Wqc8XGKT5h
        
           | jychang wrote:
           | Yeah, this is why wikipedia is not a good resource and nobody
           | should use it. Also why google is not a good resource,
           | anybody can make a website.
           | 
           | You should only trust going into a library and reading stuff
           | from microfilm. That's the only real way people should be
           | learning.
           | 
           | /s
        
             | hammyhavoc wrote:
             | Ah yes, the thing that told people to administer insulin to
             | someone experiencing hypoglycemia (likely fatal BTW) is
             | nothing like a library or Google search, because people
             | blindly believe the output because of the breathless hype.
             | 
             | See Dunning-Kruger.
        
         | zvmaz wrote:
         | The fear of asking stupid questions is real, especially if one
         | has had a bad experience with humiliating teachers or
         | professors. I just recently saw a video of a professor subtly
         | shaming and humiliating his students for answering questions to
         | his own online quiz. He teaches at a prestigious institution
         | and has a book that has a very good reputation. I stopped
         | watching his video lectures.
        
         | easton wrote:
         | > Certainly, and this technology won't help or hinder them any
         | more than a good old fashioned textbook.
         | 
         | Except that the textbook was probably QA'd by a human for
         | accuracy (at least any intro college textbook, more specialized
         | texts may not have).
         | 
         | Matters less when you have background in the subject (which is
         | why it's often okay to use LLMs as a search replacement) but
         | it's nice not having a voice in the back of your head saying
         | "yeah, but what if this is all nonsense".
        
         | everyone wrote:
         | Yeah, I've been a game-dev forever and had never built a web-
         | app in my life (even in college) I recently completed my 1st
         | web-app contract, and gpt was my teacher. I have no problem
         | asking stupid questions, tbh asking stupid questions is a sign
         | of intelligence imo. But where is there to even ask these days?
         | Stack Overflow may as well not exist.
        
       | megamix wrote:
       | "Under the hood, study mode is powered by custom system
       | instructions we've written in collaboration with teachers,
       | scientists, and pedagogy experts to reflect a core set of
       | behaviors that support deeper learning including: "
       | 
       | Wonder what the compensation for this invaluable contribution was
        
       ___________________________________________________________________
       (page generated 2025-07-29 23:00 UTC)