[HN Gopher] AI breakthrough ChatGPT raises alarm over student ch...
       ___________________________________________________________________
        
       AI breakthrough ChatGPT raises alarm over student cheating
        
       Author : mfiguiere
       Score  : 70 points
       Date   : 2022-12-18 23:39 UTC (23 hours ago)
        
 (HTM) web link (www.ft.com)
 (TXT) w3m dump (www.ft.com)
        
       | spike021 wrote:
       | I mentioned this on twitter a week or two ago, but in a lot of
       | ways ChatGPT really reminds me of being in college about 10-15
       | years ago and using Wolfram Alpha a lot. It wasn't difficult back
       | then to put in some challenging math problems or historical
       | questions and get back results I could reference in notes for
       | things.
       | 
       | It's a concern but I feel like it's also just inevitable. There
       | will always be resources that can be used for cheating.
       | 
       | Hopefully students just learn to use these things as a resource
       | rather than a shortcut.
        
       | [deleted]
        
       | kypro wrote:
       | I guess I have a fairly cynical take on this. I think this just
       | exposes that AI is fundamentally going to undermine the value of
       | education in certain fields. The fact some tests can be "cheated"
       | by AI really just suggests that some skills have little to no
       | value in our future AI-prevalent world. Stopping people cheating
       | is going to change that.
       | 
       | In the same way that in the past being able to do fast, accurate
       | calculations in your head or on paper might have landed you job
       | as a "computer", today technology makes the idea of hiring
       | someone to be a computer absurd. And therefore any tests that
       | test someone's ability to multiply and divide large numbers is
       | basically a worthless test in today's world.
       | 
        | Fields where there's value humans can add won't be cheatable by
        | AI (at least not by today's generation of AI). Tests in these
        | fields will hold value. In some cases it will mean the education
        | and tests will evolve to educate and test areas where humans can
        | still add value - arguably this has already happened in maths.
        | But many fields could be replaced entirely.
       | 
        | I don't mean to sound unsympathetic to those whose jobs and
        | professions will be replaced by AI in the coming years and decades.
       | It makes me sad that someone could spend years obtaining and
       | perfecting skills which might soon become redundant, but for
       | better or worse AI systems like ChatGPT will fundamentally change
       | the value of some skills and educational fields. The sooner we
       | accept this the better we can adapt society for it.
        
         | jacquesm wrote:
         | > The fact some tests can be "cheated" by AI really just
         | suggests that some skills have little to no value in our future
         | AI-prevalent world.
         | 
         | That's like saying 'speech has no value'. It doesn't have value
         | on its own, but if you can't speak then suddenly there is a
         | whole pile of stuff you can no longer do. The ability to put
         | your thoughts down in writing and to organize them into a
         | hopefully coherent form is an act that has implications far
         | beyond the writing itself. Besides the fact that it can help
         | you to organize your thoughts in the process. Plenty of times
         | when I write a longer piece I find out halfway that one or more
         | of my assumptions were wrong and then I get to fix that and re-
         | evaluate my position. It isn't rare at all to go through
         | several such iterations for a single text and at the end of it
         | I have a much better insight into the material that I thought I
         | already knew well enough to write about.
         | 
         | Don't underestimate the value of these foundational skills.
        
         | visarga wrote:
          | I'll give you another anti-cheating solution: test all exams on
          | ChatGPT by giving it first the tasks, then the answers for
          | self-grading. If the model fails, use the problem; if not,
          | change it until ChatGPT fails. The task should always require
          | human intervention to pass.
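          | 
          | A rough sketch of that loop in Python. ask_model() and
          | grade_answer() are hypothetical stand-ins (ChatGPT has no
          | official API at the time of writing), and the revision step is
          | left to the instructor:
          | 
          |     def ask_model(prompt: str) -> str:
          |         """Hypothetical wrapper around an LLM chat endpoint."""
          |         raise NotImplementedError
          | 
          |     def grade_answer(task: str, answer: str) -> bool:
          |         """Human or rubric-based check of the model's answer."""
          |         raise NotImplementedError
          | 
          |     def is_ai_proof(task: str) -> bool:
          |         # Safe to put on the exam only if the model fails it.
          |         return not grade_answer(task, ask_model(task))
          | 
          |     # Instructor workflow: keep revising until the task requires
          |     # human intervention to pass.
          |     # while not is_ai_proof(task):
          |     #     task = revise(task)  # done by hand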
        
           | oceanplexian wrote:
           | I think the purpose of exams is deeply misunderstood. They
           | are supposed to be a feedback system for the teacher and the
           | student so the student can improve. They're not a competition
           | or an intelligence measuring stick. ChatGPT is kind of the
            | perfect tool for that; if a student doesn't understand, maybe
            | AI can help them get to the right answer and give them
            | personalized attention - something that IMO teachers fail at
            | due to the nature of large class sizes, where they can't
            | tutor each student individually.
        
         | pupppet wrote:
         | But isn't there value in a student researching and putting
         | their thoughts to (virtual) paper? A lot of this testing is
         | more about the journey than destination, and students are
         | losing these valuable skills by cheating.
        
         | sega_sai wrote:
         | I'm not sure I buy this argument. People still need to learn
         | that 2+2=4, derivative of sine is cosine and similarly simple
         | things in non-STEM fields. Sure these questions can be answered
         | by ChatGPT trivially, but it doesn't mean they don't need to be
         | taught.
        
           | eagleinparadise wrote:
           | That's not the way the system works. You spend a semester or
           | a year going deep about Calculus, Geometry, AP Euro, or
           | whatever. These are all things most people don't need to know
            | unless they specialize.
           | 
            | Perhaps instead, we should have semester- or year-long classes
            | through grades 8-12 and college that discuss the nature of
            | manias, like
           | analyzing the tulip bubble, crypto bubble, 2008 mortgage
           | bubble, etc. These are all common things that happen over and
           | over again. Lessons which can permanently (and frequently) be
           | called upon.
           | 
            | You can measure and grade people on these topics, which
            | aren't officially taught. We just decide to teach less
            | important things.
        
         | kenjackson wrote:
         | With schools though you often need to learn things along the
         | way that can easily be cheated with an AI, but the end goal not
         | so much.
         | 
         | For example, writing a basic screenplay can be done with
         | ChatGPT, but it can't write one that will likely be turned into
         | a movie/show.
         | 
         | And automation will continue to make some jobs obsolete and
         | create new ones. For example in youth sports we don't get great
         | coverage of youth/school events. I'm hopeful in the not too
         | distant future we can get game summaries and stats for all
         | games -- and editors will probably be needed to add commentary.
         | That's something I'd love to see.
        
         | mustacheemperor wrote:
         | >any tests that test someone's ability to multiply and divide
         | large numbers is basically a worthless test
         | 
         | Right now, teaching elementary school aged children to solve
         | complex multiplication and division is considered an effective
         | way to teach people critical thinking, persistence, and basic
         | logical reasoning structure. Structured education typically
         | relies on an ability to assess student improvement, so 3rd
          | graders don't get a calculator when they take a multiplication
          | table test, even though we'd all use one at our desks at work.
         | 
         | I would surmise that likewise there is value to students
         | learning to read, analyze, and report insight about a text, to
         | attempt to identify symbolism and patterns, to communicate
         | their ideas effectively, and to be assessed about that - even
         | if by the time they're in the workforce they'll usually just
         | ask an AI to do it.
        
           | musicale wrote:
           | Learning how to multiply and divide large numbers was one of
           | my earliest exposures to numerical algorithms.
           | 
           | I usually use a calculator or Python for large numerical
           | computations, but the algorithmic understanding is something
           | I use every day.
        
         | SpicyLemonZest wrote:
         | The problem is with areas like reading comprehension. The
         | underlying skill is still human-value-added today - asking
         | ChatGPT to produce a summary of your comment isn't anything
         | like a replacement for reading it and understanding it myself.
         | But educators can't read minds to determine if real
         | comprehension is taking place, and the only way they currently
         | know how to measure it is through tests which ChatGPT can
         | cheat.
        
         | teeray wrote:
         | I think that the only solution may be for the essays to be
         | written in controlled environments (school). If AI forces
         | schools to do more school in school and less homework, I count
         | that as a massive win. What I would give for the countless
         | hours of my childhood and teenage years wasted on useless
         | homework...
        
         | comfypotato wrote:
         | I think you meant "stopping people cheating _isn't_ going to
         | change that".
         | 
         | In my friendly/complementary rebuttal, the catch lies somewhere
         | in the phase of learning foundational material. In the same way
         | that traditional mathematics education emphasizes doing
          | problems by hand that could be solved by computer, there's
         | something to be said for learning to parse the research
         | material that ChatGPT has indexed when you're writing a term
         | paper. It's worth noting here that math education may be moving
         | away from the rote memorization and routine problem solving
         | that today's math researchers state are essential.
         | 
         | ChatGPT has made it much harder to detect cheating on untimed
         | power tests. The biggest downside I see to this is that it
         | makes an old problem worse. If a prof has no choice but to
         | administer a non-computer (proctored) timed exam, students who
         | take tests slowly or have anxiety disorders are going to
         | suffer.
        
           | oceanplexian wrote:
           | At the end of the day though, it doesn't really matter. The
           | person cheating is failing themselves if the foundational
           | material you're talking about is so important. If they can
           | somehow manage to pass enough courses while using an AI, and
            | then get a degree out of it, or a 4.0 GPA, or whatever, then
           | the degree wasn't that valuable to begin with.
           | 
           | This already mirrors the real world pretty well. Outside of
           | your first job or two, nobody really cares what your GPA is
           | or what college you graduated from. In the real world if you
           | managed to cheat with an AI you'd probably get a nice
           | promotion. I don't personally see a downside, other than the
           | demise of certain flawed notions of academic success that
           | deserve to die out anyway.
        
             | Jensson wrote:
              | Let's say we are training ChatGPT 2: would you let it use
              | ChatGPT for solving things? Do you think it would learn
              | better if it had to learn by itself, without relying on
             | ChatGPT?
             | 
             | We learn by doing. Relying on smart tools puts you in a
              | local optimum: it is hard for you to improve over those
             | tools, but people who don't rely on tools continue
             | improving and then start using the tools later.
        
               | comfypotato wrote:
               | I was just trying to allude to the nature of the
                | universities' business (namely, assessing that they've
                | taught what they claim to teach).
               | 
               | My PhD mentor is actually leading a discussion about
               | ChatGPT use by students, and the tone is mostly concern.
               | But it's not all bad; it's an incredible tool for quickly
               | diving into a subject at a surface level and getting your
               | bearings regarding what's important. It saved my ass on
               | one of my finals too when the rest of the questions were
               | harder than I expected and ChatGPT helped me tear through
               | the remaining T/F questions.
        
             | [deleted]
        
         | jelled wrote:
         | You'll still be able to test writing, but going forward it will
          | need to be done in a proctored setting, similar to how you would
          | test grade-school math given the existence of calculators.
        
           | ceres wrote:
           | Honest question: how valuable is the current mode of testing
            | (based on rote memorization) if much of that can be done by AI
           | anyway? Maybe it's time for new testing methodologies?
        
             | HEmanZ wrote:
             | Where are people testing just rote memorization? At least
             | 20 years ago when I was in lower school, most of what we
             | did had only a very small memorization component. The SAT,
             | ACT, SAT subject exams, etc only barely rely on
             | memorization, and many questions have no memorization
             | component at all. It's why 4th graders can get top-
             | percentile scores on some of these.
             | 
             | You need both memorization and applied reasoning to be
             | taught in school. Without something to apply reasoning to
             | you can't really reason about anything. And without applied
             | reasoning, memorization is pretty useless.
        
         | mrtksn wrote:
         | Yeah, no. Training people doesn't work like that. Education is
          | about training people; we don't utilize the output of the kids
         | and reward them with grades that they can exchange for food and
         | toys.
         | 
         | The whole point of testing is to put them in hypothetical
         | situations and grade their progress with purpose of being aware
         | of their development so we can improve it. Another thing we do
         | is selecting the particularly good ones for further advanced
         | training.
         | 
         | The problem with cheating is that it provides wrong data about
          | their progress; you don't want to end up with a generation that
         | cheated their way up without learning anything.
         | 
         | The selection for further training is probably not that big of
          | a problem; it's mostly about supply and demand, and ChatGPT won't
         | change that.
        
           | waynesonfire wrote:
           | Do you think Trump and Sam Bankman-Fried took your advice? I
           | need a pool of people to flip burgers and take orders and
           | those people better not take cookies from the jar. For
           | everyone else, it's called creating shareholder value. Your
           | advice makes for great burger flippers.
        
             | raydiatian wrote:
             | You're either a burger flipper or a con man? That's your
              | binary view of mankind?
        
           | yanderekko wrote:
           | >The whole point of testing is to put them in hypothetical
           | situations and grade their progress with purpose of being
           | aware of their development so we can improve it. Another
           | thing we do is selecting the particularly good ones for
           | further advanced training.
           | 
           | You're missing the point. Why are we making them achieve
           | progress on something that is being trivialized by AI? Let
           | them use AI as a complementary tool and test them on
           | something that requires measurable human skill. If this is
           | difficult it's a sign that perhaps you're not teaching
           | anything valuable.
        
             | rocketbop wrote:
             | Respectfully, I think it's you who are missing the point.
             | There is value in teaching humans to communicate and
             | synthesize information. It's possible that in the future
             | the best thinkers and philosophers and historians and
             | political analysts will be AI, but 1/ we are very very far
             | from that, and 2/ it's important for civilisation that we
             | don't leave thinking to the machines.
        
               | mrtksn wrote:
                | Exactly. Education is not a vocational school where we
                | teach people useful skills so they can start doing
                | whatever they learned. Machines have long been able to do
                | things that we teach in schools; the purpose of teaching
                | is developing people. Once they are developed (meaning
                | they have built an understanding of how nature and society
                | work and how to interact with them) and it's time to
                | expect an output, only then can we talk about teaching
                | useful skills.
                | 
                | Vocational schools do exist and they are useful, but
                | that's not what we are concerned about. People who cheat
                | instead of learning useful skills are simply wasting their
                | time and money.
        
             | drdeca wrote:
             | Curriculum learning. It is pedagogically useful to teach
             | simpler and easier versions of something before teaching
             | the harder versions.
             | 
             | While you likely always have a calculator with you which
             | can do division much more quickly than you can do long
             | division, if you want to teach students more advanced math,
             | it is nonetheless useful to have previously taught them
             | long division of numbers, so that you can teach them long
             | division of polynomials.
             | 
             | If students haven't learned how to do long division of
             | decimals, and how that works, it is harder to teach them to
             | do long division of polynomials.
             | 
             | Of course, nowadays there are computer programs which can
             | do long division of polynomials more quickly than a person
             | can, but this is just an example.
             | 
              | Suppose there is a sequence of levels, where for each
             | level n, it is much easier to teach someone level n+1 if
             | they already have an understanding of level n. If computers
             | can do levels up to level k faster than humans can, it may
             | still be useful to start teaching someone at level 1 and
             | then proceed to level 2, and so on, even though computers
             | can already do those levels, until finally it is time to
             | teach them level k+1 which we have not yet managed to teach
             | computers to do.
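              | 
              | As a concrete illustration of that last example, here is a
              | minimal sketch of polynomial long division in Python
              | (coefficients listed from the highest degree down); it is
              | the same idea as the long division taught for plain numbers:
              | 
              |     def poly_divmod(dividend, divisor):
              |         # e.g. x^2 + 3x + 2  ->  [1, 3, 2]
              |         out = list(dividend)
              |         lead = divisor[0]
              |         for i in range(len(dividend) - len(divisor) + 1):
              |             out[i] /= lead  # next quotient coefficient
              |             if out[i] != 0:
              |                 for j in range(1, len(divisor)):
              |                     out[i + j] -= divisor[j] * out[i]
              |         cut = len(dividend) - len(divisor) + 1
              |         return out[:cut], out[cut:]  # quotient, remainder
              | 
              |     # (x^2 + 3x + 2) / (x + 1)  ->  ([1.0, 2.0], [0.0])
              |     print(poly_divmod([1, 3, 2], [1, 1]))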
        
             | varajelle wrote:
             | I think it's the same reason why we are teaching assembly
             | even if there are compilers. Or teaching Latin even though
              | it's a dead language.
        
           | blobbers wrote:
           | This is an interesting point of view.
           | 
           | I always viewed grades as a way of establishing intellectual
           | hierarchy, which probably wasn't the most healthy way to look
           | at them; a way of deciding where amongst my peers I stacked
           | up. In University, we had class ranks.
           | 
           | Those with the best grades and thus highest rank would be
           | given the best opportunities for wage-slave class work.
           | Eventually people hit University and split up according to
           | passion, goals as well as grades. Of course, there were
           | exceptions; entrepreneurs were the folks that buck the system
           | and don't need grades to succeed. In a sense, the grades
           | _were_ exchanged for food stamps and toys, only later in
           | life. The winners drove the lambos and the losers are still
            | living in their parents' basements.
        
             | mrtksn wrote:
             | > Those with the best grades and thus highest rank would be
             | given the best opportunities for wage-slave class work
             | 
             | I don't believe that this observation is true. Those who
              | got into tech, bitcoin, etc. at the right time got the
              | lambos - not the most brilliant ones. Tech is full of people
              | with inert brains who make much, much more money than some
              | of the most brilliant scientists out there.
              | 
              | It has been a good time to be into plumbing instead of
              | astrophysics if you are optimising for money, BTW. Just like
              | knowing ReactJS has provided better income opportunities
              | than being a rocket scientist.
        
           | ospray wrote:
            | Which means testing students under supervision and letting
            | them use AI for assignments is probably going to work fine.
        
             | mrtksn wrote:
             | Sure, I also find AI and all other devices that we already
              | had (calculators, programming languages, simulators, etc.)
             | very useful for learning. Incorporating the latest and
             | greatest tools into education works well.
        
           | hosh wrote:
            | Before ChatGPT, there was already criticism of education as
            | we know it today. The notion of "education" has not always
            | been about training, and putting someone in a hypothetical
            | situation and
           | grading progress -- indeed, the measurement of progress
           | itself -- is not present in all notions of education, or all
           | methods of learning, or even all systems of determination of
           | truth (epistemology). Behavioral conditioning with reward
            | incentives is also not the only method in education.
           | 
            | One of the interesting voices on this is John Taylor Gatto.
            | Mainstream methods of "training" as you described them may
            | indeed be the most common today, but they are by no means
            | the most effective, and Gatto goes into detail about this in
           | his book, _The Underground History of American Education_.
           | 
           | For example, an older notion of education is not so much to
           | train people to mechanically repeat results (like you would
           | train people in a factory), but rather, to develop informed
           | and educated citizens that are able to reason, think
           | critically, and participate meaningfully in a republic.
           | There's little point in having a voting body of citizens
            | who are unable to think for themselves and merely regurgitate
            | ideas they hear.
        
             | mrtksn wrote:
             | Well that's a strawman argument right there. Obviously by
             | training I don't mean perfecting repetitive tasks or
              | anything of that sort. If that were the case, I would have
              | advocated training on prompts or fast typing or something
              | like that.
             | 
             | You can check my other replies in this thread to understand
             | better.
        
           | aeternum wrote:
           | Training people (or anything) using a flawed objective
           | function is not useful.
           | 
           | Writing essays, specifically expanding a list of well-
           | reasoned bullet points into communicative written prose will
           | be a worthless operation in the future, and likely already
           | is.
        
             | mrtksn wrote:
             | Have you not noticed that when you write something down
             | with the purpose of summarising it in a structured way you
             | tend to understand it better?
             | 
              | There's no market for kids' essays; we never made them write
              | these things to produce useful output. The market for
              | grown-ups' essays is also very small; making kids write
              | things down was never about covering the case of them
              | becoming journalists or something. It's just that it makes
              | them learn to think and to structure their thought process.
              | That's why people take notes and write things down even if
              | it's not going to be graded or read by anyone else.
        
               | hosh wrote:
               | Agreed. Yet many students, parents, and even teachers
               | lost sight of this purpose.
        
             | lemmsjid wrote:
             | There is a progression though.
             | 
             | For students, the objective is to write what to an adult
             | would be mediocre essays, because they are on a path to
             | writing adult level essays.
             | 
             | The AI can write mediocre essays, but not adult level
             | essays.
             | 
             | Assuming that a student must write a bunch of mediocre
             | essays in order to progress past the AI to write adult
             | level essays, then they need to be evaluated on writing a
             | bunch of mediocre essays.
             | 
             | If the teacher cannot distinguish between AI output and the
             | student's output, and the result is that we give up on
             | evaluating mediocre student essays, then the students are
             | ultimately inhibited from progression to adult level
             | essays, which, currently, are superior to the AI's output.
        
             | wwweston wrote:
             | Automatability _may_ make an output ubiquitous to the point
             | where marginal value for any given output is small, but if
             | it was really  "worthless" no one would bother automating
             | it in the first place.
             | 
             | We automated a lot of arithmetic and then some higher math
             | decades ago, but that hasn't meant that knowing how to do
             | it is worthless.
             | 
             | Effective communicative prose is an inherently high-value
             | part of coordinating human activity. The number of roles
              | exclusively dedicated to it _might_ shrink but it's part
             | of nearly every activity in a specialized society. What's
             | most likely is that people will use it as a time-saving
             | tool but everyone will be evaluating how good the output is
             | and tweaking it. And that means until we're at a point
             | where people trust AI to actually evaluate the value of AI
             | output, many of these roles (and certainly anyone building
             | AIs that do this) will need to know how to recognize good
             | output -- and the training involved in that probably
             | involves practice in personally doing it.
        
           | darepublic wrote:
           | > you don't want to end up with a generation that cheated
           | their way up without learning anything.
           | 
           | This sounds like kind of a good summary of what we are doing
           | to ourselves as we speak. Case in point: the crypto scandal,
           | the housing bubble etc. A ton of people made out like
           | thieves.
        
         | kube-system wrote:
         | Someone who can use a calculator to solve a problem, still has
         | to understand the concepts being used in order to know what
         | buttons to press on the calculator. They will have learned what
         | those calculations mean, and how they work.
         | 
         | Using an AI to crap out an essay with no understanding of the
         | topic is not the same thing. While an essay is the way we test
          | students, it is a stand-in for measuring _understanding_,
         | which is the ultimate goal.
        
         | jimbokun wrote:
         | This is understating the problem.
         | 
         | In months or years, it's not clear if any human will be able to
         | provide economic value, relative to state of the art AI.
         | 
         | I don't see where any fundamental barriers lie between current
         | AI and surpassing humans across the board. What can humans do
         | that is unachievable by AI in the foreseeable future, given the
         | current rate of advancement?
        
         | seppel wrote:
         | > I guess I have a fairly cynical take on this. I think this
         | just exposes that AI is fundamentally going to undermine the
         | value of education in certain fields. The fact some tests can
         | be "cheated" by AI really just suggests that some skills have
         | little to no value in our future AI-prevalent world.
         | 
         | If you think about it, most of the stuff you learn at school
          | (beyond reading and writing) doesn't have much value apart from
          | certifying that you are not stupid and are compliant enough to
          | do stupid tasks.
        
           | gonzo41 wrote:
            | I had to rearrange an equation at work yesterday. I let
           | everyone know that 9th grade was not a waste.
        
           | AnIdiotOnTheNet wrote:
            | Really, what we need to do is take a good hard look at how the
            | education system works and whether or not it is aligned with
           | our goals. Sadly, that's a pretty massive undertaking and no
           | one can be arsed to support doing difficult things on a
           | societal level any more.
        
       | phonescreen_man wrote:
        | My son came to me the other day asking about a question on
        | properties of materials for his engineering BTEC. It just so
        | happened that the night before I had been playing around with
        | ChatGPT for ffmpeg commands and Stable Diffusion prompts. I told
        | him about it and he got right on it. It answered the question for
        | him with plenty of extra info he had not considered. He tried to
        | add it as a reference, but the instructor said it was not
        | allowed. However, the instructor, who had never heard of ChatGPT,
        | was so impressed that he started using it on all the questions to
        | see how good it is - and also so he can detect others using it.
       | 
       | It's a game changer and a really great tool. If it helps people
       | learn then surely that must be a positive.
        
       | DeWilde wrote:
       | If a technology can make an educational method obsolete maybe it
       | is time to rethink the educational method?
       | 
       | And if students can use a technology like ChatGPT to complete
       | their education, are they not then prepared for real life tasks
       | and situations that they can also complete using ChatGPT?
        
       | hiidrew wrote:
       | I successfully used a GPT-2 output identification tool to see
       | whether something was generated using ChatGPT. I think it only
       | works if the text has a minimum number of characters. I imagine
       | it's only a matter of time until Turnitin integrates something
       | like this, or some other start-up comes along in edtech. Either
       | way, the continuous arms race on generative media /
       | identification of it will be fascinating to watch as it unfolds!
        
         | hiidrew wrote:
          | I think this was the tool I tested it with:
         | https://huggingface.co/openai-detector
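          | 
          | For anyone who wants to script it instead of using the web
          | demo: a hedged sketch with the transformers library. As far as
          | I know the demo is backed by OpenAI's RoBERTa GPT-2 output
          | detector, but the checkpoint name and label strings below are
          | assumptions, so check the model card before relying on it:
          | 
          |     from transformers import pipeline
          | 
          |     # Checkpoint name is an assumption; see the model card.
          |     detector = pipeline("text-classification",
          |                         model="roberta-base-openai-detector")
          | 
          |     result = detector("Paste the suspect essay text here.")[0]
          |     print(result)  # e.g. {'label': 'Fake', 'score': 0.98}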
        
           | ivegotnoaccount wrote:
           | Well, as explained in some of my other comments, it doesn't
           | seem that effective. Most GPT-generated garbage will probably
            | be GPT-3 based, which doesn't seem to trigger as "fake" very
            | often (I got more "human" results when I tested it with
           | ChatGPT), while on the other hand, it says that several of my
           | comments are fake with >99% certainty.
        
             | hiidrew wrote:
             | I see, that's obviously an issue. I only tested it with
             | something I generated and someone's long form blog post.
        
       | Julesman wrote:
       | 100% not worried.
       | 
       | There are some real short-term issues. Teachers who don't know
       | anything about it. Students who think the teacher won't notice
       | that their illiterate self got smart over the weekend.
       | Legislators using it as a political football.
       | 
        | But ultimately, before long, text AI will be good enough to fool
       | anyone. And you will be able to feed it your style and it will
       | spit out text appropriate for your grade level, just good enough
       | for the A. And cheating will happen.
       | 
       | Pandora's box is not going to be closed.
       | 
       | Does that spell universal illiteracy and the end of higher
       | learning? lol. Nah. Testing will certainly have to be creative.
       | 
       | Just off the top of my head. Have every student write one
       | paragraph during a whole period and make it as good as they can.
       | That creates a baseline. And, you could actually use an AI (or
       | not) to measure future results against it and map growth.
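        | 
        | A crude sketch of that baseline comparison using off-the-shelf
        | sentence embeddings (my own illustration; the model name and the
        | 0.3 threshold are arbitrary choices, and embedding similarity is
        | a rough proxy for writing style at best):
        | 
        |     from sentence_transformers import SentenceTransformer, util
        | 
        |     model = SentenceTransformer("all-MiniLM-L6-v2")
        | 
        |     baseline = "One paragraph written in class, supervised."
        |     submission = "The essay turned in later from home."
        | 
        |     a, b = model.encode([baseline, submission])
        |     score = float(util.cos_sim(a, b))
        |     if score < 0.3:
        |         print("Big departure from the baseline - worth a chat,")
        |         print("not proof of cheating.")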
       | 
       | We will figure this out and AI, at least for a time, will be a
       | good thing. Quantum Computer AI that can predict the future...
       | not quite as excited about that one. heh.
        
       | knaik94 wrote:
       | Arguing what should and shouldn't count as cheating misses the
       | bigger picture. The question is how will school assessments adapt
       | to effectively evaluate how well kids learn?
       | 
       | I am not happy that ChatGPT and other LLMs are the catalyst, but
       | I believe the education system is archaic relative to the
       | technology we have. Even at high school+ levels, lines of code is
       | not seen as an indicator of productivity but word count and pages
       | written are.
       | 
       | I genuinely see ChatGPT as being less useful than Wolfram Alpha
       | for a lot of STEM applications. It will make regurgitating things
        | like fill-in-the-blank questions trivial, but those questions
       | already fail at ensuring kids pay attention or learn anything.
       | 
       | On the other hand, ChatGPT makes a Literature/Writing teacher's
       | life very difficult, since the main method of take home
       | assessments become trivial to complete without knowing the
       | underlying material. It was accurately able to answer "What does
       | Big Brother symbolize in George Orwell's 1984 and how does it
       | apply to modern day? Include 4 parallels in your answer, and try
       | not to talk about obvious ones." I followed up with "Go into
       | detail about the first bullet and include as many details as
       | possible." and I will include just the first paragraph of the
       | response, "Propaganda and manipulation of information is a key
       | aspect of the society depicted in George Orwell's 1984. The
       | government in the novel controls all forms of media and uses it
       | to disseminate propaganda that promotes the ruling Party's
       | ideology and demonizes its enemies. The Party's slogans, such as
       | "WAR IS PEACE, FREEDOM IS SLAVERY, IGNORANCE IS STRENGTH," are
       | repeated constantly in order to instill them into the minds of
       | the population and reinforce the Party's control." I followed up
       | for each bullet point and it was able to give reasonable quotes
       | for each. I asked it to give a different quote, and it produced a
       | reasonable alternate supporting quote.
       | 
       | The only "issue" I've noticed is that it's a little redundant,
       | but I don't know how different that is from high schoolers being
       | overly verbose to meet a word count goal. The cyclical way of
       | presenting ideas makes copy-paste responses easier to spot,
       | because ChatGPT has a pattern of Restate Question, Give Quote,
       | Explain in a few sentences, Restate Question. But I doubt I would
       | be able to tell with more than 70% confidence when read out of
       | context, and I think I would be fooled in 95% of cases if the
       | words are changed and some ideas are expanded on.
       | 
       | But essay writing and even college admission writing is going to
       | be full of essays where ChatGPT helped. The only solution I see
        | is pushing for all take-home assessments to require a presentation
        | portion. In-class essay writing assessments will require
       | everything to happen during class time. I don't really know how
       | well schools can police access to ChatGPT when kids start getting
       | cell phones in middle school. It would only take one shared login
       | to create difficulty for teachers. Language models outside of
       | ChatGPT already exist that will function like a smart writing
       | assistant that can generate sentences for you, like OPT or GPT
       | Neo.
       | 
        | Covid already caused a large trend of current middle and high
       | schoolers struggling to read at a level below their grade. There
       | are studies supporting and disagreeing with this conclusion, and
       | I imagine the effects will only be clear in hindsight. My
       | experience while tutoring is that distance learning caused a lot
       | more students to struggle but is only a small part of a larger
        | trend. Kids have to grow up in a much more difficult environment
        | that's constantly trying to exploit their attention. Video games
       | are more addictive, social media is more readily available,
       | friendships and gathering have shifted online, schooling has
       | shifted online, Covid meant teachers struggled to identify
       | lagging children, school incentives mean that kids don't really
       | get held back a grade, and the most recent complication is
       | ChatGPT being able to answer reading comprehension/essay
       | questions at a high school level.
       | 
       | The opposite, positive, thing is that ChatGPT makes information
       | accessible to kids that come from underprivileged backgrounds. I
       | trust ChatGPT will be able to answer middle school to early high
       | school level STEM questions accurately, or a Math focused model
       | will be released, similar to what the PubMedGPT LLM is attempting
        | to do by being trained on PubMed biomedical abstracts and
        | portions of The Pile. ChatGPT struggles with numbers, but that
        | feels like a speedbump, especially considering the current
        | capabilities of Wolfram Alpha.
       | 
       | I plugged a generic college admissions essay prompt into it.
       | There clearly are lots of tropes, but the framework of a good
       | essay is there. I wonder how something like this will affect
       | international student applicants who learned English as a second
       | or third language and struggle with it. Standardized testing like
       | SAT and GRE was becoming de-emphasized but I can see this causing
       | the opposite effect.
       | 
       | "Some students have a background, identity, interest, or talent
       | that is so meaningful they believe their application would be
       | incomplete without it. If this sounds like you, then please share
       | your story. Respond like a graduating high school senior applying
       | to Ivy League Colleges. As a graduating high school senior
       | applying to Ivy League colleges, I feel that my background,
       | identity, interests, and talents have all contributed to making
       | me the person I am today and have shaped my aspirations for the
       | future.
       | 
       | Growing up in a low-income, immigrant household, I have faced
       | many challenges and obstacles that have taught me resilience,
       | determination, and the importance of hard work. These experiences
       | have also given me a unique perspective on the world and a deep
       | appreciation for diversity and inclusivity.
       | 
       | My interests and passions have always centered around social
       | justice and advocating for underrepresented communities. I have
       | been actively involved in various community service and activist
       | organizations, working to address issues such as racial
       | inequality, immigrant rights, and LGBTQ+ rights. I have also
       | excelled academically, achieving high grades and participating in
       | numerous research and leadership opportunities.
       | 
       | As I look to the future, I am excited to continue pursuing my
       | passions and making a positive impact on the world. I believe
       | that an Ivy League education would not only provide me with the
       | skills and knowledge necessary to succeed, but also give me the
       | opportunity to be part of a community of like-minded individuals
       | who are committed to making a difference. I am confident that my
       | background, identity, interests, and talents will make me a
       | valuable addition to any Ivy League college, and I am eager to
       | share my story with you."
        
       | bestcoder69 wrote:
       | How is chatgpt, in particular, the breakthrough? Seems like you
       | can get similar results in other models, right?
        
         | amanj41 wrote:
         | It's an accessible interface that has gone viral enough for
         | non-technical students to catch wind of & use it.
        
         | makestuff wrote:
         | IMO searching on stack overflow/googling requires some level of
         | technical understanding. Ex: You can't just paste your homework
         | problem into google and get an answer. However, ChatGPT is just
          | a better, higher-level abstraction of Google search. You can
         | 100% paste in a full leetcode question and get an answer.
         | Obviously it is wrong sometimes, but in the next few years it
         | will become more accurate.
         | 
         | This is no different than the people I was in CS with who would
         | just copy other people's projects/homework though. If you want
         | to use ChatGPT for 4 years in college and then are able to get
         | a job and do the job somehow I don't really care. I would be
         | surprised if you could do the job well, but I am sure some
         | people will be able to.
        
       | neatze wrote:
        | I wonder if it is possible to use Fiverr to cheat?
        | 
        | (Seems like it; I would be very surprised if it hasn't already
        | been used that way for many years.)
        
       | cptcobalt wrote:
       | ChatGPT is just like a calculator. We should allow it, but you
       | should be able to prove you still know the material. This is
       | where things like written exams still win (but not everything has
       | to be a written exam).
       | 
       | My takeaway from this is: good riddance, just fix the way we
       | teach students instead. The most painful part of school for me
        | was the real awareness that not every question is valid, and that
        | the throwaway work given to you (vs. that of, say, exams) was
        | most often the most junk & inane work you could do. (NB: I've
       | got ADHD, and for me that worked out to always underperforming
       | and missing homework but doing great on exams.)
       | 
       | How can the state of education reinvent itself to embrace
        | ChatGPT, computer science, and yes, even calculators? Teachers
        | need to stop plugging their ears and pretending these _good
        | tools_ don't exist.
        
       | andrewallbright wrote:
       | Something I've realized after using chatGPT since the preview
       | released is that I am still responsible for knowing what the
       | possibility space is for what I want to do.
       | 
       | This helps in two ways. First, it helps me formulate my requests
       | of ChatGPT. Second, it helps me discover incorrect output which I
       | can then either fix myself or make a subsequent request of
       | chatGPT.
       | 
       | I consider ChatGPT an extremely eager junior dev who makes
        | mistakes by moving too quickly, at least at this point in time.
        | (I'm sure it'll get much better very, very soon.)
        
       | [deleted]
        
       | jabthedang wrote:
        
       | drdaeman wrote:
       | https://archive.is/BesJA
        
       | periheli0n wrote:
       | Computer science, and academia in general, has always adapted to
       | technical progress, although slowly.
       | 
       | In the case of ChatGPT and similar LLMs, these should become part
       | of the toolbox that students are being taught. I.e., how does it
       | work, what can it do, what are its limits, how can it be used to
       | help solve a problem or complete a task.
       | 
       | An exam question could then be e.g. to ask ChatGPT for an essay
       | on a topic, critically discuss its shortcomings, and improve the
       | essay e.g. by adding references and deeper discussion, which will
       | be graded.
       | 
       | Alternatively, use ChatGPT to iteratively discuss and improve the
       | essay. The whole chat transcript should be included and will be
       | graded.
        
       | jeffrallen wrote:
        | I asked ChatGPT to tell me whether examples were more likely to
        | come from a large language model or from a student. It failed
        | miserably at the job. For instance, I asked it to write a short
        | paragraph on Shakespeare and then asked it to determine whether
        | the paragraph came from it. It said that the paragraph (which it
        | had generated three prompts earlier) was written by a student!
        
       | paulpauper wrote:
       | It's always students who are blamed for cheating or laziness, but
       | never the administrators, teachers, etc. for failing to create
       | sufficiently engaging work or curriculum that students do not
       | feel compelled to cheat in the first place. Yes, some cheating is
        | always inevitable, and it's unfair to kids who do not cheat, but
        | why is the framing always one-way? It's as if productivity,
        | automation, and collaboration are valued in corporate America, but
        | it's the opposite in school.
        
         | knaik94 wrote:
         | I don't know how many teachers or administrators will say it
         | outright, but LLMs and ChatGPT assisted essays are a lot more
         | difficult to detect. ChatGPT is accurate enough to answer many
         | typical reading comprehension questions at a high school+
         | level.
         | 
         | Students will always feel some incentive to cheat, even if they
         | wouldn't in isolation, they could feel pressured based on their
         | peers. A strong curriculum isn't enough to encourage every
         | single student not to take shortcuts. Despite all external
         | circumstances, it's still an active choice to cheat. Cheating
         | is framed one-way because it's generally discussed imagining
         | the laziest and most adversarial student possible, similar to
         | how you'd red team for security testing.
         | 
         | I think the concern is things like ChatGPT will lead to a lot
         | more students cheating. I think it's human nature to want to
         | take shortcuts. I agree with you that the way in which cheating
         | is demonized in a school environment is counterproductive in
          | the long run. I think admins and teachers have hidden behind
         | students' fear of academic integrity violations being reported
         | on a college application to not have to admit that most are
         | terrible at catching cheaters. It's from the perspective of
         | imagining ignorant teachers and adversarial students. Most
         | minor cases of cheating aren't caught, even at a college level.
        
       | dougSF70 wrote:
       | If the students were studying loquaciousness then ChatGPT would
        | give them an advantage; otherwise, while the output is good, it is
        | probably a 'C' grade rather than an 'A' grade.
        
       | nickelcitymario wrote:
       | I've heard this concern a lot lately. It's understandable. But I
       | think it's shortsighted.
       | 
       | Does using a calculator make you a cheater at math? No, you still
       | needed to understand the concepts.
       | 
       | Does using ChatGPT make you a cheater at school? While ultimately
       | that's up to the schools to decide, I would argue it shouldn't.
       | Because you need to have enough understanding to ask the right
       | question as well as to be able to spot what's wrong in the
       | answer.
       | 
       | For example, I was helping my kid out with their Java homework
        | and we were both stuck for a good hour. Finally I loaded the
       | question into ChatGPT.
       | 
       | The answer that came back helped us solve the problem. But we
       | didn't just cut and paste. We looked at that solution and
       | compared it to our own to find the problem.
       | 
       | I don't consider that cheating. Others may feel differently.
       | 
       | Ultimately, I think of this as augmented thinking. In the real
       | world, we all use whatever tools we have at our disposal. If in
       | the real world we have access to Google and calculators and now
       | AI chatbots, why should we train and educate ourselves as if
       | those don't exist?
       | 
       | I'd rather my kids learn to use every tool at their disposal to
       | be as fast, efficient, and effective as possible. And unlike
       | paying someone to do your homework, this is something everyone
       | can do, not just those with discretionary income. So I really
       | have no problem with this at all.
        
         | jackson1442 wrote:
         | > this is something everyone can do, not just those with
         | discretionary income
         | 
         | For now. The site says it's currently free because it's in a
         | limited research preview. I'm curious what you would think if
         | this was a $20/month subscription instead?
        
         | tedunangst wrote:
         | Is it cheating to pay someone to take the test for me? Isn't
         | hiring specialized labor just another kind of tool? In the real
         | world, I certainly have access to all sorts of specialists.
        
           | moralestapia wrote:
            | Yes, but the point of school is to train _you_ to be able to
            | do these things. If you fail to see value in that, then why are
           | you going to school in the first place?
        
         | jimbokun wrote:
         | > For example, I was helping my kid out with their Java
          | homework and we were both stuck for a good hour. Finally I
          | loaded the question into ChatGPT. The answer that came back
          | helped us solve the problem.
         | 
         | But maybe the real lesson in ChatGPT, or a near future
         | descendant, is replacing human programmers altogether?
        
           | mort96 wrote:
           | As much as I would love that, I have yet to see an AI system
           | which tries to solve the actually hard problems of
           | programming.
        
         | a4isms wrote:
         | I agree. This reminds me of the argument that being able to use
         | web search during a "coding interview" is cheating.
         | 
         | My stance is that if web search can render the difference
         | between a competent and incompetent candidate undetectable, the
         | problem is the interview task, not access to web search. (Not
         | to mention problems with coding interviews in general.)
         | 
         | I'll go out on a limb and say the same general principle
         | applies here: If ChatGPT can pass a test, the test is measuring
         | the wrong thing.
        
           | musicale wrote:
           | > if web search can render the difference between a competent
           | and incompetent candidate undetectable, the problem is the
           | interview task, not access to web search
           | 
           | ;-)
           | 
           | My take is that the problem of distinguishing between
           | competent and incompetent candidates in 20 minutes is hard
           | (if not impossible), and interviewers may not be able to do
           | so reliably.
        
             | a4isms wrote:
             | Your take appears to be a generalization of my take in at
             | least two axes:
             | 
             | 1. Asserting that it's hard if not impossible to generate
             | valuable signal, where I am speaking only to the case where
             | access to web search makes it hard if not impossible to
             | generate valuable signal, and;
             | 
             | 2. I suspect you are also factoring in a very thorny
             | problem, which is not just detecting candidates who are
             | attempting the interview in good faith but are incompetent
             | at the task given, but also detecting interviewers who are
             | gaming the system by memorizing solutions to popular tasks.
             | 
             | The latter is a very hard problem.
        
         | eternalban wrote:
         | > I don't consider that cheating. Others may feel differently.
         | 
         | This is the wrong focus, imo. You can ask "is it cheating?" as
         | you do. Alternatively you could ask "but is this learning? Is
         | it helping my kids grasp the subject better?"
         | 
         | Tools, "augmented thinking", etc. are all concerns regarding
         | getting something done. But the goal for your kids is learning.
        
         | verdenti wrote:
         | Don't be silly. There's no way to do a writing course without
         | extensively writing about a prompt.
         | 
         | With chatGPT just plug it in and out pops your finished essay.
         | 
         | You have learnt nothing besides some minor comprehension
         | skills.
         | 
         | Same with programming courses. It will answer all basic coding
          | prompts. You cannot learn by just reading through a finished
          | solution, no matter how much you want to make the case for it.
        
         | onetimeusename wrote:
         | There are different definitions of cheating. You have to look
         | at student handbooks or a teacher or professor's syllabus to
         | find out what constitutes cheating. Although I think designing
         | a test around the possibility of people having access to
         | ChatGPT will have to be included in the future.
         | 
         | But it absolutely could be the case that using ChatGPT is
         | considered cheating like in a case where students are forbidden
         | from using any other resources. OTOH, for tests that were
         | previously "open internet" I assume ChatGPT is permitted.
         | 
         | An interesting point of contention could arise if the teacher
         | says you cannot collaborate with anyone else. Does ChatGPT
         | count as a person? The intention is presumably to restrict the
         | use of ChatGPT, but ChatGPT is not traditionally considered a
         | person.
        
         | BoiledCabbage wrote:
         | > Does using a calculator make you a cheater at math?
         | 
         | Yes, yes it does if you're testing basic arithmetic.
         | 
         | The difference between a calculator and ChatGPT is the scope of
         | problems it solves for you.
         | 
         | If you could read your math problem aloud to your calculator
         | and it could solve it, showing each step along the way, people
         | would clearly see that as cheating. It can't; it can only do
         | simple arithmetic, so you still need to translate the
         | requirements into an understanding of the problem, determine an
         | algorithm, and then perform it, with the calculator doing the
         | lowest-level operations.
         | 
         | ChatGPT does the equivalent of this (ironically for non-math
         | only). There is no "higher level" work for a person to do. It
         | does it all. Its only limitation right now is that it's still
         | under development.
         | 
         | ChatGPT can't do math, but let's say it fixes its math bugs
         | soon. You won't be able to come up with a type of high school
         | or undergrad math problem it can't do. It can generate Python
         | code; it will be able to generate a proof.
         | 
         | And math is the hard one. Something like "write a few
         | paragraphs discussing the initiating factors for World War I"
         | will be trivial.
         | 
         | If someone is going to claim ChatGPT (v2 or v3) doesn't
         | completely upend education, then give an example of a type of
         | question that it will be inherently unable to solve for you
         | that people will still need to do.
        
           | turtledragonfly wrote:
           | > an example of a type of question that it will be inherently
           | unable to solve for you that people will still need to do.
           | 
           | Something like "tell me about your day so far" or "describe
           | some important experiences in your life." Obviously, you can
           | use ChatGPT to answer those questions, but they won't be true
           | answers, since ChatGPT doesn't know you.
           | 
           | Of course, there is the issue of _verifying_ those answers --
            | probably the teacher won't be calling the student's parents
           | to make sure it's accurate (:
           | 
           | As objective knowledge gets increasingly captured by external
           | systems (search, maps, image generators, etc), subjective
           | knowledge and personal experience remain out of its reach. I
           | wonder if this could push us in a direction of valuing our
           | personal life experiences more highly, as the other stuff
           | becomes increasingly commoditized?
        
           | unusualmonkey wrote:
           | > If you could read your math problem aloud to your
           | calculator and it could solve it showing each step along the
           | way people would clearly see it as cheating.
           | 
           | Such calculators exist - for example
           | https://www.wolframalpha.com/.
           | 
           | > If someone is going to claim ChatGPT (v2 or v3) doesn't
           | completely upend education, then give an example of a type of
           | question that it will be inherently unable to solve for you
           | that people will still need to do.
           | 
           | ChatGPT is fairly superficial. The challenge will be to
            | transition education from superficial regurgitation to deeper
           | understanding.
           | 
            | Or to put it simply, writing an essay/code/etc. isn't good
            | enough; you now need to do it better than ChatGPT.
        
             | 4bpp wrote:
             | Can you gain deeper understanding without first gaining
             | superficial understanding, though? And if not and our
             | current method of imparting superficial knowledge breaks
              | down, wouldn't there be a pipeline problem because far
              | fewer students would get to the point where you can start
             | teaching them deep understanding?
             | 
             | (Imagine a scenario in which no assessment is possible in
             | any mathematics course below the level of differential
             | geometry. Would "we just have to switch to teaching
             | students advanced math instead" be a solution?)
             | 
             | Something along the lines of this is already a problem at
             | US universities, as students chegg, stackexchange and
             | collaborate their way through up-to-junior courses and then
             | are so underprepared in senior-level ones that really 80%
             | of a given class ought to be failed if this were
              | politically feasible. At least, though, the current
              | situation is due more to a lack of will than a lack of
              | ways to stop the cheating, so students are under some
              | pressure not to make it too egregious.
        
         | jupp0r wrote:
         | "I'd rather my kids learn to use every tool at their disposal
         | to be as fast, efficient, and effective as possible."
         | 
         | But that's not how (pre-college) education works for the most
         | part, unfortunately. It's lots of fact learning and essay
         | busywork, not a lot of actual problem solving and critical
         | thinking.
        
           | sc2862 wrote:
           | > lots of fact learning and essay busywork, not a lot of
           | actual problem solving and critical thinking
           | 
           | busywork and no problem solving/critical thinking? sounds
           | like they're being groomed for middle management!
        
             | lancesells wrote:
             | In the US I think it actually comes from a factory work
             | mindset where everyone does the same kind of thing and
             | learns the same way. The system is at least 50 years behind
             | but in my experience I think teachers are much more attuned
             | to the present.
        
         | comfypotato wrote:
         | Speaking as someone who just took a take-home exam and used
         | ChatGPT to complete it (documented, checked with the prof
         | beforehand, etc.), I may have some insight here. Also relevant: my
         | PhD mentor is leading a synchronous/asynchronous discussion
         | among the local academia folks regarding this specific concern.
         | 
         | The short takeaway is that it's a huge problem when it comes to
         | test taking. Untimed power tests are the gold standard for
         | assessing student knowledge. The epitome of this kind of test
         | (in any discipline) is a take home exam with extensive
         | short/long answer questions. The test is open-internet, open
         | notes, open everything, except for collaboration. It has been
         | proven for a long time that this is the best way to assess
         | whether or not a student has learned the material. The worst
         | alternative is a stressful in-person exam that is closed-
         | everything. This alternative produces _many_ false negatives.
         | Anxiety, slow test taking, etc. cause students who know the
         | material to perform poorly.
         | 
         | The issue is detecting cheating. It's very easy for teachers to
         | administer said worst alternative. An untimed power test, on
         | the other hand, is extremely labor-intensive to produce/mark.
         | Also, cheating is detected by comparing answers between
         | students. This adds another level of complexity compared to
         | multiple choice or the simpler short-answer questions that are
         | delivered during a timed exam.
         | 
         | ChatGPT adds another layer to the problem of detecting cheating,
         | one that currently looks like it's going to hurt students. On
         | its current track, it's going to make untimed power tests much
         | harder to produce and administer. They're already so difficult
         | to create that most professors just opt for the simpler timed
         | exam.
         | 
         | In an entry-level research-oriented graduate class, the point
         | is to learn the foundational material so that you can progress
         | to more abstract levels. ChatGPT is making it much harder to
         | assess these classes. As for testing whether someone can solve
         | the kind of technically-oriented problem that would be seen in
         | industry, I'm with your interpretation.
        
         | jhbadger wrote:
         | Also, math changed after calculators became ubiquitous and
         | questions became more about the concepts (which calculators
         | don't help with) rather than the arithmetic. ChatGPT seems to be
         | good at reciting facts (that is, when it doesn't get them
         | hilariously confused on occasion), but not so much at the sort
         | of synthesis that a good essay entails.
        
           | jackson1442 wrote:
           | This, in my opinion, is a very good thing. Learning _should_
            | be more about synthesis than about fact memorization anyway;
            | synthesis is recognized as one of the highest forms of
            | learning under Bloom's Taxonomy.
           | 
           | While yes, it means that our education system will need to
           | adapt, I hope it also means we'll be teaching our students
           | better because they'll be encouraged to learn at a deeper
           | level.
        
         | ericmcer wrote:
         | I think the comparison to a calculator isn't fair because a
         | calculator serves a very discrete purpose. Your brain can
         | compartmentalize a calculator's function.
         | 
         | I have been using ChatGPT to code & write copy for a week and
         | its abilities are so broad that my brain couldn't really slot
         | it into a specific area. The result is my brain started
         | reaching towards it every time it felt strained. I had a
         | similar thing happen when I was googling a lot of info for work
         | and then found myself considering searching for things like
         | "when is my dads birthday?". My mind couldn't slot its function
         | into a specific area like it can with a calculator.
        
       | out_of_protocol wrote:
       | > cheating
       | 
       | Check this gallery out: https://www.reddit.com/gallery/zju817
       | 
       | "Solve x = x ^ 2" and a lot of different answers, most of them
       | look valid (with explanations!) but are not correct
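       | 
       | (For reference, the correct answers are easy to check: x = x^2
       | rearranges to x^2 - x = 0, i.e. x(x - 1) = 0, so x = 0 or x = 1.
       | Anything else is wrong, however confident the explanation.)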
        
       | rlt wrote:
       | Between the COVID learning gap and this transitory period when
       | education has to adapt to AI cheating, there's going to be a few
       | rough years of kids falling behind.
        
       | photochemsyn wrote:
       | The cheating cycle in academics is kind of interesting. A few
       | years ago when getting into programming I took an assembly
       | language course, basically a feeder program at a community
       | college for students transferring to 4-year CS programs. The
       | final project was some ridiculously complicated program written
       | in MIPS assembly with lots of functions and custom system calls
       | that you were supposed to debug and add to. Hand-editing saved
       | registers on the stack and so on.
       | 
       | I didn't care at all about the grade, as I was just there to
       | learn the material (and was irritated about learning MIPS instead
       | of something useful like RISC-V), but I tried working through it
       | for a while, an incredibly tedious exercise. Then I talked to a
       | few other students, and literally every single one got the code
       | off some online site. It was auto-graded via online submission,
       | so they got their good grade and moved on.
       | 
       | Imagine a few iterations of this: the teachers are thinking "yup,
       | the students all get it, we can even make the problems harder to
       | get a better distribution of grades", and the students keep
       | finding better ways to avoid having to spend 20 hours working
       | through a complicated tangled mess of code.
       | 
       | Are the students learning anything of value? Sure, they're
       | learning how to play the system for their own benefit. They're
       | learning that the appearance of being intelligent and successful
       | is more important than the reality (see Sam Bankman-Fried, for
       | example) when you're stuck inside a corrupt system.
       | 
       | This of course explains the coding interview in the software
       | industry: everyone knows this is how the current educational
       | system works, so looking at grades, recommendations, accolades
       | etc. is a waste of time. Instead they say, "Here's a problem,
       | solve it in real-time using just the skills you've actually
       | learned and the knowledge you've actually memorized."
       | 
       | ChatGPT might make it a little worse than it already is, when it
       | comes to ranking students from excellent to mediocre - but it's
       | just a cheaper version of the private tutors that so many
       | students utilize. It might even level the playing field somewhat,
       | as tutors are expensive.
       | 
       | Ultimately, the only real solution, assuming grading is
       | important, is to proctor the students during real-time tests,
       | with all their devices locked away. This makes work much harder
       | for the teachers, particularly with large class sizes, as auto-
       | grading based on online submission of work is so much easier.
        
       | ddmma wrote:
       | I could have had my math homework solved and my assignments
       | posted in 5 minutes, but I didn't, so it is always a choice. If
       | you learn how to solve the problems along the way, the tool might
       | be an asset, but it still comes down to the final exam and your
       | own understanding of the problems. Sorry, but you will not be
       | able to compensate for personal critical thinking with external
       | knowledge systems whatsoever.
        
       | easylogin wrote:
       | My take is that it's another tool, like a search engine. Usually
       | we require things such as citations pointing to where data came
       | from, since sourcing is part of the work included in homework.
       | 
       | Additionally, we often test "offline" with constraints such as no
       | calculator, open book, or closed book with notes, etc. So more
       | regular knowledge checks seem to be a must if we're looking to
       | verify foundational understanding.
        
       | gptadmirer wrote:
       | The more you practice, the less you bleed in battle. The more you
       | prepare for interviews and study, the less you compete in the job
       | market. People who study Leetcode have virtually eliminated a lot
       | of competition in the interview market alone. Now what's left is
       | the competition between Leetcode practitioners.
       | 
       | Leetcode is the best investment I've made in my career so far.
       | I've outearned most of my peers (and people more senior than me),
       | easily. Just by doing Leetcode I can eliminate 90% of the
       | competition? Sign me up!
       | 
       | Why not just let the problems sort themselves out?
       | 
       | If the students want to cheat, let them cheat. The students who
       | really want to learn will learn. This makes the competition
       | between them better. The bottom of the barrel will continue to be
       | the bottom of the barrel, and the successful ones still become
       | successful.
       | 
       | We really should stop subscribing to "no child left behind"
       | thinking, and instead encourage competition between them.
       | 
       | Who's gonna work the low-paying dirty jobs after all if everyone
       | is smart and capable? It is called "economic ladder" for a
       | reason.
        
         | polygamous_bat wrote:
         | > Who's gonna work the low paying dirty jobs after all if
         | everyone is smart and capable?
         | 
         | Not the son of Jeff Bezos, I can guarantee you that. Your idea
         | would only work if we could completely and clearly separate
         | what we like to call "merit" from "daddy's money".
         | Unfortunately, there are no bulletproof ways of doing so 100%
         | (yet), so we work a little harder to give people with not so
         | much wealth a better shot at life with "no child left behind".
        
           | gptadmirer wrote:
           | Jeff Bezos is an anomaly. For every Jeff Bezos, there are
           | millions of impoverished
            | Indian/Chinese/Ukrainian/Brazilian/Filipino children who
            | managed to transform their lives by studying programming
            | and/or engineering, medicine, etc. really hard.
           | 
           | Some children really should be left behind. If they don't
           | want to study hard, they should be left behind. The earlier
            | they realize this in life, the better.
        
         | fma wrote:
         | Was talking to my wife (an educator) about this...simple answer
         | is periodic tests that have essay components and weigh heavily
         | on your grade.
         | 
         | As the grifters get poor scores, they'll quickly learn that
         | there's a time and a place to use these tools, and that however
         | they decide to use them, they still need to learn. I think the
         | perfect example is when Babelfish came online: I'd use it to
         | get an idea for my Spanish essays but not copy and paste from
         | it. There
         | were some kids who copied and pasted blindly and got caught
         | with phrases like "falling in love".
         | 
         | I'm sure translation tools are better now, but leveraging tools
         | for homework is nothing new.
         | 
         | Also, the SATs now have an essay portion, but many schools
         | don't require scores anymore (tbh I feel that's to let legacy
         | students in but that's another story). So those who practice
         | writing would score higher there as well.
        
           | gptadmirer wrote:
           | Yeah, the good students will learn how to use ChatGPT to
           | their advantage without necessarily hampering their learning
           | process. The bad students, well, just leave them to their own
           | ways.
        
         | nynx wrote:
         | I'm not entirely sure what point you're making. Is Leetcode not
         | akin to cheating in this context? If doing it pushes you to the
         | top, how would this sort out correctly?
        
           | gptadmirer wrote:
            | Yes, Leetcode pushes you to the top. SWEs are often divided
            | into two camps: those who outright refuse to do Leetcode and
            | those who do it because they know it will give them an
            | advantage.
           | 
           | Leetcode actually does make you a better engineer, provided
           | you study deeply for it. Ofc it has diminishing returns, but
           | you are already ahead of the game 99% of the time if you are
           | doing Leetcode.
           | 
           | ChatGPT will make some students lazier, and will make some
           | students better. Those students who can use ChatGPT
           | correctly, by taking and observing the output and
           | synthesizing it with their own understanding, will be ahead
            | of the game. Students who are lazy and just copy-paste
           | ChatGPT answers won't, but McDonald's employment is still
           | open for them.
        
             | alfalfasprout wrote:
             | > Leetcode actually does make you a better engineer,
             | provided you study deeply for it
             | 
             | No... it won't. Unless all you do are coding puzzles.
        
       | jerrygenser wrote:
       | Idea: Not exactly sure how it would be implemented, but some sort
       | of virtual proctoring for writing essays that prevents usage of
       | ChatGPT or similar tools?
        
         | jupp0r wrote:
         | What happened to preparing kids for life?
        
           | jrumbut wrote:
           | You're always preparing kids for life, the question is what
           | kind of life are you preparing them for.
        
             | jupp0r wrote:
             | A life where they have to work under surveillance to ensure
             | they can't use tools that everybody else uses to improve
             | the quality of their work?
        
               | TchoBeer wrote:
               | seems about right actually.
        
         | cptcobalt wrote:
         | Things like this already exist, like LockDown Browser. This
         | doesn't mean they're good for students or that they effectively
         | measure learning.
        
       | jgalt212 wrote:
       | I think the model to handle this is similar to how Computer
       | Algebra Systems have been handled.
       | 
       | Or one could write a good ChatGPT detector.
        
         | pyinstallwoes wrote:
         | But then you could train ChatGPT on the detector
        
           | jupp0r wrote:
           | Then students will become proficient in training AI models.
           | Disaster!
        
             | jhbadger wrote:
             | Or that could be how they learn. It reminds me of
             | professors that let you create a exam "cheat sheet" (a
             | single page in which you were allowed to include any facts
             | or equations you thought might help you on the exam). So we
             | spent hours scouring the textbook and assignments for good
             | stuff to put on the cheat sheet. When it came time for the
              | test, we often found that we rarely had to consult our sheet
             | -- the act of creating it made us absorb the material.
        
             | danuker wrote:
             | Reminds me of stories where people get fired for automating
             | their job away
        
         | bestcoder69 wrote:
         | Make the word problems problematic so ChatGPT refuses to answer
         | them. Bing-bong simple.
        
       | ipsum2 wrote:
       | But ChatGPT can't connect to the internet -- it said so itself.
       | Sorry, couldn't resist pointing out this garden path title.
        
       | throwaway23597 wrote:
       | These tools aren't going away. And I still think ChatGPT is
       | incapable of writing a full essay with parenthetical citations
       | and such that schools require. Perhaps the best solution would be
       | to teach students to use ChatGPT to help improve their sentence
       | structures and grammar, which I think would raise essay and
       | writing quality across the board without explicitly crossing the
       | line into blatant cheating.
        
       | fab13n wrote:
       | If your test can't tell a smart student apart from a dumb
       | algorithm, the broken part is your test.
        
       | beej71 wrote:
       | I've always felt the real trick was to make students not want to
       | cheat. Or, put another way, convince them to want to learn. You
       | can't convince them all, but the honest students _HATE_ the
       | heavy-handed anti-cheating mechanisms schools are putting in
       | place. They interfere with learning.
       | 
       | Cater to the honest ones, and try, through various means, to
       | convince everyone to not want to cheat. That $80,000 piece of
       | paper isn't worth jack if you can't pass an interview.
       | 
       | There are plenty of ways to cheat your way through school--and
       | being able to get answers off the Internet is nothing new.
       | Frankly, preventing plagiarism in CS has been pretty much a lost
       | battle for years.
        
         | jabthedang wrote:
        
         | knaik94 wrote:
         | Having to push the entire class to learn instead of just the
         | self-motivated is a bureaucratic decision. Everyone is paying
         | $80,000 to be there; it would be problematic for many reasons
         | to ignore those who seem uninterested. For pre-college
         | education, minimum student performance is usually the strongest
         | metric tied to funding. I don't think the current schooling
         | system is set up to let everyone find things they care about
         | enough to learn without cheating. It's not realistic to expect no
         | one to want to cheat. Even the brightest student can feel
         | insecure enough in their ability, feel pressured to cheat if
         | they believe those around them will cheat, or just want to get
         | that little bit more ahead of the competition. Cheating is not
         | strictly tied to honesty.
        
       | bearjaws wrote:
       | I had to "AI" proof my technical screening questions for
       | engineers. I was alarmed that about 1/3rd of my questions were
       | easily answered by ChatGPT. Thankfully I've always made about 1/3
       | of my questions from their resume, so hopefully they can answer
       | those without GPT...
       | 
       | For most of the ones that it did not answer, you could tweak the
       | question and feed it back into ChatGPT to get a correct answer,
       | but that would lead to a pretty noticeable delay in answering.
        
         | localhost wrote:
         | After reading your reply here, I did the same with a question
         | that I've calibrated over 100s of in-person interviews with
         | college candidates at Microsoft. This is for both intern and
         | full-time candidates. The goal of the question was to probe for
         | technical competence (my role in the loop). The assumption was
         | that anyone who can remember their data structures and
         | algorithms class can make it through maybe 50% of the question.
         | The question got progressively more challenging so that I could
         | see what happens when the candidate reaches the edges of their
         | knowledge.
         | 
         | It starts very fizz-buzzy and if the candidate makes it to the
         | end, there's a deeper discussion of caching impacts on
         | performance and optimizing algorithms. ChatGPT nailed it. Even
         | when I said things like:
         | 
         | "can you optimize this program further to maximize the
         | utilization of L2 caches in modern CPUs?"
         | 
         | And it did it in <10 minutes. The best candidate I ever saw
         | took 25 minutes; the rest of the candidates took the full
         | 45-minute allotment, and none of them got to the discussion of
         | L2 cache optimization. These are candidates from
         | the best schools in the country.
         | 
         | This was really impressive.
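         | 
         | (The actual question isn't shown here, so purely as a
         | hypothetical sketch of the kind of L2-cache optimization that
         | last prompt is probing for, a classic answer is loop blocking /
         | tiling -- e.g. for a matrix transpose in C. The function names
         | and the tile size B below are illustrative, not taken from the
         | interview:)
         | 
         |     #include <stddef.h>
         | 
         |     /* Naive transpose: writes stride through dst column by
         |        column, so once n is large each write touches a new
         |        cache line and the L2 hit rate collapses. */
         |     void transpose_naive(const double *src, double *dst,
         |                          size_t n) {
         |         for (size_t i = 0; i < n; i++)
         |             for (size_t j = 0; j < n; j++)
         |                 dst[j * n + i] = src[i * n + j];
         |     }
         | 
         |     /* Blocked (tiled) transpose: process B x B tiles small
         |        enough that the touched lines of src and dst both stay
         |        resident in cache while a tile is worked on. */
         |     #define B 64
         |     void transpose_blocked(const double *src, double *dst,
         |                            size_t n) {
         |         for (size_t ii = 0; ii < n; ii += B)
         |             for (size_t jj = 0; jj < n; jj += B)
         |                 for (size_t i = ii; i < ii + B && i < n; i++)
         |                     for (size_t j = jj; j < jj + B && j < n; j++)
         |                         dst[j * n + i] = src[i * n + j];
         |     }
         | 
         | The deeper discussion the question ends with is about that
         | reasoning -- why reuse within a tile beats streaming through
         | memory -- rather than any particular tile size.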
        
           | periheli0n wrote:
           | The bottom line being, you don't need to hire engineers
           | anymore, just ChatGPT operators?
        
           | Workaccount2 wrote:
           | We are fucked way sooner than we anticipate. Either
           | transformers level out somewhere right around here to give
           | humans a few years (really a decade would be nice) to
           | prepare, or we are going to slam into an event horizon that's
           | impossible to see the other side of. It's unknowable how
            | humanity will react to being pounced on by commoditized
           | intelligence.
           | 
           | ChatGPT feels like something that is ahead of schedule. Years
           | ahead of schedule.
        
       | imdsm wrote:
       | The problem is education, not ChatGPT. Education needs to evolve,
       | and so far, ChatGPT has been fantastic as an educational device.
       | Compared to my experience studying for a year at Open University,
       | where the tutors were unresponsive, unhelpful, and often
       | unavailable, ChatGPT will (and should) replace them.
        
       ___________________________________________________________________
       (page generated 2022-12-19 23:01 UTC)