[HN Gopher] CheatGPT
___________________________________________________________________
CheatGPT
Author : jicea
Score : 241 points
Date : 2023-02-20 19:35 UTC (3 hours ago)
(HTM) web link (blog.humphd.org)
(TXT) w3m dump (blog.humphd.org)
| deafpolygon wrote:
| The world has evolved, and higher education has not kept up with
| it. It's time to stop giving trite, simplistic programming
| assignments to your students and make them work for it. Have
| them study a set of online documentation and make the test about
| whether they can look up function calls and successfully
| assemble the information required to present a working solution.
| scarface74 wrote:
| I am of two minds about ChatGPT. It's amazingly useful when it
| comes to writing code dealing with my domain since the APIs are
| well known (AWS).
|
| But at the same time, it misses subtle nuances, and you need
| experience to notice when it does. In the hands of someone who
| doesn't already know the subject area and its footguns, it can
| lead you astray.
| gwright wrote:
| My thoughts also. This seems somewhat analogous to the "uncanny
| valley" problem in graphics animation. So close, but not quite
| there.
|
| https://en.wikipedia.org/wiki/Uncanny_valley
| scarface74 wrote:
| And ChatGPT also caused me to do an amazing self own
|
| https://news.ycombinator.com/item?id=34814257
| Spivak wrote:
| > Apparently, it's not even something students feel they need to
| hide.
|
| Why would you feel the need to hide it? It's a tool; it's not
| like using the library, other professors, your friends,
| Sourcegraph, or StackOverflow is cheating. Trying to argue why
| GPT is cheating is just going to devolve into "you can have
| outside help, as long as it isn't too good," with the line drawn
| arbitrarily.
| Gunnerhead wrote:
| Reading this, I'm glad I didn't have access to ChatGPT and co.
| when I was in school. I was lazy and always followed the path of
| least resistance, but I wanted good grades, so that meant doing
| the homework by hand after my Google-fu failed.
| [deleted]
| lumb63 wrote:
| I am trying to think about the use of LLMs in education the way
| I think about calculators. This post reflects how I was taught:
| students
| couldn't use calculators to solve problems initially. For
| example, no calculators to do addition and subtraction when
| you're learning to add or subtract. But also, probably not when
| learning to multiply or divide. Doing addition and subtraction
| builds skills and intuition for what multiplication and division
| are. The same is true with understanding fractions.
|
| Moving up the math hierarchy to algebra, though, this changes.
| Algebra is at first about the concept of solving equations, and
| the core idea is that "to solve the equation, you can do whatever
| you want to one side, as long as you do it to the other." The
| mechanics of addition or subtraction, for the most part, no
| longer matter. Go ahead and use a calculator to solve obscure
| divisions and multiplications so you can better understand
| algebra (though it feels appropriate to note that a student who
| is good at arithmetic can still outpace a calculator for problems
| that are likely to be used pedagogically, since the numbers are
| easy).
|
| In this example, algebra is to calculus what arithmetic is to
| algebra. A calculus teacher cares little if his students can
| solve equations; he expects they can. They're instead learning
| integrals and derivatives and series, and I doubt a calculus
| teacher would begrudge their students using a calculator to solve
| a difficult equation.
|
| The problems with treating LLMs this way are many. They are not
| calculators. You cannot (trivially, or maybe at all) understand
| how an LLM works. You cannot (trivially) fact check it if it
| spits out an absurd-sounding answer. You cannot limit the LLM to
| what you could trivially do yourself. We need AI
| that cites sources, so we can debug when it's wrong. We need
| understandable AI for the same reason, not the obscure black
| boxes we have today. We need AIs that not only can solve our
| problems, but that can also help us to solve our own problems.
| When we have those things, AI will be much more useful for
| education and for widespread use.
| agrippanux wrote:
| What ChatGPT (and its cousins) expose is that the way humans have
| been taught in most schools - memorizing and regurgitating
| information - is now a commodity.
|
| What humans bring to the table over ChatGPT is our ability to
| create new links between information, aka creativity. Teaching
| creativity, imo, will require a return to methods like those of
| Socrates and his contemporaries. I would rather this author
| be writing about how he is going to re-examine how he teaches
| rather than bemoaning that students can shortcut his current
| approach.
| asdff wrote:
| I went to uni 10 years ago and even then, I can't think of any
| classes that were just memorizing and regurgitating. You'd have
| to memorize fundamental concepts, but come exam time you are
| applying those concepts to new questions, not regurgitating
| anything. In high school a lot of exams were regurgitation, but
| I attribute it to teachers at that level just not having the
| niche experience required to craft clever "apply this theory"
| sort of questions that a domain expert in a university could
| do, and students in high school are also responsible for a lot
| less theory learning on their own.
| hcks wrote:
| > What humans bring to the table over ChatGPT is our ability to
| create new links between information, aka creativity
|
| Maybe not, or not for long. Maybe AGI is coming within 20
| years, and maybe human workers won't have anything to bring to
| it afterwards.
|
| Maybe this is the beginning of the downfall of the value of
| intellectual human workforce.
| psadri wrote:
| Creativity doesn't exist in isolation. In order to be creative,
| and create unexpected connections, one first needs to know a
| lot of seemingly unrelated things.
| waynesonfire wrote:
| Absolutely this. Change the questions you're asking of your
| students. Harder to grade than option A though huh?
| modeless wrote:
| > What humans bring to the table over ChatGPT is
|
| Statements like these are premature. ChatGPT is three months
| old! This is a rapidly advancing field. The capabilities of
| these models are very likely to be radically different five
| years from now. Any conclusions drawn now about what value is
| uniquely human and out of reach for AI may be proven wrong
| quickly.
| SV_BubbleTime wrote:
| Agreed!
|
| Anyone not answering with "IDK, but maybe..." is just wasting
| bandwidth.
|
| This is Gen1 tech. Most of us are already shocked at how good
| it is, and it won't get worse.
| BarryMilo wrote:
| You seem confident that it won't get worse, but it's only
| as good as its training data. Which is the internet. What
| happens when the internet is filled with generic Gen1
| output? I'm doubtful that averaging over copies can ever lead
| to anything other than increasing mediocrity.
| 101008 wrote:
| I don't think it is the same.
|
| I didn't memorize Python syntax or the name of every function
| or how to do small things. I use Google for that. But I know
| what I need to do in the best possible way (at least that's
| what I am paid for!). Should I set this variable here? Should
| this method be private? Should I design an interface or a
| public class? A dict or a dataclass?
|
| That's what I have to decide as an engineer, and where my value
| resides. If ChatGPT only replaced the memorization part, that
| would be OK, but it replaces a lot more than that, and the
| people using it stop asking themselves the questions I
| mentioned before.
|
| I had a bet with a friend, he has no knowledge of programming
| and was convinced he could make an online game (!) using only
| ChatGPT. He said one month was going to be enough. Of course, a
| few months have passed already and he is way off. He asks the
| questions that a non-programmer would ask, and what ChatGPT
| gives him back is not usable, not thought for the future, not
| easily modifiable, etc. His code is a Frankenstein that won't do
| anything good.
| triyambakam wrote:
| He even used ChatGPT to evaluate the approach in one of his
| students' homework submissions. I don't understand his ignorance.
| dcow wrote:
| I agree though I didn't find the author to be bemoaning
| students. Rather they were writing a "state of the art" piece
| explaining how things are currently happening and leaving it
| open for people to follow with thoughts about how to make
| meaningful changes to assessment style and curriculum in the
| face of ChatGPT.
| underwater wrote:
| Schools haven't been focused on rote learning for eons. I don't
| know where you get that idea from.
| paulcole wrote:
| I grew up in the late 1980s and early 1990s and have a
| phenomenal memory. My dad used to tell me how valuable it would
| be once I grew up and got a job. It's so funny how false that
| ended up being.
|
| It's much more beneficial socially because I can recall jokes
| that fit situations incredibly quickly and get a good laugh.
| switchbak wrote:
| > I would rather this author be writing about how he is going
| to re-examine how he teaches rather than bemoaning that
| students can shortcut his current approach.
|
| Did you read the entire article? What you're asking for is
| exactly how he ends his discussion.
| pessimizer wrote:
| > What humans being to the table over ChatGPT is our ability to
| create new links between information
|
| I'm confident that "creativity" is a combination of:
|
| 1) reproduction errors (when we badly copy things and the wrong
| way to do it leads luckily to a better way to do it), and
|
| 2) systematically or by luck applying established and
| productive models from one context to another, unrelated
| context, and getting a useful result.
|
| Just not a believer in some essential, formless creativity that
| generates something out of nothing.
| piersj225 wrote:
| I've heard one idea of adding weights to words. The idea being
| that if someone hands in an essay with a higher frequency of
| certain words, there's a very good chance it's come from a
| language model.
|
| https://www.youtube.com/watch?v=XZJc1p6RE78
|
| I'm not sure this helps with coding though, maybe variable names?
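|
| For intuition, here's a minimal sketch of that frequency test in
| Python (the watermark word list and threshold are hypothetical;
| a real scheme would bias token sampling and apply a proper
| statistical test):
|
|     import re
|
|     # Hypothetical set of words the generator was biased toward.
|     WATERMARK_WORDS = {"moreover", "intricate", "tapestry", "delve"}
|
|     def watermark_score(text: str) -> float:
|         """Fraction of tokens that fall in the favored word set."""
|         tokens = re.findall(r"[a-z']+", text.lower())
|         if not tokens:
|             return 0.0
|         return sum(t in WATERMARK_WORDS for t in tokens) / len(tokens)
|
|     essay = "Moreover, the intricate tapestry of themes..."
|     if watermark_score(essay) > 0.05:  # threshold is arbitrary here
|         print("elevated watermark-word frequency; possibly generated")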
| savolai wrote:
| In earlier versions of Moodle there used to be a peer-review
| assignment type, where students evaluated each other's
| submissions, and both the scores and the evaluations themselves
| were scored. The teacher then only had to evaluate the
| reliability of some evaluators to infer skill levels for
| everyone, if I recall correctly.
|
| My understanding was that this scaled quite well, but you would
| have to ask the professors.
|
| This was used in an HCI course at my uni in Tampere, Finland,
| years ago. As a student the experience was very communal and
| enlivening.
| shadowgovt wrote:
| I find myself wondering if the future of programming looks more
| like editing than synthesis.
|
| This is not a bad thing if it's the case; a lot of the job of a
| software engineer is in analysis of code that exists, but so much
| of the pedagogy is in synthesis-from-scratch in a world that is
| already full of billions of lines of code.
| notduncansmith wrote:
| The leadership career track for programmers already closely
| resembles this process (providing natural language prompts for
| ICs and then reviewing/correcting the output), with AI just
| shortening the feedback loop. This has me wondering which
| software stacks will most readily lend themselves to AI-driven-
| development.
| barnabee wrote:
| It would be fantastic if large language models (or any of the
| nascent AI/machine learning tech.) finally kill off both
| assessment in education _and_ copyright/IP protection.
|
| What a wonderful future that would be. We can but hope.
| pfisch wrote:
| Can't you add a series of weird restrictions to the questions
| that make them very difficult for ChatGPT to work with?
|
| Must use a switch statement; must declare an iteration int on
| line 10.
|
| Must use a for loop to count down in reverse.
|
| If you have like 8 of these rules, ChatGPT may not be able to
| handle it.
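|
| For illustration, a toy answer honoring a couple of such
| constraints in Python 3.10+ (match/case standing in for the
| switch statement; the task itself is made up):
|
|     def classify(n: int) -> str:
|         match n % 3:  # match/case as the required "switch"
|             case 0:
|                 return "divisible by 3"
|             case 1:
|                 return "remainder 1"
|             case _:
|                 return "remainder 2"
|
|     # "Must use a for loop to count down in reverse."
|     for i in range(10, 0, -1):
|         print(i, classify(i))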
| jimmar wrote:
| I'm a college professor. I'm requiring my students to use ChatGPT
| to aid them on some assignments this semester. Results are mixed
| so far. I agree with the author, "One of the things I'm thinking
| about is that I might need to start teaching students how to use
| GPT/Codex and other LLMs in their [assignments]." One of my top
| students submitted a sub-par assignment because he relied too
| much on ChatGPT and provided little beyond what ChatGPT spat out.
| Another student who previously struggled to write did much better
| when using ChatGPT--it felt like he incorporated ChatGPT's words
| into his own ideas.
| Thorentis wrote:
| > I'm requiring my students to use ChatGPT
|
| God save us from this horrible future.
| brucethemoose2 wrote:
| Another thing to keep in mind is that (unlike stackoverflow,
| wolframalpha and such) this tool is going to evolve dramatically
| over mere months.
|
| I'm not sure universities are structured to deal with such a
| rapid rate of change.
| PostOnce wrote:
| proctored in-person exams, no electronics allowed
|
| exams now count for 100% of your grade
|
| these aren't insurmountably difficult problems for universities
| to solve
| aaplok wrote:
| That's assuming they want to solve it. A lot of
| administrators live in lalaland because it suits them to. It
| is in the interest of society to fail cheaters and frauds
| before they graduate, but it costs the universities money to
| do it properly. So they will only address the problem if it
| comes to public attention.
|
| That's a trend that we've seen over and over, for example
| with the corruption over admission to Ivy league
| universities. In fact chatGPT doesn't really change the
| landscape that much. All it does is democratising contract
| cheating, on which most universities only apply band aids. It
| might end up being a good thing, but only if it does attract
| public attention to the issue.
| visarga wrote:
| > In fact chatGPT doesn't really change the landscape that
| much. All it does is democratise contract cheating
|
| But generative AI will remain a workhorse even after
| graduation.
| IHLayman wrote:
| IRL universities will probably develop a protocol to remove
| access to helper tools for this sort of thing, maybe metal
| detectors at the door and specially tailored computers if
| needed for quizzes. Online universities OTOH are probably going
| to feel the bite of this the hardest. The CKA exam I took a
| while back had draconian measures to try to prevent me from
| cheating, such as taking the test with a webcam on me the whole
| time, with tools in the browser to limit where I surfed, in a
| room that has NO PICTURES on the walls. The tech to provide
| answers is moving faster than the tech to secure a remote room
| from such outside answers. And if in-person proctoring is
| eventually required, I would wager that will be a death-knell
| for online universities, who will have a hard time obtaining
| that physical space on a regular basis while staying marginally
| profitable.
| ticviking wrote:
| Or remote proctoring centers will show up as a business
| opportunity, and the cost of this will be passed on to
| students, who will pay it with financial aid.
| marginalia_nu wrote:
| > This tool is going to evolve dramatically over mere months.
|
| Why do you say this with certainty?
| brucethemoose2 wrote:
| Just assuming the current pace of advancement will continue.
|
| Good text and image synthesis were basically impossible 2
| years ago, and every time I check up on Github some huge new
| innovation has come out.
| marginalia_nu wrote:
| Is there really a basis for such an assumption?
|
| Don't returns tend to diminish with effort, rather than
| increase?
| gchallen wrote:
| I teach CS1. A lot of this post resonated with me.
|
| In particular, I don't think that beginners are well-served by
| relying on AI to complete their assignments. Later on, once
| they've developed some computational thinking abilities, sure.
| Starting out, no.
|
| There's a real dearth of good options available to computer
| science educators today for teaching introductory material
| effectively in the face of all the new and existing ways there
| are for students to cheat. A lot of what people offer up as
| alternatives are unworkable or downright bad ideas:
|
| * Paper exams represent an unrealistic environment, encourage
| terrible programming habits, are a nightmare to grade, and don't
| test student abilities to identify and correct their mistakes--
| which is maybe the most important thing we want to assess.
|
| * Oral exams also don't scale and raise obvious equity issues.
|
| * Beginners have to build basic skills before they are ready to
| work on larger open-ended projects.
|
| We're fortunate at Illinois to have a dedicated computer-based
| testing facility (https://cbtf.illinois.edu/) that we can use to
| allow students to take computer-based assessments in a secure
| proctored environment. This has been a really important support
| for our ability to continue to teach and assess basic programming
| abilities in our large introductory courses. I'm not sure why
| this idea hasn't caught on more, but maybe AI cheating tools will
| help drive broader adoption. (Such facilities are broadly useful
| outside of just computer science courses, and ours is heavily
| scheduled to support courses from all across campus.) Anything
| would be better than people returning en masse to paper
| programming exams.
| mechanical_bear wrote:
| "Oral exams also don't scale and raise obvious equity issues."
|
| Not to be intentionally obtuse, but what are the obvious equity
| issues?
| l33t233372 wrote:
| On top of the other issues discussed, I think that giving
| effective oral exams is hard for the same reasons that
| interviewing is hard. It's a test of the subject's ability to
| quickly and confidently say things that sound roughly
| correct. Some people stumble over their words and cannot
| effectively speak and think completely accurately in the time
| required to give a reasonable answer without an uncomfortable
| pause.
|
| Now, these issues could be mitigated by asking each person
| the exact same questions and taking careful notes of their
| responses, but then you're just back to a bad essay that
| can't be revised, edited, planned, or recollected as easily
| as a real essay.
| blue039 wrote:
| Probably people who are mute or deaf.
|
| That being said, the ultra-dense morons who think "oral"
| can't be extended in a special case to simply mean "without
| extra time to think" (which is what it is, mostly) are
| typically not experienced with these exams.
|
| The parent teaches CS1 and is not likely to have been given a
| formal oral examination in their likely short career (these
| are usually reserved for PhD qualifying and special MScs).
| gizmo wrote:
| Exams can be anonymized when graded to reduce teacher bias.
| You can't do that with oral exams. In addition, you can't get
| a second opinion for an oral exam if you suspect you've been
| graded unfairly.
| pfisch wrote:
| Not everyone has the same primary language. Especially at
| universities.
| gchallen wrote:
| Language ability, manner of speaking, physical stature and
| presentation, reputation from previous interactions with
| staff, you name it--all worsened by the fact that many of
| these are probably going to be done by course staff. It's not
| clear to me that any other form of assessment has as much
| potential for subconscious bias.
|
| Orchestras started using privacy screens for auditions for a
| reason. And I'm not familiar with an equivalent for the human
| voice, particularly for hiding halting, labored, or
| elliptical speech--possibly by a non-native speaker--that
| they could straighten out on the page.
| asdff wrote:
| For CS1 you probably don't need much compute power for
| assignments. Schools are already well funded enough to
| sometimes offer freshman ipad pros. You could offer them a
| raspberry pi or some extremely cheap, low powered PC, make them
| return it after the semester if you want to save them money.
| You can firewall it from the open internet, and have students
| turn in their code from this device alone to a university
| server. They can still cheat, sure, but to do it would mean
| transcribing code from one device to another by hand, which is
| enough friction and a time sink that fewer students would
| consider it versus actually paying attention in class.
| mechanical_bear wrote:
| "enough friction and a timesink"
|
| Have you taught students before? Many will spend inordinate
| amounts of time to not learn the material. Oftentimes it
| seems there is no friction too great if it allows one to not
| think too hard.
| asdff wrote:
| Yes, and I've been one, and know that time is finite and
| you have more than one class that demands work on a
| deadline, along with all the other fun stuff college has
| that pulls you away from your studies, and the not so fun
| stuff like part-time employment. If you leave the system as
| it is today, it's easy to copy and paste code. If you do
| something akin to what I proposed, you've eliminated copy
| and paste and made cheating into a literal chore that
| isn't saving you nearly as much time as it would have
| otherwise, and fewer students will end up cheating. You'd
| be surprised at how many students I knew in undergrad who
| were broke and would still pay like $400 a semester on
| textbooks because the friction of doing hackery things like
| photocopying chapters of the book in the library, or
| googling "my math book 2nd ed. pdf" and finding the Library
| Genesis result was just too much.
|
| Of course the death blow for this sort of cheating is the
| exam, which you weight quite a bit more than the homework.
| A student who just copy and pastes code will still fail the
| class, since they can't use chatgpt in the lecture hall
| during exam time.
| jefftk wrote:
| And then someone in the dorm makes a keystroke injector,
| and everyone goes back to typing their code on their own
| computer.
|
| Ex: https://null-byte.wonderhowto.com/how-to/make-your-
| own-bad-u...
| ssharp wrote:
| I think there's an initial reaction of something like "how dare
| they!?!?"
|
| However, AI is the type of tool that's going to level up
| mankind's capabilities to the point that curriculums will need
| to adjust to fit those new capabilities. Certainly this has
| happened dozens of times in a field like Computer Science, where
| the curriculum in 2023 is radically different than it was in the
| 60's and 70's.
|
| This new rise in AI might be among the most disruptive forces
| ever in many fields, including academics, but at some point you
| have to accept AI as an integral part of our day-to-day work and
| life and factor that into education.
|
| This will be difficult, especially finding the line between what
| is fundamental and what is not, but it's not like this hasn't
| happened before -- e.g. the calculator didn't eliminate the need
| to learn basic arithmetic.
| iLoveOncall wrote:
| Cheating in academia isn't a problem, because the only people
| students are cheating are themselves.
|
| I've seen plenty of cheaters; they were the worst students, and
| despite cheating they couldn't graduate or couldn't find a job
| after graduating.
|
| If some moron is stupid enough to cheat when they're paying $50K
| a year, let them.
| hiq wrote:
| > I've seen plenty of cheaters, they were the worst students
|
| In other words, you haven't seen the "cheaters" who were among
| the best students.
|
| "cheaters" in quotes because it's not clear to me that people
| using freely available resources when doing homework are really
| cheaters. If an instructor wants to do a closed-book exam, they
| can do just that.
| iLoveOncall wrote:
| > In other words, you haven't seen the "cheaters" who were
| among the best students.
|
| If they were able to fool everyone to the point of being
| considered good students, it means they weren't cheaters,
| just that they had a different approach to problems than
| others (which is kinda what you say after).
| novok wrote:
| The problem with widespread cheating, even if you don't cheat
| yourself, is that it essentially reduces the credit rating of
| your $50k program and has knock-on effects elsewhere, like worse
| interview loops. And if the 'cheater's workload' becomes the
| norm, you the non-cheater can literally be failed out of
| programs and scholarships because you didn't keep up.
| shadowgovt wrote:
| At what point does "not keeping up with the cheaters" become
| "There is a more efficient way to do the task and this
| student's grade reflects they are choosing the inefficient
| approach?"
|
| I'm reminded of the stories of employees getting busted
| because they were assigned a job so trivially automatable
| they either _did_ automate it or they used some find-labor
| service to delegate it for a fraction of the cost out of
| their own pockets, who are then accused by the company of
| "not working."
| nickfromseattle wrote:
| According to the Department of Education, 54% of Americans have
| below a 6th grade reading level. [0]
|
| Everyone thought that the kids who started using mobile devices
| as babies would become computer savants, but it turns out the
| kids these days don't understand what a file system is. [1]
|
| What will ChatGPT do to our youth?
|
| [0]
| https://en.wikipedia.org/wiki/Literacy_in_the_United_States#....
|
| [1] https://www.theverge.com/22684730/students-file-folder-
| direc...
| TchoBeer wrote:
| I wonder if that didn't pan out in part because of the death of
| personal general computing devices. It wasn't "kids who started
| using mobile devices as babies" it was "kids who would be
| fluidly navigating a PC by age 5", which didn't pan out.
| delfinom wrote:
| >What will ChatGPT do to our youth?
|
| Nothing bad, just the country as a whole is destined for future
| mediocrity.
| carabiner wrote:
| Soon you will have to use version control and submit version
| history with homework. Though maybe chatgpt can generate that,
| too.
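|
| A minimal sketch of what an automated check on a submitted
| history might look like (repo path and thresholds are
| hypothetical):
|
|     import subprocess
|
|     def commit_times(repo: str) -> list[int]:
|         """Unix timestamps of all commits, oldest first."""
|         out = subprocess.run(
|             ["git", "-C", repo, "log", "--format=%ct"],
|             capture_output=True, text=True, check=True,
|         ).stdout.split()
|         return sorted(int(t) for t in out)
|
|     times = commit_times("student-submission")
|     if len(times) < 3:
|         print("warning: fewer than 3 commits")
|     elif times[-1] - times[0] < 600:
|         print("warning: all work committed within 10 minutes")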
| shagie wrote:
| Given the amount of `git add . ; git commit -m "everything"`
| that I've seen with professional developers, I am not confident
| that you'd get anything better from students.
| sixothree wrote:
| Entire feature tickets with one single commit drive me batty.
| ModernMech wrote:
| I'm already doing this in my classes and have been for a while.
| We're transitioning more CS classes over to this mode of
| teaching soon. We've got a gitlab instance for the department,
| and all students have an account. Works great!
| napolux wrote:
| I gave ChatGPT a try, asking it to write a React hook doing
| something really basic.
|
| The code was OK, as expected.
|
| I asked it to write tests for it; the code looked good, but it
| wouldn't run because of some act() errors.
|
| Then it was mocking network calls...
|
| After trying for a while I had to write the tests myself.
| sharperguy wrote:
| I tend to find it works best as a tool for generating situation-
| specific examples rather than writing your entire code for you.
|
| Like when every stack overflow answer you find just doesn't fit
| because of one major difference in your situation, chatgpt
| often has you covered. Then you take the example code it gives
| you and adapt it to your actual codebase, testing to see if it
| works as expected as you go.
| napolux wrote:
| yep, i had the very same "stackoverflow deja vu" feeling
| singularity2001 wrote:
| Maybe take the approach many physics classes embrace (in
| Germany): all tools are allowed in tests--calculators, books--as
| long as the correct answer is derived via the "correct
| derivation path".
|
| If ChatGPT makes some problems too easy, maybe that's a good
| thing, because we can raise the bar. Find problems that require
| true understanding beyond autocomplete / copy-pasting.
| asmor wrote:
| GitHub Copilot quite often delivers comments alongside the code
| for a prompt.
| vlunkr wrote:
| This isn't too different from requiring students to write
| essays. No one actually needs another essay about Shakespeare
| or whatever, and ChatGPT could do it for you, but the point of
| the exercise is to learn to write. It's the process, not the
| product that matters.
| user432678 wrote:
| Everyone's a ChatGPT gangsta until their ChatGPT-trained
| doctors start giving fatally wrong diagnoses. 100% agree with
| the whole comment. I interview cheaters from time to time; their
| excuse for not learning stuff is exactly that. However, in the
| end they've cheated themselves more than the system.
| EVa5I7bHFq9mnYK wrote:
| With the rate they give fatally wrong diagnoses and
| treatment plans today, I wonder if ChatGPT could actually
| be better for patients.
| tsol wrote:
| ? ChatGPT doesn't diagnose and isn't a doctor.
| alexpetralia wrote:
| Almost certainly the bar for problems that ChatGPT cannot solve
| is far, far higher than the level of a student who is just
| learning computer science.
| endisneigh wrote:
| without knowing with a high confidence what is LLM generated or
| not, it seems pointless
| dariosalvi78 wrote:
| I teach first-year students. We have graded programming
| assignments they can do by themselves, where they can cheat in
| all sorts of ways, but I honestly don't care; it's their
| responsibility to learn. Then we have theoretical exams, where
| there are also questions about programming (but we don't check if
| the code has perfect syntax). In one course the exams are oral: an
| excellent tool for assessing students, but very time consuming;
| in another they're written: more efficient, but more shallow.
|
| I like the idea about teaching how to use AI, but it has to be a
| tool among others.
| hcks wrote:
| It's okay, it's now clear that programming won't be done by
| humans within 10 years.
| wpietri wrote:
| This is definitely going to require changes in interview
| practices for a lot of places. For those looking for an
| alternative, may I suggest pair programming?
|
| I've been doing pairing interviews for years. These days I have a
| standardized, practical problem, something that's reasonably like
| the work. (E.g., let's use APIs A and B to build features X, Y,
| and Z.) I let them pick their preferred language and tooling, so
| that I can see them at their best. And then we spend a fixed
| period diving in on the problem, with me getting them started,
| answering questions, and getting them unstuck.
|
| I like this because not only do I get to see code, I get to see
| how they think about code, how they deal with problems, and how
| they collaborate. They get to spend time building things, not
| doing mensa puzzles, posturing, or other not-very-like-the-work
| things. And they can't bluff their way through, and it's pretty
| hard to cheat.
| sgu999 wrote:
| If some places need to change their interview practices, I
| think it means they've been doing it wrong all this time. You
| ask them to write some code, then to explain and justify it.
| Whether or not it has been mostly written by an LLM really
| shouldn't matter... (leaving aside edge cases that prevent the
| use of some tools)
| kube-system wrote:
| The emergence of this problem is exactly why I don't like
| giving people programming exercises that are overly explicit.
|
| Most coding tests just tell people what to write and then have
| them write it. Real world problems are more complicated.
| Instead, tell your candidates what your problem is and then ask
| them for a solution. Let them write their own requirements.
| It's a lot harder for language models (and developers with poor
| problem solving skills) to solve these kinds of questions as
| well.
| wpietri wrote:
| Exactly. At points during a pairing interview when I get asked
| for more details on requirements, I'll make sure to reply
| with something like, "Which do you think is better for the
| user?" It turns out most developers have a pretty good sense
| for this, even if they're not used to being able to use it.
| filmgirlcw wrote:
| I really like this blog post, I think it fairly describes some of
| the challenges teachers are going to face with this technology,
| while also admitting that this tech is inevitable.
|
| > In my opinion, the students learning to program do not benefit
| from AI helping to "remove the drudgery" of programming. At some
| point, you have to learn to program. You can't get there by
| avoiding the inevitable struggles of learning-to-program.
|
| I don't disagree with this at all (at least for where we are now,
| in ten years, it might not matter as much), and I don't want to
| be glib, but I do think the answer is to "teach students to
| program." Don't rely on rote assignments that you're checking
| with an auto-grader (not saying this professor does that, but a
| lot do) and cookie-cutter materials; actually teach them to
| program.
|
| And yes, LLMs will almost certainly mean that some students will
| cheat their way out of their assignments. But just like most
| cheaters who cheat on things they don't fundamentally understand
| (which is different from people who cheat to hurry up and get
| through an exercise they could do in their sleep but don't want
| to waste the energy doing), it will catch up to them when they
| have to do something that is not part of the rote assignment.
|
| Or maybe, adjust how you test/assign homework. That isn't to
| imply that that won't take more work, but if your concern
| actually is that students aren't learning and are just
| successfully copying and pasting, the testing/grading is the
| problem.
|
| --
|
| In high school (and college), I was a top student in math and in
| English. But I hated doing homework. In one math class in high
| school, although I got near perfect scores on all of my math
| exams, the teacher still gave me a C because 20% of my grade was
| "homework" that I didn't do (I had already taken the same class a
| year earlier at another school -- I didn't need to do the
| homework. My math tutor my mom got me out of fear of my bad grade
| taught me Calculus and Fortran instead of trying to get me to do
| the useless homework). This taught me nothing and frankly, soured
| me on taking more advanced math classes that were all taught by
| this same teacher.
|
| In contrast, I had an English teacher who would assign vocabulary
| homework. Basic, "write a sentence with each word" shit. Again, a
| total waste of time for me. So I worked out a deal with him, let
| me just orally tell you what each word means, saving us both time
| and energy. He took the additional step of assigning me/grading
| me on different criteria than the rest of the class for essays
| and the like.
|
| Which class do you think I learned more from? Which teacher
| actually cared about whether I knew/understood the material,
| versus what checkboxes I needed to follow to show "completion."
|
| If the goal is to teach students to understand what they are
| doing, then do that. Don't get obsessed with trying to stop the
| inevitable few from cheating, or become overly focused on only
| having one way to measure comprehension.
| paxys wrote:
| > Apparently, it's not even something students feel they need to
| hide.
|
| Which is good! If a bit of work is trivially accomplished by a
| machine, we should take it for granted and move on to the next
| layer of complexity. I have always maintained that teachers
| complaining about students cheating at homework assignments with
| AI need to instead work on providing better homework.
| yamtaddle wrote:
| Creating the paper has never been the point of these things.
| The paper has no value at all, to anyone, as soon as the grade
| is issued.
| uneekname wrote:
| That may be the case for some students, but there are a
| number of papers I wrote in college that I am proud of and
| revisit regularly.
| SkyBelow wrote:
| Should we? Basic arithmetic has long since been solved, but
| I've met plenty of people who struggle at higher level math
| because they haven't mastered enough basic arithmetic. Solving
| complex problems will often involve many much simpler problems
| that must be solved along the way. Offloading each of these to
| another system is immensely more time-consuming than solving
| them mentally, which makes solving the complex problem far
| more expensive as well.
| Eventually students will reach problems whose price they can no
| longer afford.
|
| It is related to the reason we teach concepts starting with
| simple, small, easy to solve problems before building up. If I
| want to teach a student how to solve the limit of x*sin(1/x) as
| x approaches 0, I need them to understand quite a bit of math
| to even know what the problem is asking.
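|
| (For reference, the squeeze-theorem argument that example calls
| for, in LaTeX notation:
|
|     \left|\sin\tfrac{1}{x}\right| \le 1 \ (x \ne 0)
|     \;\Rightarrow\; -|x| \le x \sin\tfrac{1}{x} \le |x|
|     \;\Rightarrow\; \lim_{x \to 0} x \sin\tfrac{1}{x} = 0,
|
| since both bounds tend to 0 as x approaches 0.)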
| misnome wrote:
| You seem to have misunderstood the purpose of homework.
| jehb wrote:
| To be fair, I'm not sure there's consensus around the purpose
| of homework.
| thfuran wrote:
| The point of school work isn't to complete schoolwork, it's to
| learn.
| hutzlibu wrote:
| If the point of school work is to prepare for life, then you
| should mainly learn how to get a job done.
|
| And if they want to teach special skills, like writing essays
| without computer help, then they can test that onsite.
| rpearl wrote:
| The point of writing an essay is not to learn how to
| produce an essay. It's to learn analytical thinking,
| research, and argument skills.
| humanizersequel wrote:
| Knowing how to produce an essay is exactly the same as
| "analytical thinking, research, and argument skills" with
| the added challenge of making it legible to a reader --
| which is what makes those skills useful.
| Spivak wrote:
| I suppose, but having written plenty of essays as an
| adult I can say with complete certainty that nothing I
| learned from my 5 paragraph days was of any use. No one,
| not you, not your teacher, not any real life audience for
| any topic you would be presenting on or publishing for,
| wants to read anything remotely close to what you're
| forced to write in school.
| hutzlibu wrote:
| Well, as far as I know, that is a special case of an
| essay. And this you can test onsite.
|
| The kind of essays I had to write in school were more
| about nice-sounding words and less about content. CheatGPT
| can produce nice-sounding words, so I am hoping that the
| focus will move towards rewarding content.
| criddell wrote:
| It isn't clear to me if you disagree with the GP.
| [deleted]
| jasmer wrote:
| ???
|
| By this reasoning nobody should ever learn anything, because
| it's all 'trivially doable' by machine.
|
| Like 'addition' and 'subtraction'.
|
| So let's gaslight those dumb teachers by saying they should
| make up 'better' homework assignments?
| [deleted]
| thfuran wrote:
| Intractable and unsolved problems only!
| Spivak wrote:
| Or just novel applications of the things you learn in
| class.
|
| "Congratulations! You leaned depth-first-search! ^award
| noises^ Below is the algorithm for reference because
| memorizing it just for this test is silly. You're working
| on a real time mapping application called Maply. Locations
| are represented as nodes and all direct routes between any
| two nodes are represented by directed weighted edges."
|
| a) Write a function that takes a start node, an end node,
| and a maximum distance to travel and return the shortest
| path between the two.
|
| b) Your boss said that users need to be able to add stops
| along their journey. Write a function that takes the final
| path you computed in part a and the new node for the added
| stop and compute the amended path changing as few of the
| original legs of the trip as possible (don't want to
| disorient your users).
|
| c) Now your boss is saying you need to handle the situation
| where users make mistakes. Use the function you wrote in
| part b to implement this feature.
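|
| For concreteness, a sketch of part (a) in Python using
| Dijkstra's algorithm (weighted shortest paths need more than
| plain DFS; the example graph is made up):
|
|     import heapq
|
|     def shortest_path(graph, start, end, max_dist):
|         """Shortest path from start to end, capped at max_dist.
|         graph maps node -> [(neighbor, edge_weight), ...]."""
|         heap = [(0, start, [start])]
|         best = {start: 0}
|         while heap:
|             dist, node, path = heapq.heappop(heap)
|             if node == end:
|                 return path, dist
|             for nbr, w in graph.get(node, []):
|                 nd = dist + w
|                 if nd <= max_dist and nd < best.get(nbr, float("inf")):
|                     best[nbr] = nd
|                     heapq.heappush(heap, (nd, nbr, path + [nbr]))
|         return None, None  # unreachable within max_dist
|
|     graph = {"A": [("B", 2), ("C", 5)], "B": [("C", 1)], "C": []}
|     print(shortest_path(graph, "A", "C", 10))  # (['A', 'B', 'C'], 3)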
| l33t233372 wrote:
| For the record, I copied and pasted your comment in its
| entirety to ChatGPT and it answered each part flawlessly
| with well-written, commented code and a plain
| explanation of the code.
| ramesh31 wrote:
| >By this reasonning nobody should ever learn anything,
| because it's all 'trivially doable' by machine.
|
| >Like 'addition' and 'subtraction'.
|
| A better analogy would be low level coding. I don't know (or
| care) how my processor calculates `var f = 3+2` at the
| register level. And being able to ignore that allows me to
| focus on higher level concerns.
| jasmer wrote:
| I see what you mean, but it's not really a better analogy.
|
| We need to learn how to do addition at some point, so we
| can't have ChatGPT do that.
|
| We need to learn how 'registers' work, so we can't have
| ChatGPT do that.
|
| We need to learn basic algorithms work, so we can't have
| ChatGPT do that.
|
| AKA - almost anything being assigned as homework is the
| 'thing to be learned', and it's ridiculous to suggest that
| ChatGPT do that, and doubly so to gaslight teachers.
| filchermcurr wrote:
| The thing is, what applies to you doesn't necessarily apply
| to everybody. _Somebody_ has to understand low-level
| coding. Somebody has to be introduced to it without
| necessarily knowing going in that it will be a career path.
| Somebody will need to write compilers and reverse engineer
| and design CPUs. Just because a skill isn't valuable to
| you or those you know doesn't mean it isn't valuable to
| others, especially those who don't know enough yet to know
| that it might interest them.
| sixothree wrote:
| One of the required classes for a CS degree was Assembly
| Language. Nobody taking or teaching the class pretended
| there would be a great need for this language in a job
| setting. But that wasn't the point of this class.
| novok wrote:
| The failure mode of things like ChatGPT is it can make wrong
| answers confidently, subtly, and if you don't have the skill to
| audit what is wrong with the answer, then it can be fairly
| catastrophic.
|
| So instead of making questions generative, you make them audits /
| debugging type ones.
|
| Use ChatGPT to generate a result after several iterations that is
| wrong and then ask them what is wrong with the result. Since
| ChatGPT generated the wrong result, they will still need to debug
| it even if they try to do it themselves in ChatGPT, because daddy
| ChatGPT is not going to give them the right answer. You can even
| show them the chat transcript in the question. And often
| debugging is a harder skill than creating in some ways.
|
| It's not a 100% solution, and we don't know how long it will
| stay relevant, but it is something you can do today.
|
| That and inquisitive back and forth oral questioning maybe as
| part of the testing process.
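|
| A tiny illustration of the format (this buggy search is my own
| example, not from the post): hand students plausible-looking
| code and ask what's wrong with it.
|
|     def binary_search(a, target):
|         """Return the index of target in sorted list a, else -1."""
|         lo, hi = 0, len(a) - 1
|         while lo < hi:  # the subtle bug: should be lo <= hi
|             mid = (lo + hi) // 2
|             if a[mid] == target:
|                 return mid
|             elif a[mid] < target:
|                 lo = mid + 1
|             else:
|                 hi = mid - 1
|         return -1
|
|     print(binary_search([1, 3, 5], 5))  # -1: the final candidate
|                                         # is never examined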
| SV_BubbleTime wrote:
| I think it's an interesting point about making the questions
| audit or debugging.
|
| But it's also a mistake to harp on the often humorous fact of
| how confidently wrong our Gen1 AI can be. That will diminish
| over time. Imagine a time when AI is making good programming
| choices and correcting itself when it's wrong. Imagine Gen4 AIs.
|
| Both of these things tie together. If we need to detect
| cheating now, prove you can debug. When AIs get better, prove
| you can debug. It'll be the same skill either way.
| novok wrote:
| That is why I said "It's not a 100% solution, and we don't
| know how long it will stay relevant, but it is
| something you can do today."
| stillsleepy wrote:
| the other week i gave chatgpt a simple multiplication problem
| that it got wrong. very simple problem like 86 * 0.0007 or
| something. but ive been working with chatgpt for 4-5 weeks now
| and that wrong answer doesnt outweigh all the "good answers"
| that are usually not perfect. like one day i needed to COALESCE
| in mysql. i didnt know that, but chatgpt did. theres a few
| times i would have written a function the complicated way when
| gpt gave me a much simpler nicer to read way. i think the tool
| is great and tbh i dont like copilot in comparison and turned
| it off.
|
| i dont think chatgpt can be a 100% solution without several
| years of nerfing.
| Thorentis wrote:
| Multiplication problems are not language problems. There is
| nothing in the training data that makes the most probable
| next token in the sentence "86 * 0.0007 =" the correct
| answer.
|
| People need to stop treating ChatGPT as a computation engine.
| It is not wolfram alpha. It is not google. It is fancy
| autocomplete trained on a large subset of the internet.
| brycedriesenga wrote:
| You can use this for that:
| https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain
|
| (GPT + WolframAlpha + Whisper)
| resource0x wrote:
| I tested several AI chats, and none of them could
| correctly answer the question: "what is heavier, 1
| kilogram of nails or 1 kilogram of feathers?". Does this
| one know? (I don't have openai key, can't test it
| myself).
| josh2600 wrote:
| For now.
| HDThoreaun wrote:
| Why would you use chatGPT as a calculator? Use a calculator
| for that.
| cvhashim04 wrote:
| We have calculators, wolfram, google etc yet math exams and hw
| assignments are still administered. I think the approach to
| teaching and hw assignments especially in CS programs will have
| to change.
| resource0x wrote:
| The professor could just ask the student: how did you come up
| with this solution? What was your thinking?
|
| To prepare for this question, the student has to think really
| hard both about the problem and about the solution. While doing
| so, we _learn_ a lot - maybe more than we would learn otherwise.
| unreal37 wrote:
| Asking ChatGPT to explain the reason it gave an answer is a
| thing.
| resource0x wrote:
| My bet is that it won't be able to provide a human-level
| explanation.
| flangola7 wrote:
| Maybe not today. Tomorrow is another question.
| m3kw9 wrote:
| Before GPT-3 you could still hire people to cheat; it was a
| higher barrier to entry, but now literally everyone can do it
| instantly and for free. Take-home assignments must be
| de-emphasized in favor of in-person tests.
| stevage wrote:
| One possible temporary solution is to use ChatGPT to generate the
| solution to each question, and only include questions where its
| answer is incorrect.
|
| Then at worst the challenge for the student is debugging ChatGPT
| code, which still has merit.
| throw310822 wrote:
| I don't understand how most of the comments here seem to be along
| the lines of "these interview questions are useless now" or "we
| need to rethink education, it hasn't kept up". These all seem
| absurdly myopic to me.
|
| What we're seeing is the first instance, still very limited and
| imperfect, of AGI. This is not going to make some interview
| questions obsolete, or give students more tools to cheat with
| their homework. It is effectively proving that acquiring
| knowledge and professional skills is becoming useless, for good.
| In a few years (3, 5, 10 at most) this is going to defeat the
| entire purpose of hiring, and therefore of completing most forms
| of professional education. At the current rate of progress, most
| intellectual professions will be obsolete before today's sixth-
| graders are ready to enter the job market.
|
| I can't even picture a functional world where humans are cut out
| of most professions that don't involve manual work; where any
| amount of acquired knowledge and skills will be surpassed by
| machines that can produce better results at a thousandth of the
| cost in a thousandth of the time a human can. And even if such a
| world can function, I can't imagine a smooth transition to that
| world from its current state.
|
| I'm worried, does it show? :)
| haolez wrote:
| I don't know if this is an AGI-like experiment, because LLMs are
| trained on human knowledge. I'd expect that real AGI wouldn't
| need such a thing and would improve on its own. That's the
| moment when we become obsolete.
| mahathu wrote:
| Thank god for whiteboard coding interviews!
| braingenious wrote:
| >It's been breaking my understanding of what is and isn't
| possible, and forcing me to think about new ways to solve
| technical problems.
|
| followed up by
|
| >However, I'm not sure how to understand it as part of student
| learning.
|
| Is super funny
| indus wrote:
| Wasn't this day inevitable when colleges killed paper and oral
| exams?
| neilv wrote:
| I see a ton of people trying to justify their use of
| Copilot/ChatGPT (or rushing a startup to cash in on LLMs).
|
| Maybe that conflict of interest is why there's very little talk
| of it being based on plagiarizing and license violation of open
| source code on which the model was trained.
|
| We just suffered through a couple decades of almost every company
| in our field selling out users' privacy, one way or another. And
| years of shamelessly obvious crypto/blockchain scams. So I guess
| it'd be surprising if our field _didn't_ greedily snap up the
| next unethical opportunity.
| pixl97 wrote:
| Right, any human who has ever read open source code should
| also be forced to submit any code they write to ensure they've
| not mentally performed copyright violations.
|
| I don't give two shits if whatever current expensive GPT is
| dumping out code 'very similar' to open source code today. And
| you'd be chopping off your own nose if you did too. These
| models won't stay expensive to run forever, and by the time you
| or I could run 'LibreGPT' on our own hardware, we'd be scared
| as hell to even write code, because any use of it could get us
| sued into oblivion.
|
| Burn copyright to the ground.
| robocat wrote:
| That's why we have clean room development.
|
| For patents it is to avoid triggering triple damages.
|
| For copyright it is to help avoid allegations of plagiarism.
|
| > Burn copyright to the ground.
|
| Burn patents to the ground! Copyright has some use, but the
| Disney changes have somewhat ruined its purpose.
| [deleted]
| otterpro wrote:
| If I were a teacher, I'd give written exams for coding, as we
| used to have back in the 80's when we didn't have enough
| dedicated computers for everyone in the class, and most of kids
| didn't have a home computer to work with. So instead, we'd
| implement simple algorithms on paper for in-class
| quizzes/tests/homework. I remember in high school, we had an
| exam, and one of the questions was to write a for loop counting
| from 1 to 40. It might be harder to grade, but the teacher could
| usually tell if the code looked good or not. On one occasion, I
| thought I had written the code correctly but the teacher
| disagreed, so I had to run the code I had written in the test on
| a computer and show the teacher that it behaved as expected.
|
| It's kinda like whiteboard coding, but it's a lot less stressful
| since the test is mostly written on paper without the scrutiny of
| an interviewer. Obviously, we can't create a react web app, but
| at least students can demonstrate fundamentals, which are what we
| should be teaching in the first place.
|
| Another solution would be to create a new programming language
| that doesn't exist yet for students to write their assignments
| in, or perhaps use an obscure language that AI/ChatGPT isn't
| aware of... perhaps we can go back to Ada or Modula-3.
| travisgriggs wrote:
| > If I were a teacher...
|
| You'd be wondering how the heck to get out of a profession
| where pay is stagnant, the backlog of problems has been growing
| for a long time, and the resources to deal with it have been
| shrinking for just as long. You'd join the league of "you can
| never go wrong hiring another administrator" or exit for some
| related, parlayable field in industry and throw your hands in
| the air with a "somebody else's unsolvable problem" disgruntled
| attitude.
| asdff wrote:
| CS programs seem like a different beast. At my alma mater
| they are always growing the cs faculty pool. At certain
| schools you can really work with some giants in cs
| departments. People who developed the algorithms you are
| using today with a piece of chalk. A lot of the new hire
| faculty seem to come from industry too, secured their bag
| already, and are jaded from working in industry. Sometimes
| they come from those industry "skunkworks" R&D type teams
| where you'd assume everything should have been just peachy for
| them, but clearly not if they are looking for academic
| positions with their stacked resumes.
| austinl wrote:
| I graduated in 2015, and I had several classes where exams
| involved writing pseudocode by hand--particularly the
| algorithms-focused classes. I think universities will continue
| to do this.
| papaver wrote:
| let them cheat. every assignment let them know that in the end
| they are only hurting themselves. the ones that want to learn,
| won't cheat. simple as that. and the ones that cheat and
| leverage it, great. hope that works out in the future at your
| next destination. all you can ever do is present the facts. there
| is no point wasting time trying to catch this.
| spiderxxxx wrote:
| >hope that works out in the future at your next destination.
|
| That's the thing - a lot of code is generated with "Github
| Copilot" so it isn't considered "cheating" in the real world.
| They need to learn how to properly use the tools available to
| them. They'll be harmed by forcing them not to use this tech,
| so it makes sense to teach them how to use it better.
| indigodaddy wrote:
| Sure but if you allow it aren't you facilitating a potentially
| unfair (or unreliable) baseline?
| wayeq wrote:
| "In my opinion, the students learning to program do not benefit
| from AI helping to "remove the drudgery" of programming. At some
| point, you have to learn to program. You can't get there by
| avoiding the inevitable struggles of learning-to-program."
|
| Couldn't you say the same about how compilers 'remove the
| drudgery' of writing machine code? Or is that a bad analogy?
| Provided AI eventually gets good enough in its code generation,
| maybe 'programming' is moving up another layer of abstraction.
| mahoho wrote:
| Abstraction works because you are able to treat an abstraction
| as a black box and concern yourself only with its input and
| output. A segment of code written by an LLM is qualitatively a
| very different thing; it's more like an open box of crap that
| you have to inspect and put together yourself, which requires
| knowledge of the contents, which requires experiencing the
| drudgery.
| shadowgovt wrote:
| I rarely encounter abstractions in the wild that are as
| nicely "sealed" as the definition implies. Looking at an open
| box of crap and understanding why it's doing something other
| than what the author (or you) intend is a valuable skill.
|
| (No idea if this new model of "Ask ChatGPT or Copilot to
| synthesize a solution and then tune that solution" provides a
| solid opportunity to improve that skill yet, however).
| sublinear wrote:
| When I was a freshman in college the professor would live code
| various data structures and algorithms on the projector and ask
| the students to follow along. Each time he did this it was subtly
| unique.
|
| It was required that you continue to use this same base code for
| the assignments plus your edits to complete it. This made it
| obvious who didn't attend the lectures and who they copied from.
| Assignments were graded by your peers and did not affect the
| final grade unless you didn't do them at all. Quizzes were not
| code, but proofs written in your own informal style. Tests were a
| mix of both proofs and code on paper with no notes or devices
| allowed.
|
| I don't see how ChatGPT threatens this.
| rippercushions wrote:
| Good for you, but that's not how the vast majority of CS
| courses operate.
|
| Also, coding on paper with no access to devices is both
| terrible and has next to nothing to do with how CS grads will
| actually work.
| goatlover wrote:
| It is a computer science degree, with an emphasis on the
| science part, and a note that CS is not the same as coding
| computers. It's the science of computing, not a tech bootcamp
| to get you quickly up to speed in the latest hot language and
| framework.
| sublinear wrote:
| A CS degree is not a "bootcamp" or any other form of
| vocational training. Also, there are criteria for the school
| to maintain accreditations, you know.
| sebzim4500 wrote:
| Then why are people paying hundreds of thousands of dollars
| for them?
| sublinear wrote:
| To get an actual education instead of merely imitating
| whatever is trending on hacker news.
| cvhashim04 wrote:
| I think most just want a decent job to pay bills after.
| The rest love the field, the theory of cs, and may or may
| not continue into academics or research.
| ttul wrote:
| The best counter to ChatGPT cheating that I've heard of is to get
| students to orally defend their assignments in class. No computer
| is going to come to your rescue when you have to explain why you
| wrote the code you wrote.
|
| Encourage the use of ChatGPT and other tools. Put the emphasis on
| understanding what the code does and why. The tool may help to
| explain this to the student, but if so, that's fantastic. No need
| to worry. Learning has occurred.
| another_story wrote:
| They address the difficulties of doing this in the post.
| tommica wrote:
| Yeah, this is a good one - does it matter if you wrote the
| code, or a computer did? If you can explain how it works and
| why it is the way it is, that should mean you understand it
| and have learned the important parts.
| vsareto wrote:
| You _could_ ask it to explain line-by-line for you and
| memorize that enough to give an oral presentation. You may
| not get the best grade, but it might be enough for a passing
| grade.
| singularity2001 wrote:
| If you memorize it, that's halfway to understanding, so
| fair use.
| kerpotgh wrote:
| There isn't any time for this in a 500-person public university
| classroom.
| lostmsu wrote:
| One can delegate that to ChatGPT.
| hgomersall wrote:
| When I was at uni, there was no marking of assignments. The
| "examples papers" were basically your opportunity to
| understand the course before a supervision when you could
| discuss them. If you didn't do the examples, the supervisor
| didn't really care (typically a grad student that had enough
| of their own worries). My point is, if I'd cheated, it would
| have ultimately been my problem when it came to sitting the
| exam.
| captainmuon wrote:
| Exercises should not be about evaluating and judging. They
| should be about learning. If a student uses an AI, or copies
| someone else's work, it is to their own detriment. (Exams are a
| whole different question.)
|
| Even before AI, you could read a book and copy-paste the
| exercises, or just skip them, but if you wanted the full
| learning benefit you would type them out. I think we will have to
| focus more on _teaching how to learn_. This situation is nothing
| completely new. Even though you have power tools in woodworking,
| apprentices learn the basic techniques with hand tools (AFAIK).
| highspeedbus wrote:
| Using ChatGPT et al. to cheat in assignments should be viewed as
| a practice as low and shameful as ordinary plagiarism.
|
| I wish the best to educators, but they shouldn't lose any more
| sleep over it. The game is over.
|
| Unfortunately, the future tends toward pathological levels of
| distrust. Teaching ethics will be of central importance as
| never before.
| gravitronic wrote:
| Perhaps to teach programming at the university level we will need
| to better mimic real world software development. I always found
| the 300-line assignments to be a poor practice for the real job
| where you have 100k LOC legacy systems.
|
| We should work backwards from "what skills should students
| learn".
|
| Maybe we need to make larger assignments that need to pass larger
| acceptance tests. Students who chose to use chat gpt will also
| need to learn the skills necessary to debug its output.
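| To make that concrete, here is a minimal sketch of the kind of
| acceptance test I mean, using Node's built-in test runner
| against a hypothetical `lru-cache.js` module the student has
| to implement (all names are illustrative, not from any real
| course):
|
|   // acceptance.test.mjs -- run with: node --test
|   import { test } from 'node:test';
|   import assert from 'node:assert/strict';
|   import { LRUCache } from './lru-cache.js';
|
|   test('evicts the least recently used entry', () => {
|     const cache = new LRUCache(2);
|     cache.set('a', 1);
|     cache.set('b', 2);
|     cache.get('a');    // touch 'a'; 'b' is now least recent
|     cache.set('c', 3); // over capacity: 'b' should be evicted
|     assert.equal(cache.get('b'), undefined);
|     assert.equal(cache.get('a'), 1);
|   });
|
| Grade on passing a suite like this plus a design review, and
| it matters much less which tools got the student there.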
| Der_Einzige wrote:
| "Cheating" is good. School teachers and professors should not be
| the arbiters of how much money you earn later in life, but they
| are based on their subjective assessment of you (via grades).
|
| Many do not realize the intensely adversarial relationship one is
| in with a teacher. You are not there to learn. You are there to
| get an A. In the case of social sciences, you support the ideas
| your professor espouses. In the case of CS, you do whatever it
| takes to get your code running as the assignment specifies.
|
| Anything else is idealism which will harm your future earning
| potential. It's insulting to ask kids to surrender their future
| earning potential due to "ethics" in an academic world where
| even top-tier conferences and journals are filled with
| unreproducible BS science.
| DFHippie wrote:
| The people you're cheating are not your professors but your
| peers. For better or for worse, grades are used to rank people
| and mete out further opportunities and benefits. The professors
| already have jobs.
| [deleted]
| shadowgovt wrote:
| Let's unbox that a bit. In what sense are one's peers cheated
| when an individual cheats?
|
| If the course were going to have 20% A-level students and now
| has 50% A-level students, what has been taken from the
| initial 20%? They were still going to be able to put on their
| resume "4.0 GPA from Suchandsuch University."
| grrdotcloud wrote:
| Technology is amoral. ChatGPT reminds me of a nail gun vs. a
| hammer.
|
| Once you know how to properly use the tool, the benefits of
| positive work become evident while the human components remain.
|
| We still need the nail gunner to show up on time, to be aware
| of the undesirable outcomes an incorrect prompt may build, and
| to know that the impact of a hammer can be finely tuned to the
| target.
|
| Formal education is a trailing system, far behind the workers,
| innovators, designers, and experimenters.
| ford wrote:
| Perhaps this is another of the early (?) nails in the coffin for
| traditional higher education.
|
| If it becomes harder to assess if someone learned something (with
| a grade), the results of that assessment (GPA) become less
| valuable. Software has traditionally been at the forefront of
| allowing people with non-traditional backgrounds (bootcamps,
| other degrees, self-taught) to work in the highest echelon of
| jobs, because of experience outside of the classroom (open
| source, personal projects).
|
| ChatGPT and its ilk put more pressure on evaluation of candidates
| in interviews and should lend more weight to impact/experience
| based criteria on resumes (vs education-based).
|
| There is a spectrum of people using ChatGPT to cheat vs learn.
| But, ideally, "cheaters never win", so interviewers and resume
| screeners will soon be under as much pressure as educators to
| holistically evaluate candidates beyond the crutch/immediate
| signal of a degree. They're just further downstream.
| yamtaddle wrote:
| I did most of a humanities degree in the early- to mid-'00s and
| the only courses that relied heavily on long-form out-of-class
| writing exercises for grades were in the actual language
| departments (English, foreign languages).
|
| The rest were big on the occasional short quiz in-class to
| check understanding, and periodic "bluebook" exams that
| involved writing the equivalent of perhaps 3-5 total pages of
| typewritten material, by hand, in one class period, in person,
| in response to perhaps a half-dozen prompts. Basically a series
| of short, to-the-point essays. Not a ton of outside-of-class
| paper composition. I doubt they'd have trouble adjusting to
| drop those all but entirely.
| Workaccount2 wrote:
| Higher education will become just an optional prep course for
| your sit down conversational AI interview.
|
| Once AI can do a good job vetting candidates, I see no reason
| for companies not to have an open applicant process where
| anyone can interview and be evaluated. If you are sharp and
| know your shit, a degree won't matter and the AI interviewer
| won't care.
|
| But this is an "All else being equal" scenario, my true belief
| is that AI will change things so radically that there is
| effectively an event horizon in the near future; it's
| impossible to predict what's beyond it.
| ilc wrote:
| The event horizon you describe is always there, be it 3D
| printing, AI, Moore's law, etc. The things these technologies
| enable are hard to predict.
|
| Think about cloud computing. It changes the game massively
| for startups and for mere mortals who need enterprise-class
| infrastructure.
|
| Another constant tension that shows how unpredictable all
| this is: do you use kernel networking, let the kernel use
| hardware offloads, or go straight to DPDK? The right choice
| keeps changing as the hardware changes, the kernel changes,
| etc.
|
| Once you understand that life is ALWAYS at an event horizon,
| you understand AI is just another such event.
|
| Predicting the future is for the naive... Making the future
| is the way to go. Currently the AI guys are doing that. But
| another thing will rise up; it always does.
| brookst wrote:
| I think it just moves the definition of "learning" to a higher
| level of abstraction: so you know what AI tools to use, how to
| prompt them, and how to understand their output?
|
| I'm reminded of the time when graphing calculators were going
| to destroy math programs because nobody would "really know" how
| to do the work. And yet here we are, and math is fine, and
| calculators are just another tool.
| giancarlostoro wrote:
| You could force tests to be done in testing centers. My college
| had these and they were strict about what you can bring, you
| get up to a whole week to show up on your own time, and you're
| allowed only paper and pencil, if anything at all, which they
| provide. Make the final and midterm tests worth roughly 60% of
| the grade, and it won't matter if students cheat on their
| homework.
|
| Edit:
|
| Alternatively, have students do presentations of their code
| from their homework, just as we all do peer review
| professionally. Let students learn and teach other students.
| SV_BubbleTime wrote:
| I think the edit is more the case for the near future.
|
| I think we're about to see a shift from professors running
| the same curriculum year over year not really knowing
| students that come and go on a time conveyor belt, to
| something much closer to the imagination of the parents who
| are often paying for their kids' college "experience".
|
| OR - I see the tools used to cheat also being used to detect
| cheating.
|
| Hopefully both is the answer.
| spiderxxxx wrote:
| People are going to use it, so telling them not to doesn't work.
| You can demonstrate the proper way to use it, and the caveats of
| using it directly without any thought, but you need to explain
| how to analyze code, and how to check that it's doing what you
| think it's doing. Also remind them that it won't come up with new
| ways to solve a problem - it's trained on how people solved
| problems in the past, but not all the possible ways to solve a
| problem. The best students probably wouldn't use it, and wouldn't
| need to, or they will use it, but know when it's giving an
| inefficient or insufficient response. Either way, you can't stop
| it, so learn to harness it.
| nextlevelwizard wrote:
| Why should anyone care?
|
| If someone wants to coast they will, and it will be reflected
| later on when they can't get or hold a job, since they are
| just as shit as ChatGPT or Copilot.
|
| And if they can get and hold a job, then isn't that just
| better?
| josh2600 wrote:
| So most interview questions like "make a binary tree" are dead.
|
| The best interview question that will never die: "What's the
| weirdest bug you debugged? What made it weird?"
|
| For posterity: https://www.gamedeveloper.com/programming/my-
| hardest-bug-eve...
| meindnoch wrote:
| >So most interview questions like "make a binary tree" are
| dead.
|
| "Make a binary tree and explain it on this whiteboard"
| enraged_camel wrote:
| ChatGPT also has decent explanations tbh.
| MarcoZavala wrote:
| [dead]
| menzoic wrote:
| ChatGPT provides excellent explanations in multiple
| languages for all code I've seen it write.
| saghm wrote:
| How is it at writing physically on a whiteboard?
| CharlesW wrote:
| It currently requires a human for that.
| gptgpp wrote:
| I hear that it'll be an app you can run on your
| neuralink. I'm very excited to beta test it.
|
| Should be any day now just like full self driving.
| kerpotgh wrote:
| [dead]
| epicureanideal wrote:
| > "What's the weirdest bug you debugged? What made it weird?"
|
| If you can remember, 5-10 years after you solved it.
| Swizec wrote:
| Can't have been that good a bug, if you can't.
|
| Been close to 15 years since some of the horror stories I can
| tell. Hell, over 20 years for some of my favorite lessons.
| canadianfella wrote:
| [dead]
| excerionsforte wrote:
| I'd be interested in seeing how we can incorporate AI into
| interviews. Example being entry level software engineers using
| AI to jam through a small project with tests in a limited time
| span. Lazy engineers won't check the work, while others will
| use whatever the AI generates as their draft and correct
| whatever bugs are in it.
|
| I believe that we should be taking advantage of this
| productivity boost across the board.
| bcantrill wrote:
| Yes, agreed! We ask a variant of this question (we call it "an
| analysis sample"); from the materials we ask candidates to
| submit[0]:
|
| "A significant challenge of engineering is dealing with a
| system when it doesn't, in fact, work correctly. When systems
| misbehave, engineers must flip their disposition: instead of a
| creator of their own heaven and earth, they must become a
| scientist, attempting to reason about a foreign world. Please
| provide an analysis sample: a written analysis of system
| misbehavior from some point in your career. If such an analysis
| is not readily available (as it might not be if one's work has
| been strictly proprietary), please recount an incident in which
| you analyzed system misbehavior, including as much technical
| detail as you can recall."
|
| These samples are very revealing -- and it feels unlikely that
| generative AI is going to be of much help, even assuming a
| fabulist candidate. (And of very little assistance on our
| values-based questions like "when have you been happiest in
| your professional career and why?").
|
| [0] https://docs.google.com/document/d/1Xtofg-
| fMQfZoq8Y3oSAKjEgD...
| epicureanideal wrote:
| To be honest, this sounds extremely difficult and not in a
| good way. That sounds like many many hours of writing work,
| to describe a problem that might be many years in the past,
| that might have been solved by extremely intricate methods
| that are easy to forget, using technologies that are now not
| commonly in use, etc.
|
| A good question to ask about each interview question might
| be: would a good liar have an easier time answering this than
| a person trying to answer honestly? And if so, retire the
| question.
| bcantrill wrote:
| Having read many, many, many answers to this question, I
| don't think that a good liar has a particularly easy time
| answering this question -- or certainly not in a way that
| gets them further consideration!
|
| And yes, it's many hours of work -- but the work itself
| that we are doing is quite hard, and if someone washes out
| in the application process because it feels unduly arduous,
| we are likely not a fit for one another.
| l33t233372 wrote:
| > I don't think that a good liar has a particularly easy
| time answering this question -- or certainly not in a way
| that gets them further consideration
|
| How would you know?
|
| > And yes, it's many hours of work -- but the work itself
| that we are doing is quite hard, and if someone washes
| out in the application process because it feels unduly
| arduous, we are likely not a fit for one another.
|
| I sincerely hope that I never accidentally apply for a
| company that thinks an unpaid, long form writing prompt
| is an appropriate interview question because the work
| happens to be hard.
| xvector wrote:
| Eh, it also excludes people that don't have spectacular
| long-term memory, or people that don't keep a diary about
| bugs that they've chased down at work. Personally, I
| think you're overfitting to fight cheating, but maybe you
| work at a desirable enough place that you can afford to
| exclude so many people but still get enough good
| candidates.
|
| IMO a good question provides the necessary context
| itself, and the candidate's thinking and reasoning skills
| are what's tested. With your question, it's basically
| turned into a competition of which candidate has tackled
| the most ridiculous/obscure/complex bug, so candidates
| aren't being judged on even footing.
| mypalmike wrote:
| Agreed completely. This problem borders on that common
| category of questions which test whether the applicant
| shares a specific fine-grained flavor of nerdiness with the
| interviewer, rather than whether the candidate is a good
| fit for the job.
| netfortius wrote:
| > The best interview question that will never die: "What's the
| weirdest bug you debugged? What made it weird?"
|
| @ChatGPT: Give me three sample answers to the following
| questions related to <insert your interview language>
| programming: "What`s the weirdest bug you debugged? What made
| it weird?"
| CamperBob2 wrote:
| "One time, I was working on a project where the UI kept
| freezing when a certain button was pressed. I spent hours
| debugging the code, but couldn't figure out what was causing
| the problem. Eventually, I discovered that the button's event
| handler was accidentally triggering an infinite loop that was
| consuming all the CPU resources, causing the UI to freeze. It
| was a weird bug because the symptoms were not immediately
| obvious and it took a lot of digging to uncover the root
| cause. I once spent an entire day trying to
| figure out why my code was behaving erratically when
| processing a certain data file. It turns out that the file
| had some hidden control characters in it that were causing my
| program to misinterpret the data. The bug was weird because I
| had never encountered a situation where hidden characters
| were causing issues before. One time, I was
| working on a codebase that had been written by multiple
| developers over a long period of time. I was debugging a
| particularly tricky issue where certain data was being lost
| between different parts of the system. After a lot of
| investigation, I discovered that one of the earlier
| developers had implemented a workaround for a different issue
| that involved storing data in a global variable that was
| being accidentally cleared by another part of the code. It
| was a weird bug because it was caused by a seemingly
| innocuous change that had been made months earlier and was
| difficult to trace back to its origin."
|
| Welp, we're boned
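| Credit where due: the first answer at least describes a real
| failure mode. A hypothetical browser-JavaScript sketch of that
| bug - the handler never yields, so no other event (or repaint)
| can ever run:
|
|   document.querySelector('#save').addEventListener('click', () => {
|     let done = false;
|     while (!done) {
|       // 'done' is only ever set by another event handler --
|       // which can never run while this loop holds the single
|       // JS thread. The UI freezes.
|     }
|   });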
| alex_sf wrote:
| I mean the first and the third aren't great. So, 33% chance
| of being boned.
| razor_router wrote:
| That's interesting! What did you learn from debugging that
| weird bug?
| aeyes wrote:
| I learned that off-by-one errors, mixed-up arguments, and
| caches are hard to debug.
|
| I have easily spent days debugging many such problems, which
| were almost always solved by a one-line change. And rarely
| did I find ways to prevent similar bugs in the future by
| improving testing or code factoring.
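| The classic shape of it, as a contrived JavaScript example
| where the entire fix is one character:
|
|   const xs = [10, 20, 30];
|   // Off by one: <= runs the loop past the last index, so the
|   // final iteration reads xs[3], which is undefined.
|   for (let i = 0; i <= xs.length; i++) {
|     console.log(xs[i]); // 10, 20, 30, undefined
|   }
|   // The one-character fix: i < xs.length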
| lumost wrote:
| Is a hard bug the best interview question? When I do my job
| right, 98% of my time is not spent debugging. This ratio
| changed dramatically over the course of my career.
| BoorishBears wrote:
| While I'm not complaining that people are realizing they're
| dead... why is ChatGPT the final straw for those ridiculous
| "make a binary tree" questions?
|
| Why wasn't it the fact that these questions became such a
| gameable system that we started referring to them by the
| trademarked name of a site where you can access nearly every
| permutation that will ever be asked of you, along with
| extremely detailed solutions with rationale:
| https://leetcode.com/
|
| It's crazy to me that of everything that ChatGPT can do,
| regurgitating well known answers to well known interview
| questions is what kills anything off...
| ramesh31 wrote:
| CS programs will have to adapt to this or die. The reality is
| that five years from now, we'll all just be talking to LLMs all
| day and compiling/checking the results. It's no different from
| the shift from assembler to higher-level languages in the 80s.
| fellerts wrote:
| I don't disagree, but how do you imagine they should adapt?
| "Checking the results" is difficult if you are not able to
| perform the task on your own, which means you need to learn the
| task in the first place.
| shinycode wrote:
| It's a chicken-and-egg problem. Can ChatGPT come up with a
| framework to write code? Today it can only read docs and
| already-written code to create new code. Could it create a new
| language that perfectly fits the hardware it's running on? If
| so, there would be no need for any other software company.
| ChatGPT, write me an OS please, write me a photo and video
| editor. Write me a game about ... It seems really far-fetched
| because, could it really create new software with a different
| paradigm? Something that hasn't been done before? Given how it
| processes words from its input, it seems not.
|
| School teaches how to think. All the frameworks and some
| languages that are now used by millions didn't exist back
| then. But what if the point of rupture had been 10 years ago;
| would we have been stuck with old, non-innovative tooling and
| designs? We still need to teach people how to think and
| develop skills. The cheaters from my era never acquired good
| development skills. Today's cheaters are just as good as
| ChatGPT, so what's the point in hiring them? If all you have
| to do is enter commands in a prompt, let the marketing people
| do it and have their real no-code revolution.
|
| We are always going to need problem solvers and people with
| deep insight into how things work. Maybe it's time to dig
| deeper into the knowledge and write only truly meaningful
| code.
| EVa5I7bHFq9mnYK wrote:
| So, what if we give ChatGPT the code of ChatGPT, and ask it to
| improve upon it? If it can do that, we are fucked.
| dcow wrote:
| It seems ChatGPT should be treated the same way institutions
| treat plagiarism. It's fine to use LLMs for inspiration but the
| work you submit must be meaningfully your own.
|
| Of course it's not perfect, it never will be. But ultimately it's
| the student that suffers when they plagiarize. Professors
| ought to have this conversation with students at the same time
| they're discussing the value of learning & education. I
| suspect we'll
| build tools that match answers against the text that LLMs spit
| out and that will make it easy for people to detect when answers
| are being pasted verbatim from LLMs.
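| (For the record, a toy sketch of what such a matching tool
| might do - Jaccard overlap of word 5-grams between a
| submission and stored model output. A real detector would be
| far more robust; `submission`, `knownLlmAnswer`, and `flag`
| are placeholders:)
|
|   function shingles(text, n = 5) {
|     const words = text.toLowerCase().split(/\s+/).filter(Boolean);
|     const set = new Set();
|     for (let i = 0; i + n <= words.length; i++) {
|       set.add(words.slice(i, i + n).join(' '));
|     }
|     return set;
|   }
|
|   function jaccard(a, b) {
|     const sa = shingles(a), sb = shingles(b);
|     let hits = 0;
|     for (const s of sa) if (sb.has(s)) hits++;
|     const union = sa.size + sb.size - hits;
|     return union === 0 ? 0 : hits / union;
|   }
|
|   // Flag submissions that overlap heavily with stored LLM text.
|   if (jaccard(submission, knownLlmAnswer) > 0.5) flag(submission);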
|
| The other, broader, spicier thought that comes to mind is that
| LLMs may push the Overton window for what's considered entry
| level, and that's probably ultimately good, though it won't be
| without some sacrifice. This would mean CS departments may
| need to do
| a curriculum iteration to accommodate. Perhaps there are new,
| slightly more sophisticated, entry level problems that can be
| tackled where the commodity solutions haven't yet been entombed
| in LLMs. Maybe assessments will shift to be less about
| regurgitating the solution to a common problem and instead to
| fixing problems with some localized toy implementation or some
| fictitious API that LLMs won't know about.
| corbulo wrote:
| It will be a hard sell since it or similar tech has already
| been long embraced commercially. A bit like forcing students to
| use an abacus while calculators are around.
|
| The student should still be responsible for what they submit
| and all of its errors of course. But what should really change
| is our methods of assessing students. The university model for
| undergrads has long been broken and in need of fixing. I don't
| think this will significantly impact grad students yet.
| gattilorenz wrote:
| > A bit like forcing students to use an abacus while
| calculators are around.
|
| But... we do that already. We ask students not to use
| calculators, or not to use scientific calculators, during
| exercises, exams, etc. And it's not because we want to impose
| an extra load on them, but because we think that by not using
| calculators you learn to deal with numbers, you build a
| strong foundation for thinking about math, quantities, etc.
|
| But since we're talking about essays and coding assignments
| that are graded, one of the objections is that they should not
| be graded, because the goal is to use them for learning.
|
| Sure, at the university, the ideal scenario is that students
| are there to learn and will do assignments even when they are
| not graded. That happens, but it's for a minority of them,
| because (understandably) the university is also a time for
| parties, relationships, etc, and students are young... In
| practice, many assignments are graded mostly to make sure the
| students do them, learn from them, and pass the course (maybe
| a patronizing approach, but in the end, if only 10% of the
| students pass, the teacher won't have an easy life with
| his/her superiors - but that's another story).
| dcow wrote:
| > A bit like forcing students to use an abacus while
| calculators are around.
|
| I said allow students to use ChatGPT. Just make it clear that
| pasting answers from it verbatim, just like doing so from SO,
| is called plagiarism and does not benefit the academic
| community or themselves. There will always be cheats. Agree
| about shifting evaluation methods.
| corbulo wrote:
| What about when it gives the 'best' word choice? Should a
| student change it arbitrarily?
| dr_dshiv wrote:
| Just put it into Quillbot.
| la64710 wrote:
| A lot of time is saved by ChatGPT. But we need to get better
| at testing and debugging.
| medion wrote:
| Chomsky, in a recent interview, said if students are cheating
| with these tools, it's the fault of the teacher for not creating
| work that is interesting enough to be absorbed by ...
| shadowgovt wrote:
| There is nothing interesting about pointer dereferencing errors
| (or their slightly-more-turd-polished equivalent, null
| references). Absolutely nothing at all.
|
| Most programmers will spend a non-trivial amount of their
| career fussing over them, however, and any programming
| education program that doesn't at least touch on how to
| identify them and what to do is pedagogically void.
| HarHarVeryFunny wrote:
| When current and future students are out in the workforce they'll
| have access to tools like CoPilot and ChatGPT, not to mention
| plain Stack Overflow cut-n-paste, so if education is meant to be
| preparing kids for the workplace, then the logical thing to do is
| allow/teach them to use these tools.
|
| OTOH, they still do need to learn how to program and debug (more
| important than ever if you're using machine-generated code likely
| to contain bugs, and with unknown error handling), so it seems
| colleges also need to make the assignments complex enough that
| the current crop of tools can only be used to assist - not to
| write nearly all the code.
|
| It'll be interesting to see how these new tools affect hiring
| standards... Doesn't make much sense to pay someone just to get a
| tool to write the code, so maybe the bar will be raised there
| too.
| [deleted]
| xutopia wrote:
| I foresee a time when we'll write BDD tests and the AI will write
| the app.
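| Something like this, presumably - a sketch assuming a
| Jest-style runner, where the human writes the spec and a
| hypothetical `cart.js` module is what the AI would be asked to
| generate:
|
|   import { cart } from './cart.js'; // the AI writes this part
|
|   describe('shopping cart', () => {
|     it('totals line items', () => {
|       const c = cart();
|       c.add({ sku: 'tea', price: 3, qty: 2 });
|       expect(c.total()).toBe(6);
|     });
|
|     it('rejects negative quantities', () => {
|       expect(() => cart().add({ sku: 'tea', price: 3, qty: -1 }))
|         .toThrow();
|     });
|   });
|
| The spec becomes the program; the generated code is just an
| artifact you regenerate when the spec changes.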
| jimbobimbo wrote:
| All this tells us that whiteboard interviews are here to stay.
| warning26 wrote:
| With the general shift to remote-first, it's a lot more
| difficult to verify that your whiteboarder isn't ChatGPT-ing
| just off-screen.
| flangola7 wrote:
| Who says anything on the screen will be real?
|
| "Hey MultimodalGPT, look like me on a Zoom call and pass this
| interview."
|
| Text2Video is already moving quickly. Maybe in as little as
| three years a video feed will be as trustworthy as an email.
| beepbooptheory wrote:
| Just cracking myself up thinking about 20 years from now where at
| some startup the sole engineer (prompt engineer?) who has been
| trying for hours to figure out why their program doesn't work
| finally gives up, asks the model "what is the difference
| between 'let' and 'const'?", and gets a mostly verbatim
| Stack Overflow answer back. A beautiful story to me.
| dcow wrote:
| What _is_ the difference between `let` and `const`?
| singularity2001 wrote:
| https://stackoverflow.com/questions/22308071/what-is-the-
| dif...
|
| The difference between let and const is that once you bind a
| value/object to a variable using const, you can't reassign to
| that variable. In other words:
|
|   const something = {};
|   something = 1;          // TypeError: assignment to constant
|
|   let somethingElse = {};
|   somethingElse = 100;    // This is ok
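| Worth the usual footnote that answer glosses over: const
| prevents reassignment of the binding, not mutation of the
| object it points to.
|
|   const box = {};
|   box.contents = 'tea'; // fine: the object itself is mutable
|   box = {};             // TypeError: assignment to constant
|
| If you want the object itself locked down, that's
| Object.freeze(box).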
___________________________________________________________________
(page generated 2023-02-20 23:00 UTC)