[HN Gopher] Testing how hard it is to cheat with ChatGPT in interviews
       ___________________________________________________________________
        
       Testing how hard it is to cheat with ChatGPT in interviews
        
       Author : michael_mroczka
       Score  : 70 points
       Date   : 2024-01-31 17:35 UTC (5 hours ago)
        
 (HTM) web link (interviewing.io)
 (TXT) w3m dump (interviewing.io)
        
       | babyshake wrote:
       | IMO asking people to not use available tools in interviews is a
       | bad idea, unless you are trying to do a very basic check that
       | someone knows the fundamentals.
       | 
       | Allow them to use the tools, with a screenshare, and adjust the
       | types of tasks you are giving them so that they won't be able to
       | just feed the question to the LLM to give them the completed
       | answer.
       | 
       | Interviews should be consistent with what day to day work
       | actually looks like, which today means constantly using LLMs in
       | some form or another.
        
         | leeny wrote:
         | I'm not the author (perhaps he'll chime in as well), but I'm
         | the CEO of interviewing.io (we're the ones who ran the
         | experiment).
         | 
         | I think it depends on whether the interviewer has agreed to
         | make the interview "open book". Looking up stuff on Stack
         | Overflow during the interview can be OK or can be cheating,
         | depending on the constraints.
         | 
         | In this experiment, the interviews were not "open book". That
         | said, I am personally in favor of open book interviews.
        
           | michael_mroczka wrote:
           | I AM the author, and I also am in favor of "open book"
           | interviews. I'm not against ChatGPT use in interviews, but if
           | you're doing it secretly in an interview that clearly is
           | meant to be "closed book," I think it's fair to say you're
           | cheating.
        
             | pierat wrote:
             | Well that's the rub. There's no way, even for a senior
             | engineer, to know everything. In fact, one of the required
              | skills is "how to ask the question so as to elicit the
              | answer in a reasonable amount of time".
             | 
             | The closed-book crap can stay closed in the universities
             | and schools demanding a regurgitation of mostly-right
             | knowledge.
             | 
             | Now... The skill of asking the right Qs also directly
             | intersects with LLMs, and how to discern good/bad
             | responses.
             | 
             | But hiding it? Yeah, probably not a good fit.
        
               | vinni2 wrote:
               | > The closed-book crap can stay closed in the
               | universities and schools demanding a regurgitation of
               | mostly-right knowledge.
               | 
               | I work at a university and most of our exams are open
               | book or project based. You probably want to update your
               | image of universities.
        
             | randomdata wrote:
             | _> an interview that clearly is meant to be  "closed
             | book,"_
             | 
             | I am not sure that is clear. It seems the expectation was
             | not "closed book", but "never opened a book before, not
             | even in the past":
             | 
             |  _" It's tough to determine if the candidate breezed
             | through the question because they're actually good or if
             | they've heard this question before."_
             | 
             | Clearly the interviewers were looking not for knowledge,
             | but for uncanny ability. How well was that communicated to
             | the interviewees?
             | 
             | It is not cheating if the rules of the game are not
             | defined.
        
             | gabrieledarrigo wrote:
             | > but if you're doing it secretly in an interview that
             | clearly is meant to be "closed book," I think it's fair to
             | say you're cheating.
             | 
              | I would argue the opposite: it's fair to say that the
              | interview is the cheat.
        
             | jacques_chester wrote:
             | > _I also am in favor of "open book" interviews._
             | 
             | I recall reading an interviewing.io blog post[0] in which
             | the dominant considerations interviewers weighed were (my
             | interpretation):
             | 
              | (1) Did they solve the problem optimally?
              | (2) How fluid was their coding?
             | 
             | With "communication" turning out to be basically worthless
             | for predicting hire/no-hire decisions.
             | 
             | Perception of coding fluidity seems like it would be
             | affected by how often the candidate stops and looks up
             | things like library functions or obscure syntax.
             | 
             | For that reason I've been investing time in committing a
             | lot of library functions to memory, so they instantly flow
              | from my fingers rather than spending a minute looking
              | them up.
             | 
             | It's dumb that I need to do this, but I don't make the
             | rules. I'm just at the bottom of the information cascade
             | that led to how things are done now.
             | 
             | [0] https://interviewing.io/blog/does-communication-matter-
             | in-te...
        
               | madeofpalk wrote:
               | Companies that optimise for memorising obscure stdlib
               | functions don't seem like great places to work.
        
           | BriggyDwiggs42 wrote:
           | Given your knowledge of the subject, do you think leetcode-
           | type questions are meaningfully able to appraise an
           | employee's performance in a production environment? I've
           | always thought it was basically unrelated beyond testing
           | basic coding experience.
        
             | michael_mroczka wrote:
             | The short answer is, "Yes." They are very flawed, but one
             | of the most reliable ways to avoid "bad" hires.
             | 
             | The longer answer is: Fundamentally, you need to address
             | the fact that there exist a _huge_ number of people in this
              | industry declaring they have master's degrees/PhDs or
              | years of industry experience, but when pressed they can't
              | write even the simplest of functions.
             | 
             | While we called it out explicitly, some folks seem to miss
             | that "Custom" questions are still fundamentally DS&A
             | leetcode-style questions. I completely agree that "leetcode
             | style" interviews are flawed, but most people don't have a
             | better answer for this problem that still guarantees the
             | person you're hiring actually can code.
             | 
              | We are optimizing for coders who make good choices
              | quickly. If they can write efficient code for toy CS
              | problems, then you at least guarantee that 1) they can
              | actually code and 2) they can code simple things quickly.
             | Non-coding interviews allow you to hire people who can't do
             | these basic things and therefore guarantee their
             | performance is worse in a production environment.
        
         | danielvaughn wrote:
         | I'm fairly open book, but I wouldn't accept LLM usage in an
         | interview. I don't need someone to have all the facts in their
         | head, so if they have to look up some syntax or whatever, then
         | totally fine.
         | 
         | However, my style of interview is LLM-proof anyways. I have
         | "shop talk" style interviews, where I just chat with the
         | developer for an hour or so about various topics. Makes it very
         | easy to get a sense of their depth, and how interested they are
         | in the job domain.
        
           | michael_mroczka wrote:
           | Exactly. People end up throwing the baby out with the
           | bathwater on this. "Data structure & algorithm interviews
           | aren't perfect, so let's not ask people to code at all." It's
           | an absurd overcorrection, but most people think these
            | interviews are about demanding optimal code and perfection,
            | when mostly they're just making sure you're not using
            | arrays when you should be using hashmaps... and that you
            | know what a hashmap is, I suppose.
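The arrays-vs-hashmaps point above is essentially the classic two-sum pattern: swapping a nested scan for a dictionary lookup drops the cost from O(n^2) to O(n). A minimal sketch in Python (function names and sample values are illustrative, not from the thread):

```python
def two_sum_naive(nums, target):
    """O(n^2): check every pair -- the 'array' approach."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (i, j)
    return None

def two_sum_hashmap(nums, target):
    """O(n): remember each value's index as we scan once."""
    seen = {}  # value -> index
    for i, x in enumerate(nums):
        if target - x in seen:
            return (seen[target - x], i)
        seen[x] = i
    return None

# Both return (0, 1) for nums=[2, 7, 11, 15], target=9.
```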
        
         | elicksaur wrote:
         | > Interviews should be consistent with what day to day work
         | actually looks like, which today means constantly using LLMs in
         | some form or another.
         | 
         | Consider that this may not be typical.
        
           | ilc wrote:
           | Consider... it might be. Seriously, I work for a company very
           | protective of its IP.
           | 
           | And I can still use ChatGPT and similar tools for some of
           | what I do. It is a huge force multiplier.
        
           | CaptainFever wrote:
           | > 70% of all respondents are using or are planning to use AI
           | tools in their development process this year. Those learning
           | to code are more likely than professional developers to be
           | using or use AI tools (82% vs. 70%).
           | 
           | Source: https://survey.stackoverflow.co/2023/#ai-sentiment-
           | and-usage
           | 
           | To be fair, the number of "Yes" was "just" 43% but that's
           | still a very large amount of developers, not including those
           | who plan to use it.
        
           | madeofpalk wrote:
           | Do you consider it typical for development to look things up
           | on google, documentation websites, or stack overflow?
        
       | bluedino wrote:
        | The last couple of interviews I've had warned about using
        | ChatGPT, so it must be happening.
        
       | m1el wrote:
        | I've had the displeasure of interviewing someone who used ChatGPT
       | in a live setting. It was pretty obvious: I ask a short question,
       | and I say that I expect a short answer on which I will expand
       | further. The interviewee sits there in awkward silence for a few
       | seconds, and starts answering in a monotone voice, with sentence
       | structure only seen on Wikipedia. This repeats for each
       | consecutive question.
       | 
        | Of course this will change in the future, with more interactive
        | models, but people who use ChatGPT in interviews do a
        | disservice to themselves and to the interviewer.
       | 
       | Maybe in the future everybody is going to use LLMs to externalize
       | their thinking. But then why do I interview you? Why would I
       | recommend you as a candidate for a position?
        
         | ptmcc wrote:
         | Yes of course! I'd be happy to answer your short question with
         | a short answer. I look forward to expanding further on the
         | answer, as you previously stated that you expect me to.
         | 
         | Jokes aside, something about LLM responses is very uncanny
         | valley and obvious.
        
           | chewxy wrote:
           | The peppy, upbeat, ultra-American tone that the LLMs produce
           | can be somewhat toned down with good prompting but
           | ultimately, it does stink of the refinement test set.
        
         | renewiltord wrote:
         | That sounds great, doesn't it? You got powerful negative
         | signal.
        
           | lcnPylGDnU4H9OF wrote:
           | It sounds like the problem is really that this is the most
           | obvious cheater. Someone better at manipulation and deception
           | might do a better job cheating the interviewer such that
           | they're hired but then be entirely inadequate in their new
           | position.
        
         | m1el wrote:
          | Oh, and to add insult to injury, I was using a
          | collaborative editing tool. So I was able to see the person:
         | 
          | 1) Select All (most likely followed by the copy)
          | 2) Type the answer
          | 3) Make an obvious mistake, typing the else block before the
          | if
        
           | frabjoused wrote:
           | That was me interviewing someone yesterday. The telltale
           | select all is so cringe.
        
             | pests wrote:
             | Some people compulsively highlight what they are reading.
        
               | jcranmer wrote:
               | I'm a compulsive highlighter too, but it's generally in
               | the vein as xkcd (https://xkcd.com/1271/) and not a
               | select all. Frequently, highlighting ends up starting in
               | the middle of a word!
        
           | willsmith72 wrote:
            | I have a really annoying habit of constantly double-clicking
            | to highlight whatever I'm reading or looking at.
            | 
            | I've actually been called out for it in a systems design
            | interview, under the presumption I was copying my notes into
            | another window, but I was glad they called me out so that I
            | could explain myself.
        
             | storyinmemo wrote:
             | ... as I'm reading through this doing my normal random
             | highlight of text while I read...
        
         | blharr wrote:
         | The idea that spotting cheating is obvious is a case of
         | selection bias. You only notice when it's obvious.
         | 
         | Clearly, the person put 0 effort towards cheating (as most
         | cheaters would, to be fair). But slightly adjusting the prompt,
         | or just paraphrasing what ChatGPT is saying, would make the
         | issue much harder to spot.
        
           | al_borland wrote:
           | Maybe I'm a slow reader, but reading, understanding, and
           | paraphrasing the response seems like it would take enough
           | time to be awkward and obvious as well.
           | 
           | I'm not sure why anyone would want a job they clearly aren't
           | qualified for.
        
             | xboxnolifes wrote:
             | > I'm not sure why anyone would want a job they clearly
             | aren't qualified for.
             | 
             | $$$,$$$
        
               | dmoy wrote:
                | Well, five moneys at least. They might figure it out
                | and fire you before you get to six moneys (but maybe
                | they won't, who knows).
        
           | irrational wrote:
           | We will have to start studying people's eyes to see if they
           | are moving as if reading text.
        
             | ThrowawayR2 wrote:
             | I predict that that will be followed shortly by a
             | mysterious sharp increase in applicants claiming to have
             | nystagmus (https://en.wikipedia.org/wiki/Nystagmus), which
             | causes random involuntary eye movements, but without any
             | medical documentation.
        
               | Volundr wrote:
               | What's interesting is this wouldn't necessarily imply
               | cheating. That doesn't sound like an issue I'd
               | necessarily draw attention to under normal circumstances,
               | but if I knew interviewers were likely to be paying close
               | attention to my eye movements I certainly would.
        
               | ThrowawayR2 wrote:
               | Yes, exactly. I have nystagmus myself because of an
               | underlying medical condition that causes other vision
               | problems and it's depressing that interviewers might
               | think it's reason for suspicion.
        
             | jurynulifcation wrote:
             | why wouldn't a cheater just pipe a generative audio model
             | through a small earbud? like that one villain from season 3
             | of westworld
        
             | Fetiorin wrote:
             | There is already an app from Nvidia that simulates constant
             | eye contact with the camera
        
         | duxup wrote:
         | I've wondered how much of the appeal of LLMs is for humans to
         | BS other humans.
        
           | foxyv wrote:
           | Considering how much time is spent on manufacturing BS for
           | consumption by bosses, professors, teachers, and advertising?
           | I think this is going to automate at least half of the work
           | office workers and students are doing now...
        
         | outside415 wrote:
          | Several friends and I have used ChatGPT in live interviews to
         | supplement answers to topics we were only learning in order to
         | bridge the gap on checkboxes the interviewer may have been
         | looking for.
         | 
         | We've all got promotions by changing jobs in the last 6 months
         | using this method.
         | 
         | You can be subtle about it if it's already an area you kind of
         | know.
        
           | jacques_chester wrote:
           | So, assuming they didn't know and approve, you cheated.
        
             | lcnPylGDnU4H9OF wrote:
             | Dirty, dirty cheater! Sounds like they would have been able
             | to perform the job duties so I'm not sure why one should
             | care.
        
               | jacques_chester wrote:
               | That someone has the skills for a job is distinct from
               | whether they are able to uphold a simple moral principle
               | like "don't cheat".
        
               | lcnPylGDnU4H9OF wrote:
               | The interviewer is full of themself if they think someone
               | who can do the job cheated in the interview.
        
               | ThrowawayR2 wrote:
               | Those who lie about one thing are likely to lie about
               | many others.
        
               | dataflow wrote:
               | There is literally not enough information to tell if they
               | can perform their job duties or not.
        
               | jurynulifcation wrote:
                | That job could have gone to someone who actually knew
                | what they were doing and was honest. Not sure why you
                | want to defend professional and intellectual
                | dishonesty?
        
               | lcnPylGDnU4H9OF wrote:
               | > intellectual dishonesty
               | 
               | This suggestion that a person who can adequately perform
               | job duties could have even possibly cheated in their job
               | interview is intellectually dishonest. If they had to
               | cheat to get the job we should be looking at the
               | interviewer. Why did the qualified candidate have to
               | cheat? Why is whatever-they-did even considered cheating?
        
           | al_borland wrote:
           | I like when a person admits they don't know something in an
           | interview. It shows they aren't afraid to admit when they
           | don't have the answer instead of trying to lie their way
           | through it and hoping they don't get caught. Extra bonus
           | points if they look the thing up later to show they are
           | curious and want to close knowledge gaps when they become
           | aware of them.
           | 
           | People who are unwilling to say, "I don't know, let me look
           | into that," are not fun to work with. After a while it's hard
           | to know what is fact vs fiction, so everything is assumed to
           | be a fabrication.
        
             | KTibow wrote:
             | You could argue that researching it then and there proves
             | that you know how to learn stuff quick. I agree that there
             | should be disclosure though.
        
               | al_borland wrote:
               | Yeah, the disclosure is very important. It's the
               | difference between an open book test and notes written on
               | their thigh.
               | 
               | During some interviews I'd give people access to a
               | computer. If they could quickly find answers and solve
               | problems, that is a skill in itself, but I could see what
               | they were looking up. Sometimes that part would make or
               | break the interview. Some people didn't have a deep base
               | of knowledge in the area we were hiring for, but they
               | were really good at finding answers, following
               | directions, and implementing them successfully. They
               | would be easy to train on the specifics of the job. Other
                | people couldn't Google their way out of a paper bag; I
                | was shocked at how bad some people were at looking up
                | basic things. Others simply quit without even attempting
               | to look things up.
        
             | JohnFen wrote:
             | When I am interviewing candidates, one of the things that
             | I'm looking for is that the applicant is willing to say "I
             | don't know" when they don't know. That's a positive sign.
             | If they follow that up with a question about it, that's
             | even better.
             | 
             | If a candidate is trying to tap-dance or be vague around
             | something to avoid admitting ignorance of it, that's a
             | pretty large red flag.
        
           | jurynulifcation wrote:
           | I've been applying for jobs recently. Thanks for adding a new
           | factor to the competition. Super glad to know I might be
           | getting outcompeted by know-nothing assholes because I'm
           | trying to keep honest. You and your buddies can go fuck
           | yourselves. Honestly mad I've been stupid enough to try
           | competing on my own merits. What scum.
        
         | foxyv wrote:
         | To be honest, I think in the future we will interview people on
         | their ability to work with an LLM. This would be a separate
         | skill from the other ones we are looking for. Maybe even have
         | them do some fact checks on a given prompt and response as well
         | as suggest new prompts that would give better results. There
         | might even be an entire AI based section of an interview.
         | 
         | In the end, it's just a new way to "Google" the answer. After
         | all, there isn't much difference between reading off an LLM
          | response and just reading the Wikipedia page after a quick
          | Google search, except with fewer advertisements.
        
           | jacques_chester wrote:
           | I agree that this is the likely long term outcome. But for
           | now folks want to think that everyone needs to have memorized
           | every individual screw, nail, nut and bolt in the edifice of
           | computer science.
        
         | lmm wrote:
         | > But then why do I interview you? Why would I recommend you as
         | a candidate for a position?
         | 
         | Presumably you have tasks that you want performed in exchange
         | for money? (Or want to improve your position in the company
         | hierarchy by having more people under you or whatever).
        
         | JohnFen wrote:
         | > Of course this will change in the future, with more
         | interactive models
         | 
         | I think that what will change is that doing interviews remotely
         | will become rarer, in favor of in-person interviews.
        
       | michael_mroczka wrote:
       | I'm the author of this post. Happy to answer questions if you
       | have any. This was such a fascinating experiment!
        
       | nostromo wrote:
       | I'm glad ChatGPT could be the end of leetcode interviews.
       | 
       | I worry though that it'll just be the end of online leetcode
       | interviews and employers will bring people back into the office
       | to interview.
        
         | zeta0134 wrote:
         | Would this necessarily be a bad thing? "In the office" could
         | substitute for a video call. I always got the impression that a
         | coding challenge during an interview was much less "did you
         | memorize this solution in advance" and much more indirect, like
         | "what is your general problem solving methodology, do you ask
         | good questions, etc." Maybe I haven't been on the receiving end
         | of enough bad interviews?
        
         | nottorp wrote:
         | I wouldn't mind doing an interview face to face for a fully
         | remote job :)
         | 
         | The reverse, yes I would mind.
        
         | doctorpangloss wrote:
         | They'll change the questions.
         | 
         | No chance in hell leetcoding is going away. It will be even
         | more important, with even greater ceremony.
         | 
         | > employers will bring people back into the office to
         | interview.
         | 
         | Nothing stops people from getting the questions they'll be
         | asked ahead of time from insiders, like their friends or a
         | recruiter, which is really how people have been cheating. This
         | is how it is possible to be Google and have identical standards
         | for years but nonetheless observe overall quality of hires go
         | down.
        
           | timeagain wrote:
            | Based on my experience, receiving the questions beforehand
            | is not super likely. Getting other interviewers' questions
           | beforehand /as another interviewer in the loop/ is already
           | like pulling teeth.
        
           | nostromo wrote:
           | > They'll change the questions. No chance in hell leetcoding
           | is going away.
           | 
           | I'd take that bet. Leetcoding is something that GPT 4 is very
           | good at doing -- and it does it faster than any engineer can
           | type, let alone think.
        
           | fragmede wrote:
           | > No chance in hell leetcoding is going away.
           | 
            | Hopefully it does. If an LLM can do that, why should I have
            | to, either in an interview or outside of one? LLM-
           | assisted programming is where it's at, and there's no going
           | back. Being able to do a leetcode isn't a good test of a
           | candidate in the first place.
        
             | diarrhea wrote:
             | We didn't have to invert binary trees outside interviews
             | before LLMs either, yet leetCode is where it's been at.
        
           | sureglymop wrote:
            | It's also because one can study and grind and optimize for
           | leetcode. So it's really about who has the time, resources
           | (and incentive to work for free) to really grind leetcode
           | before the interview.
           | 
           | In a way it's probably equivalent to students who just bang
           | everything into their head before the semester exam and
           | forget it all again two weeks after. (Not that that's a bad
           | thing, it just doesn't really say anything about a candidate)
        
         | randmeerkat wrote:
         | People are allowed to use Google in an interview, why not
         | ChatGPT..? If you interview with a company that won't let you
         | use the tools you would use in your day job, it's not somewhere
         | worth working.
        
         | tayo42 wrote:
          | Maybe we can get a certification authority and only do
          | leetcode once, in person, instead of doing it 2 to 3 times
          | for every company we interview with.
        
         | devmor wrote:
         | I dislike whiteboard interviews in general, but I personally
         | don't particularly mind them if they're more of a pseudocode
         | approach to "how would you solve this problem algorithmically"
         | intended to show the interviewer your thought process, rather
         | than the more common "do you know how to do fizzbuzz" type
         | thing to check a box.
         | 
         | I had an interview like this recently that was quite pleasant,
         | where the interviewers and I ended up collaboratively solving
         | the problem together because we all had different approaches -
         | I think it had the unintended effect of demonstrating teamwork
         | and helped the interview go quite positively.
        
       | hijinks wrote:
        | As someone who has been remote for 10 years now and has
        | interviewed a lot of people:
        | 
        | You can 100% tell when someone is reading off a screen and not
        | looking at you during an interview via webcam.
        
         | michael_mroczka wrote:
         | I'm not sure if you read the post, but with some of the new
         | cheating tools that exist, they overlay the GPT responses in
         | front of your screen with concise bullet points. You wouldn't
         | even need to look away from your screen or interviewer to
          | cheat. The bullet points are also small enough that it is
          | incredibly difficult to tell that someone is reading anything -
          | even if they have a webcam enabled and are looking right at
          | you. Couple this with interviewers who don't care much about
          | the process, and it is getting easier for cheaters to slip
          | into places for sure!
        
       | nine_zeros wrote:
       | I've seen people use chrome extensions like leetbuddy.
        
       | qweqwe14 wrote:
       | The core problem with interviews is that it's basically
       | impossible to tell how well someone's going to perform on the
        | job. It's always been possible to grind leetcode or whatever and
        | make it _look like_ you know what you're talking about; by using
        | the model, people can just skip that part entirely.
       | 
       | Not to mention the fact that some interviewers feel obliged to
       | ask useless cliche questions like "why do you think you are a
       | good fit for this position" yada yada.
       | 
        | I won't be surprised if picking people at random (provided they
        | meet basic requirements) turns out to be statistically better
        | than bombarding them with questions trying to determine if they
        | are good enough. It really feels like we are trying to find a
        | pattern in randomness at that point.
       | 
       | Bottom line is that if ChatGPT is actually a problem for the
       | interview process, then the process is just broken.
        
         | svachalek wrote:
         | I think the difference is effort. If someone actually bothers
         | to go grind LeetCode for a couple weeks before the interview,
         | then they have demonstrated some form of persistence and work
         | ethic at a minimum. Someone slow rolling questions with ChatGPT
         | is demonstrating pretty much the opposite.
        
         | adamredwoods wrote:
         | Without a dedicated bar exam, we have little to vet hires
         | against. Everyone is a senior engineer, until they're not.
         | 
         | I think the next evolution of technical interviews will be
         | hands-off, talking through problems where the criteria changes
         | on the fly, to prevent typing while talking.
        
         | abathur wrote:
         | There are also some of us who are just not great at
         | demonstrating intelligence by narrating our thought process
         | while under an adversarial spotlight with a timer running.
         | 
         | I realize there are time/resource problems on the interviewing
         | side, but I'd be happy to have conversations that are as long
         | and technical as it takes for an interviewer to feel like
         | they've found bedrock.
         | 
         | Whether they pass me to the next phase or not, it's frustrating
         | to spend 30 minutes or 3 hours trying to start a fire by
         | rubbing wet twigs together and never get to walk away feeling
         | like I've communicated more than a few percent of what I bring
         | to the table.
        
       | OnionBlender wrote:
       | I've wondered about cheating with a friend who (out of earshot,
       | but able to hear the call) types in the question and displays
       | the result on a screen the interviewee can see. I often get
       | stuck on
       | leetcode problems and simple hints like "O(n), prefix sum" can
       | make a huge difference. Especially if I haven't seen the problem
       | before or am having a brain fart.
       | 
       | I would still need to get good at leetcode, just not _as_ good.
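For readers unfamiliar with the "prefix sum" hint mentioned above: the idea is to precompute running totals so that any subarray sum becomes an O(1) lookup after O(n) setup. A minimal sketch in Python (the array and query values are illustrative, not from any actual interview question):

```python
# Prefix sums: pre[i] holds the sum of nums[0..i-1], so the sum of any
# inclusive subarray nums[l..r] is pre[r + 1] - pre[l].

def build_prefix(nums):
    pre = [0]
    for x in nums:
        pre.append(pre[-1] + x)
    return pre

def range_sum(pre, l, r):
    # Sum of nums[l..r], inclusive on both ends.
    return pre[r + 1] - pre[l]

nums = [3, 1, 4, 1, 5, 9]
pre = build_prefix(nums)
print(range_sum(pre, 1, 3))  # 1 + 4 + 1 = 6
```

Each query is then a single subtraction, which is exactly the kind of observation a one-line hint can unlock.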
        
         | blharr wrote:
         | Yea, it's always been possible to cheat - also through
         | searching Google. It's just that now you can use ChatGPT, and
         | it's a lot easier for someone to do so.
        
       | locallost wrote:
       | Personally I am waiting for deep faked video chats with chatgpt
       | generating the answers. And maybe even questions.
        
         | jacques_chester wrote:
         | ChatGPT gushes apologies if you contradict its answers; that
         | would probably be a reliable tell for now.
        
       | tetha wrote:
       | Hm, interesting. To me, team fit, curiosity and, depending on the
       | level of seniority I'm looking for, an impression of experience
       | are the most important things in an interview.
       | 
       | The latter might look like you could fake it with ChatGPT, but
       | it'd be hard. For example, some time ago I was interviewing an
       | admin with a bit of a monitoring focus and.. it's hard to
       | replicate the amount of trust I gained in the guy when he was
       | like "Oh yeah, munin was ugly AF but it worked.. well. Now we
       | have better tech".
       | 
       | I guess that's consistent with the article?
        
         | doubled112 wrote:
         | Real world experience sometimes comes across better like that
         | than in technical Q&A.
         | 
         | One time in an interview they asked how I felt about systemd.
         | At first I thought it was a technical question, but quickly
         | realized he was just probing to see if we'd get along.
         | 
         | I got a job offer that night.
        
           | ekimekim wrote:
           | I think asking "controversial" tech questions like that can
           | be a great signal, because it steers the conversation towards
           | features of the system and discussion of tradeoffs. If the
           | question is good, then it shouldn't matter what the answer is
           | - the fact they HAVE an opinionated answer and arguments to
           | back it up is the point.
        
             | tetha wrote:
             | Yeah, and systemd is an excellent example there.
             | 
             | I can totally understand the issues of unification there,
             | and very much understand issues with Poettering's
             | perfectionist attitude to some issues. But do you know how
             | much time I've spent on shitty, arcane, hand-crafted init-
             | scripts?
             | 
             | Containers as a whole would be another great question
             | there. I have a certain class of applications I wouldn't
             | want to run without a container orchestration anymore after
             | a certain scale. But on the other hand, I do have a bunch
             | of systems I'd almost never want to run as containers for
             | serious data.
        
           | dash488 wrote:
           | I asked this same question in an interview to an ex Redhat
           | employee interviewing for a Linux Admin role and their answer
           | was that they didn't know what SystemD was.
           | 
           | I think overall this is a great question to suss out if
           | someone is qualified for a role.
        
       | nottorp wrote:
       | Well, i've started to use ChatGPT instead of google when looking
       | for quickie examples for something. Mainly because of how bad
       | google has become.
       | 
       | It works fine for stuff like "give me a tutorial on how to
       | initialize $THING and talk to it" or "how do i set
       | $SPECIFIC_PARAMETER for $THING up".
       | 
       | Where it seems to fail is when you ask "how do i set $X" and the
       | answer is "you can't set $X from code". I got some pretty wild
       | hallucinations there. At least from the free ChatGPT.
       | 
       | So maybe add a trick question where the answer is "it can't be
       | done"? If you get hallucinations back, it should be clear what is
       | up.
       | 
       | Edit: not that I'm a fan of leetcode interviews. But then to get
       | a government job in medieval China you had to be able to write
       | essays based on Confucius. Seems similar to me.
        
         | huytersd wrote:
         | There's your problem. GPT4 is an order of magnitude better than
         | the free version, there's no comparison.
        
         | blharr wrote:
         | The problem is it's difficult to track what ChatGPT can and
         | can't do. One day it'll give you junk, and then an update or a
         | different prompt might fix that problem.
        
           | ilc wrote:
           | That's why you look at what it says with a critical eye.
           | 
           | I think of ChatGPT like a pretty smart co-worker. Just
           | because they are smart doesn't mean they are always right.
        
             | nottorp wrote:
             | I'm using it mostly instead of API documentation. And for
             | stuff I already have an idea of.
             | 
             | Don't trust it further than that.
        
               | ilc wrote:
               | There's times when there's no good docs.... ChatGPT can
               | give you a spot to start from, even if it is wrong.
               | 
               | Trusting it blindly is stupid. But, so is trusting any
               | sources without verification.
        
           | madeofpalk wrote:
           | There's also a lot of junk answers on Stack Overflow.
        
       | lkdfjlkdfjlg wrote:
       | I've tried a couple of times using ChatGPT on a coding
       | assignment (because.... if I can NOT do it, better right?) and
       | both times I got garbage and ended up doing the coding
       | assignment myself.
        
       | rvz wrote:
       | This conclusively tells us that the Leetcode grind has (without
       | any dispute) been gamed into the ground and is no longer an
       | accurate measure of exceptional performance in the role. Even
       | the interviewers would struggle with the questions themselves.
       | 
       | Why waste each other's time in the interview when I (if I was the
       | interviewer) can just ask for relevant projects or commits on
       | GitHub of a major open source project and that eliminates the 90%
       | of candidates in the pool.
       | 
       | I don't need to test you if you have already made significant
       | contributions in the open. Easy assumptions can be made with the
       | very least:
       | 
       | * Has knowledge of Git.
       | 
       | * Knows how to code in X language in a large project.
       | 
       | * Has done code reviews on other people's code.
       | 
       | * Is able to maintain a sophisticated project with external
       | contributors.
       | 
       | Everything else beyond that is secondary or optional and it's a
       | very efficient evaluation and hard to fake.
       | 
       | When there are too many candidates in the pipeline, Leetcoding
       | them all is a waste of everyone's time. Overall, leetcode
       | optimizes for being gamed and is now a solved problem for
       | ChatGPT.
        
         | randmeerkat wrote:
         | > ...can just ask for relevant projects or commits on GitHub of
         | a major open source project and that eliminates the 90% of
         | candidates in the pool.
         | 
         | Get your hiring done now while you can, when the economy
         | rebounds you won't be able to hire anyone. Also give your team
         | a raise, because they'll probably be the first to go once new
         | options open up.
        
         | bpye wrote:
         | Not everyone works in the open. I do have open source side
         | projects and contributions I've made on my own time - but
         | almost everything I've done at work is closed source.
        
         | leononame wrote:
         | > can just ask for relevant projects or commits on GitHub of a
         | major open source project and that eliminates the 90% of
         | candidates in the pool
         | 
         | Not everyone spends their free time contributing (to major
         | nonetheless) to open source projects. There are a lot of great
         | engineers that have enough work on their desks with their day
         | job and there are also plenty of idiots in open source.
         | 
         | Asking for relevant projects or asking for GitHub profiles to
         | gauge relevant projects yourself is what people were already
         | doing years ago and it wasn't a great hiring strategy. Turns
         | out judging a software engineer's skills is extremely hard.
        
           | angarg12 wrote:
           | This. Focusing your hiring on open source contributions
           | biases the process and misses huge slices of the software
           | engineering population.
           | 
           | I did the best work of my life (by a long, long shot) at
           | private companies, closed source.
        
         | vkou wrote:
         | If you want to eliminate 90% of candidates in a pool, a simpler
         | solution is to take your stack of resumes, and shred the top
         | 90% of them.
        
         | wkirby wrote:
         | > is no longer an accurate measure of exceptional performance
         | in the role
         | 
         | It never was. No real-world job performance has ever been
         | accurately measured by solving leetcode puzzles for one simple
         | reason: problem solving is only ever going to be about 50% of
         | your performance, and these puzzles don't address collaboration
         | or communication skills.
        
         | rmbyrro wrote:
         | It'd be very easy to game open contributions.
        
         | ProjectArcturis wrote:
         | >relevant projects or commits on GitHub of a major open source
         | project and that eliminates the 90% of candidates in the pool.
         | 
         | I have 20 years experience in very high level data science
         | work. I do not have a public git repo because I've worked at
         | for-profit companies and I don't do additional free work in my
         | spare time.
        
       | andrewstuart wrote:
       | - Using ChatGPT is not cheating.
       | 
       | - Using an IDE is not cheating.
       | 
       | - Using StackOverflow is not cheating.
       | 
       | - Reading the documentation is not cheating.
       | 
       | I would expect candidates for programming jobs to demonstrate
       | first class ChatGPT or other code copilot skills.
       | 
       | I would also expect them to be skilled in using their choice of
       | IDE.
       | 
       | I would expect them to know how to use Google and StackOverflow
       | for problem solving.
       | 
       | I would expect programmers applying for jobs to use every tool at
       | their disposal to get the job done.
       | 
       | If you come to an interview without any AI coding skills you
       | would certainly be marked down.
       | 
       | And if I gave you some sort of skills test, then I would expect
       | you to use all of your strongest tools to get the best result you
       | can.
       | 
       | When someone is interviewed for a job, the idea is to work out
       | how they would go _doing the job_, and doing the job of
       | programming means using AI copilots, IDEs, StackOverflow, Google,
       | github, documentation, with the goal being to write code that
       | builds stuff.
       | 
       | It's ridiculous to demonise certain tools - and for what
       | reason? Prejudice? Fear? Lack of understanding?
       | 
       | There's this idea that when you assess programmers in a job
       | interview they should be assessed whilst stripped of their
       | knowledge tools - absolute bunk. If your recruiting process
       | strips candidates of knowledge tools, then you're holding it
       | wrong.
        
         | jddj wrote:
         | The "would" suggests the latter, but are you in this position
         | or is this hypothetical?
        
           | andrewstuart wrote:
           | I don't understand what you are asking. Are you asking if I
           | am qualified to comment on this topic? I think so yes I have
           | relevant experience in recruiting and programming and job
           | hunting.
        
             | azemetre wrote:
             | They're asking if you're a hiring manager at a company that
             | does a lot of interviews.
             | 
             | We all see people commenting how much leetcode sucks and
             | how it's not realistic, but companies that pay good money
             | still asks leetcode regardless of what the general SWE
             | public thinks.
             | 
             | The only public companies I know that give hiring managers
             | a lot of leeway in deciding their subordinates are Netflix
             | and Apple.
        
             | jddj wrote:
             | I didn't mean any offense. As the sibling comment suggests,
             | it wasn't about whether you were qualified to have an
             | opinion but rather clarifying what your opinion might be
             | representative of.
             | 
             | The comment reads differently from an applicant's point of
             | view Vs that of a hiring manager.
        
         | TheNorthman wrote:
         | > If you come to an interview without any AI coding skills you
         | would certainly be marked down.
         | 
         | And I, in turn, would be delighted not to work for you.
        
         | penjelly wrote:
         | > Using ChatGPT is not cheating.
         | 
         | I'd argue that the way it's being used is. The audio is
         | automatically picked up from the conversation, and it starts
         | generating a response with zero user input. I've seen users
         | simply read off what their screen says in those cases, which
         | is most definitely _not_ what an interviewer expects from you.
         | Using ChatGPT as a tool on top of your existing skills is
         | fine; it requires input and intelligent direction from the
         | interviewee. This is not that.
        
         | rmbyrro wrote:
         | I strongly disagree.
         | 
         | Your ability to use ChatGPT effectively is highly dependent on
         | your technical competence.
         | 
         | The interview is meant to measure your acquired competence,
         | because this is the harder part. Learning to leverage that
         | competence using ChatGPT is very easy.
         | 
         | I'd rather have a developer on my team that demonstrates high
         | technical competence than one that is GPT-skilled, but doesn't
         | know what questions to ask GPT nor how to judge its responses.
        
         | remus wrote:
         | > There's this idea that when you assess programmers in a job
         | interview they should be assessed whilst stripped of their
         | knowledge tools - absolute bunk. If your recruiting process
         | trips candidates of knowledge tools then you're holding it
         | wrong.
         | 
         | I think this makes a lot of sense, but regardless, if the
         | interviewer has specified that you shouldn't be using tools to
         | help you, then it is deceptive and unfair to do so.
        
         | foolfoolz wrote:
         | i agree and tell candidates this. "you can use google, chatgpt,
         | and any tool available to you as you would during the job"
         | 
         | if your questions can be answered by chatgpt (or google), you
         | are asking the wrong questions
        
           | jurynulifcation wrote:
           | "Can" or "can't"?
        
         | jurynulifcation wrote:
         | Where do you interview for? I'm sure people who don't want to
         | compete with GPT script kiddies would love to know so they can
         | steer clear, while for others this is a strong positive signal
         | that there's a jobs program for GPT meat copiers.
        
         | clbrmbr wrote:
         | > I would expect candidates for programming jobs to demonstrate
         | first class ChatGPT or other code copilot skills.
         | 
         | Agree.
         | 
         | But two challenges: if the interviewer does not make it clear
         | that ChatGPT/SO may be used, the typical assumption is that
         | such use is not permitted and would be cheating.
         | 
         | Moreover, coding challenges are typically designed for humans.
         | We may need to design new kinds of interview questions and
         | methods for humans augmented by AI.
        
         | Aurornis wrote:
         | > - Using ChatGPT is not cheating.
         | 
         | > - Using an IDE is not cheating.
         | 
         | > - Using StackOverflow is not cheating.
         | 
         | > - Reading the documentation is not cheating.
         | 
         | That's not how any form of testing works.
         | 
         | The person taking the test doesn't get to determine the
         | parameters of the test. Imagine a college student pulling out
         | their cellular phone and looking up Wikipedia during their
         | final because "Wikipedia is not cheating".
         | 
         | The test is also supposed to be administered to everyone on
         | equal footing. If some candidates are substituting their own
         | definition of cheating then they're putting everyone else at a
         | disadvantage.
         | 
         | It doesn't matter what _you_ expect or how _you_ would
         | interview someone. When you participate in someone else's
         | interview, you play by their rules. You don't substitute your
         | own.
        
       | p0w3n3d wrote:
       | Two-camera interview. One from the laptop, another from behind
       | the interviewee's head, showing the whole screen.
        
       | p0w3n3d wrote:
       | Also, AI will make us dumb. Those of us who decide to use AI
       | extensively will get lazy, and the brain removes the lazy parts
       | of knowledge as they are no longer needed. Meanwhile, AI will
       | learn from an internet made only of AI-generated text, which as
       | we know causes AI models to deteriorate. Nobody will write
       | anything. Society collapses. We admire and worship the big
       | computer and the man who can fix it. Basically a Wizard of Oz
       | scenario.
        
       | jawr wrote:
       | Is it really cheating if they're allowed to use online tools for
       | their day to day?
        
       | ngneer wrote:
       | I am fortunate to be in a field that AI has not caught up with. I
       | interview security researchers. Would ChatGPT spot a
       | vulnerability in a function it has never seen before?
        
       ___________________________________________________________________
       (page generated 2024-01-31 23:00 UTC)