[HN Gopher] Hackerrank was broken - but now it's harmful
___________________________________________________________________
Hackerrank was broken - but now it's harmful
Author : lunarcave
Score : 39 points
Date : 2024-11-27 20:54 UTC (2 hours ago)
(HTM) web link (segfaulte.mataroa.blog)
(TXT) w3m dump (segfaulte.mataroa.blog)
| yieldcrv wrote:
| I think it's time for the federal government to get involved,
| and it already has leverage.
|
| If your company benefits or plans to benefit from QSBS tax
| treatment, then evaluate engineers on things they'll be doing in
| the company:
|
| the application development process, sprint ceremonies,
| asynchronous communication and tooling
|
| they won't be implementing data structures often, and if they
| are reinventing the wheel they are wasting company time. If your
| company needs a more efficient data structure for its super
| scalable problem, that should trigger a process audit, to be
| honest.
| 123yawaworht456 wrote:
| looking forward to Leetcode Prohibition Act of 2029
| henry2023 wrote:
| First let them get rid of non-compete agreements. Haha.
| jitl wrote:
| I don't understand how QSBS has anything to do with interview
| practices. Are you saying the IRS should somehow audit any QSBS
| company to make sure they're _not_ implementing any data
| structures or algorithms? Like, new companies must be audited
| if they try to use computer science?
| jorblumesea wrote:
| Don't onsites or virtual onsites also have coding rounds? It
| would be pretty easy to figure out who is using LLMs and similar
| tools.
| rich_sasha wrote:
| IME you screen people online, get some reasonable candidates,
| get them onsite and they can't code to save their life. But
| it's your problem then.
| naet wrote:
| The author says "whiteboard tests" are broken, but it seems like
| they're arguing that online coding assessments are broken, not in
| person interviews using an actual whiteboard.
|
| Doing an in person interview on a whiteboard sidesteps the AI
| issue. As someone who's done a large number of remote interviews,
| there are some clear signs that some candidates try to cheat
| online tech interviews. I wonder if the trend will fuel more of
| a return to the office, or at least a return to in-person
| interviewing at more companies.
| bsder wrote:
| > or at least a return to in-person interviewing for more
| companies.
|
| This has been broken for a while now, and companies _still_
| haven't reset to deal with it. The incentives to the contrary
| are too large.
| unavoidable wrote:
| The disincentives are huge though. Hiring a bad employee is
| very expensive, and a bad hire is hard to get rid of.
| ipaddr wrote:
| Isn't it as simple as going on a PIP at FAANGs, a short
| conversation with the founder at a startup, and a few weeks'
| notice pay?
| viraptor wrote:
| That comes after the decision that you can't fix the
| situation, which comes after you discovered that the hire
| was bad, which comes after a number of _visible_
| failures. That's a lot of wasted time/effort, even if
| the firing itself is simple.
| gopher_space wrote:
| The cost of hiring, firing, and rehiring approximates the
| position's yearly salary.
| jamesfinlayson wrote:
| Depends on the country I think - in Australia at least it
| seems like you can sue for unfair dismissal if you're
| angry about being kicked out, so HR departments only seem
| to get rid of someone as a last resort.
| deprecative wrote:
| In my area they just tell you to leave. No warning. No
| severance. Midwest US.
| ipaddr wrote:
| Is using AI cheating when it's part of the job now? Is not
| using AI signalling inexperience in the LLM department?
| finnthehuman wrote:
| Yes, obviously. Cheating is subverting the tester's intent and
| being dishonest about it, not just whatever a lawyer can
| weasel-word their way around.
| gopher_space wrote:
| It's not dishonest, it's just business. I'm under the exact
| same burden of truth as the company interviewing me; zilch.
| IshKebab wrote:
| It's cheating if you don't say you're using it.
| hmottestad wrote:
| At some point I assume that it'll be so normal that you'll
| almost have to say when you're not using it.
|
| I don't need to say that I'm using a text editor instead of
| hole-punched cards. It's also quite common to use an IDE
| instead of a plain text editor these days in coding interviews.
| When I was a student, I remember teachers saying that they
| considered using an IDE cheating, since they wanted to test our
| ability to remember syntax and to keep a mental picture of our
| code in our heads.
| chefandy wrote:
| I wonder if OpenAI/Google/Microsoft, et al. would hire a
| developer who leaned heavily on ChatGPT, etc. to answer
| interview questions. Not that I expect them to have ethical
| consistency when there are much more important factors
| (profit) on the table, but after several years of their
| marketing pushing the idea that these are 'just tools' and that
| the output is tantamount to anything manually created by the
| prompter, it would look pretty blatantly hypocritical if they
| didn't.
| zamalek wrote:
| Amazon uses Hackerrank and explicitly says not to use LLMs.
| In that case it would be cheating. However, given that
| everyone is apparently using it, I now feel dumb for not
| doing so.
| deprecative wrote:
| They made tools to make us redundant and are upset we're
| forced to use those tools to be competitive.
| ChrisMarshallNY wrote:
| That's actually a valid question. It looks like it was an
| unpopular one.
|
| Personally, I despise these types of tests. In 25 years as a
| tech manager, I never gave one, and never made technical
| mistakes (but did make a number of personality ones; great
| technical acumen is worthless if they collapse under
| pressure).
|
| But AI is going to be a ubiquitous tool, available to pretty
| much everyone, so testing for people who can use it is quite
| valid. Results matter.
|
| But don't expect to have people on board who can operate
| without AI. That may be perfectly acceptable. The tech scene
| is so complex these days that not one of us can actually
| hold it all in our head. I freely admit to having powerful
| "google-fu" when it comes to looking up solutions to even
| very basic technical challenges, and I get excellent results.
| paxys wrote:
| Copy-pasting code from ChatGPT doesn't mean you have any kind
| of understanding of LLMs.
| scarface_74 wrote:
| If your coding assessment can be done with AI and the code that
| the candidate is expected to write can't be, doesn't that by
| definition mean you are testing for the wrong thing during your
| coding interview?
| LeftHandPath wrote:
| I recall having to implement A* to search an n x n character
| grid in my AI course a few years ago. It took me close to a full
| day to wrap my head around the concepts, get used to Python (we
| usually worked in C++), and actually implement the algorithm.
| Nowadays, an LLM can spit out a working implementation in
| seconds.
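|
| (For illustration, here's roughly the kind of grid A* an LLM
| will happily produce on request; treat it as a minimal sketch.
| The '#'-for-wall convention and 4-way movement are assumptions
| for the example, not details from the actual assignment.)
|
|   import heapq
|
|   def astar(grid, start, goal):
|       # A* over a 2D character grid. Assumes '#' marks walls and
|       # that movement is 4-way; start/goal are (row, col) tuples.
|       rows, cols = len(grid), len(grid[0])
|       h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
|       heap = [(h(start), 0, start)]    # entries are (f, g, cell)
|       parent, best = {}, {start: 0}
|       while heap:
|           f, g, cell = heapq.heappop(heap)
|           if cell == goal:
|               path = [cell]            # walk parents back to start
|               while cell in parent:
|                   cell = parent[cell]
|                   path.append(cell)
|               return path[::-1]
|           if g > best[cell]:
|               continue                 # stale heap entry
|           for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
|               nxt = (cell[0] + dr, cell[1] + dc)
|               if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
|                       and grid[nxt[0]][nxt[1]] != '#'
|                       and g + 1 < best.get(nxt, float('inf'))):
|                   best[nxt] = g + 1
|                   parent[nxt] = cell
|                   heapq.heappush(heap, (g + 1 + h(nxt), g + 1, nxt))
|       return None                      # goal unreachable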
|
| I think that's a big part of the issue with tests like Hackerrank
| - LLMs have been trained on a lot of the DSAs that those
| questions try to hit. Whereas, if you ask an LLM for a truly
| novel solution, it's much more likely to spit out a garbled mess.
| For example, earlier today, Google's search AI gave me this
| nonsense example of how to fix a dangling participle:
|
| > To correct a dangling participle, you can revise the sentence
| to give the dangling modifier a noun to modify. For example, you
| can change the sentence "Walking through the kitchen, the smoke
| alarm was going off" to "Speeding down the hallway, he saw the
| door come into view".
|
| LLMs have effectively made it impossible to remotely test
| candidates for crystallized intelligence (e.g. remembering how to
| write specific DSAs quickly). Maybe the best solution would be to
| measure fluid intelligence instead, or front-load on
| personality/culture assessments and only rigorously assess coding
| ability in-person towards the end of the interview cycle.
| badgersnake wrote:
| So if I asked you how A* works in an interview, you would be
| able to explain it. Johnny ChatGPT would not.
| deprecative wrote:
| In most cases it really doesn't matter. You wanted A* and you
| got it. Understanding isn't important if the product works.
| nkrisc wrote:
| Seems a problem that will sort itself out if companies are in
| fact hiring under-qualified cheaters.
| acjohnson55 wrote:
| The dirty secret of hiring processes is that the main goal of the
| earliest stages is to get from a lot of applications to something
| less than a lot, _not_ to screen for fit. This has long been
| true, and it is now super true.
|
| Every job posting gets flooded with hundreds of applicants, if
| not into the thousands. Most of those people are coming through
| the front door with no one to vouch for them and probably nothing
| on their resume that makes them a must-see candidate.
|
| Most managers and HR teams probably don't even explicitly think
| about it this way, but by pure pragmatism have evolved processes
| that act as flow control.
|
| The unacknowledged result is that the company will reject 90+% of
| applicants, regardless of fit, under the assumption that the
| filtering process will allow enough good people into the actual
| interview rounds that the team will be able to find someone they
| want. From this perspective, Hackerrank is not broken, it's doing
| exactly what is required of it by companies.
|
| I say all this because people who are in job search processes
| should frame the process accurately in their mind. It hopefully
| will help with not taking the process so personally or not
| getting so infuriated with it. It may also help you strategize
| how to find your way into the roles you want, if crushing these
| tests isn't your strong suit. People who are vouched for get to
| bypass all of this. The more confidently you are vouched for by
| a trusted party, the more benefit of the doubt you get in the
| hiring process.
|
| One might ask: is there a better way to do this? Probably so. But
| if it were easy, it would already exist.
| drjasonharrison wrote:
| There would need to be an easier and cheaper way to filter
| applications.
|
| One option for candidates is networking, which gets you in
| through the "vouched" side door.
|
| This potentially means that the company should be encouraging
| employees with more than financial incentives to find
| candidates and recommend them. This means networking workshops,
| time swaps for attending networking/recruiting events,
| understanding the need to make synchronous contact with people
| who might be good candidates.
|
| If you are interviewing based on what you do and on the job
| posting (which has been mutated to get through HR's posting
| requirements), but don't understand why and how you should be
| interviewing, you are more likely to bring your past experience
| and biases to the interview. This is bad for your company and
| for candidates.
| mewpmewp2 wrote:
| From a hiring perspective, all of that just sounds like tons
| more work to find any candidates.
| deprecative wrote:
| The end target is automation. The cost of doing business is
| the cost of doing business until then.
| scarface_74 wrote:
| And then if you don't know the "right people" you will never
| get a job.
|
| It's like VCs who want to pattern-match for someone who looks
| like Zuckerberg.
| alephxyz wrote:
| >Since everyone started using AI, more candidates started
| clearing the first round with flying colors. The platforms had to
| recalibrate to let in their target percentage.
|
| Is there any proof or data on this? Not saying it's wrong but I'm
| curious how big the effect actually is.
| Eumenes wrote:
| I screen candidates all the time (I'm not involved in technical
| rounds) and notice ChatGPT cheating all the time, but it's easy
| to spot because the candidates read off answers in a mechanical
| way right after a flurry of typing.
| clark010 wrote:
| AI is likely to shape the future, gradually embedding itself
| across industries, fields, processes, tools, and systems. Similar
| to the introduction of electricity, where identifying practical
| applications was inevitable, trial and error will be an essential
| part of its evolution. While concerns about its impact are
| understandable, it's important to recognize that each generation
| often worries about the capabilities or standards of the next.
| However, such concerns are typically rooted in current frameworks
| of thought, which may not remain relevant in the future.
| swatcoder wrote:
| Right now, generative AI looks to be more like microwave ovens
| than electricity -- usefully novel but not obviously
| revolutionary or all-pervasive.
|
| Even with the remarkable convenience and new opportunities that
| microwave ovens brought to kitchens, it still made a lot of
| sense for commercial kitchens and culinary schools to keep
| focused on more traditional fundamentals and their mastery when
| hiring and training candidates.
|
| Same applies to our field for the foreseeable future.
| paxys wrote:
| Is it already that time of the week?
|
| People were complaining about whiteboard coding interviews in the
| 1990s and they are complaining about it today. Meanwhile the tech
| industry has managed to hire _millions_ of incredibly smart
| people and done pretty well for itself in that period. The
| interview process isn't going to change. There is no reason for
| it to change. Your options are to either suck it up and brush up
| on basic data structures and algorithms or interview at places
| that don't test for it. Just don't hold your breath waiting for
| the world to change for you.
| cute_boi wrote:
| Please bring back in-person interviews. I am sick and tired of
| cheaters. They rig everything...
___________________________________________________________________
(page generated 2024-11-27 23:01 UTC)