[HN Gopher] Unskilled and unaware: Misjudgments rise with overconfidence in low performers
___________________________________________________________________
Unskilled and unaware: Misjudgments rise with overconfidence in low
performers
Author : aiNohY6g
Score : 46 points
Date : 2024-06-17 19:07 UTC (3 hours ago)
(HTM) web link (www.frontiersin.org)
(TXT) w3m dump (www.frontiersin.org)
| aiNohY6g wrote:
| Had to shorten the original title, which is: "Unskilled and
| unaware: second-order judgments increase with miscalibration for
| low performers"
| throwaway48476 wrote:
| The edited title is more accurate.
| contingencies wrote:
| "Experts, trying to learn, criticize those actually learning"
| leshokunin wrote:
| "Overestimation and miscalibration increase with a decrease in
| performance"
|
| "Common factor: participants' knowledge and skills about the task
| performed."
|
| I understand the corporate use case: justifying the impact of low
| performers and quantifying the potential results.
|
| Still, this kind of research feels tautological. It'd be
| surprising if anyone actually wondered whether adding more low
| performers helped anything.
|
| Even in tasks that require no skill, adding a person who isn't
| performing means they won't perform well.
| surfingdino wrote:
| You cannot increase the number of wits by multiplying half-
| wits.
| psunavy03 wrote:
| https://s3.amazonaws.com/theoatmeal-
| img/comics/idiocy/oatmea...
| austin-cheney wrote:
| The problem in software is not that the Dunning-Kruger effect
| exists, but how frequently it occurs and how poorly that frequency
| matches the assumptions of Dunning-Kruger-related research.
|
| Most Dunning-Kruger-related experiments make a glaring assumption:
| that test results are distributed evenly enough to divide them
| into equal-count quartiles, and that the resulting groups are,
| within a margin of error, both evenly sized and evenly distributed
| across the score range.
|
| That is fine for some experiments, but what happens in the real
| world when those assumptions no longer hold? For example, what
| happens when there is a large sample size and 80% of the tested
| population fails the evaluation criteria? The resulting quartiles
| are three different grades of failure and one segment of
| acceptable performance. There is no way to account for the
| negative correlation demonstrated by high performers, and the
| performance difference between the three failing quartiles is
| largely irrelevant.
|
| Fortunately, software leadership is already aware of this problem
| and has happily solved it by simply redefining the tasks required
| to do the work and leaning heavily on external abstractions. In
| other words, simply rewrite the given Dunning-Kruger evaluation
| criteria until enough people pass. The problem is that this
| entirely ignores the conclusions of Dunning-Kruger: if almost
| everybody can now pass the test, then the majority of the
| population is over-confident.
| Joel_Mckay wrote:
| "software leadership is already aware of this problem"
|
| What makes you so sure? In general, most security
| certifications HR gets excited about aren't worth the paper
| they are printed on.
|
| Process people by their very nature are an unsustainable part
| of a poisoned business model.
|
| The other misconception is that a group of persistent,
| well-funded, knuckle-dragging troglodytes is somehow less likely
| to discover something Einstein overlooked.
|
| https://en.wikipedia.org/wiki/Illusion_of_control#By_proxy
| bluSCALE4 wrote:
| I've had the opposite problem. I'm a front-end dev and have
| worked with a lot of full-stack people: none that I really
| respect. I recently came across a really personable one, but in
| the end he suffered from the same issue: he believes his acquired
| knowledge as a backend dev transfers over to full stack. I have
| my own flaws but am very self-aware: I don't implement anything
| shiny unless I thoroughly review DOM validity, responsiveness,
| accessibility, and finally functionality. Most people only review
| functionality, and it's sad.
| adobkin wrote:
| Suppose you give a test to a room full of perfectly average
| B-grade students who know they are average B-grade students. Most
| will get a B but a few will do a little bit better and a few will
| do a little bit worse.
|
| Now, you focus in on everyone who got a C and you find that
| everyone who got a C estimated themselves as a B student. From
| this you conclude that low performers overestimate their ability.
|
| Then you look at the A students and find that they all also
| thought they were B students. You conclude that high performers
| underestimate their ability.
|
| But this is just a statistical artifact! It's called regression
| to the mean, and this study does not account for it. If you
| isolate the low performers out of a larger group, you will pretty
| much always find that they expected to do better (and they were
| right to expect it). You are just doing statistics wrong!
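|
| A minimal sketch of that artifact (all numbers invented): give
| identical, perfectly calibrated B students a noisy test, then
| condition on the observed score:
|
|   import numpy as np
|
|   # Everyone has identical B-level ability and predicts exactly
|   # that; only test noise separates their scores.
|   rng = np.random.default_rng(0)
|   n = 100_000
|   ability = 85.0
|   predicted = np.full(n, ability)         # calibrated predictions
|   scores = ability + rng.normal(0, 5, n)  # ability plus noise
|
|   low, high = scores < 80, scores >= 90   # "C" and "A" scorers
|
|   # Low scorers look over-confident and high scorers look
|   # under-confident, despite perfectly calibrated predictions.
|   print("C group: score", round(scores[low].mean(), 1),
|         "predicted", predicted[low].mean())
|   print("A group: score", round(scores[high].mean(), 1),
|         "predicted", predicted[high].mean())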
___________________________________________________________________
(page generated 2024-06-17 23:01 UTC)