[HN Gopher] AI is Dunning-Kruger as a service
___________________________________________________________________
AI is Dunning-Kruger as a service
Author : freediver
Score : 139 points
Date   : 2025-11-07 21:44 UTC (1 hour ago)
(HTM) web link (christianheilmann.com)
(TXT) w3m dump (christianheilmann.com)
| cwmoore wrote:
| What a great title.
| FloorEgg wrote:
| Since Dunning-Kruger is the relationship between confidence and
| competence as people learn about new subjects (from discovery
| to mastery), if AI is "Dunning-Kruger as a Service" it's
| basically "Education as a service".
|
| However, foolish people accepting incorrect answers because
| they don't know better is actually something else. Dunning-
| Kruger doesn't really have anything to do with people being fed
| and believing falsehoods.
| xvector wrote:
| You can't simultaneously expect people to learn from AI when
| it's right, and magically recognize when it's wrong.
| tpmoney wrote:
| But you can expect to learn in both cases. Just like you
| often learn from your own failures. Learning doesn't
| require that you're given the right answer, just that it's
| possible for you to obtain the right answer.
| piker wrote:
| Hopefully you're mixing chemicals, diagnosing a personal
| health issue or resolving a legal dispute when you do
| that learning!
| BolexNOLA wrote:
| I'm not sure how much I agree with calling them "foolish
| people." A big part of the problem is LLMs act incredibly
| confident with their answers and if you're asking about
| something you don't know a ton about, it's very difficult
| sometimes to spot what is incorrect. That's not being a fool,
| that's just not being able to audit everything an LLM might
| spit back at you. It also doesn't help that none of these
| companies are honest about the quality of the results.
|
| We could hand wave this away with "well don't ask things you
| don't already know about," but these things are basically
| being pitched as a wholesale replacement for search engines
| and beyond. I look up things I don't know about all the time.
| That's kind of what we all use search for most days lol.
|
| It's a little too caveat emptor-adjacent (I hope that makes
| sense?) for my taste
| marcosdumay wrote:
| It's foolish to take any LLM answer as a true fact.
|
| Those people may not be dumb, but there's no doubt they are
| being fools.
| BolexNOLA wrote:
| I'd say there's a difference between "being a fool" and
| "being fooled." Just because I fool you doesn't mean
| you're a fool. I don't know why you're so eager to put
| down people like this rather than at least somewhat
| acknowledge that these companies and tools bear some of
| the responsibility.
|
| I don't think it's fair to expect every person who uses
| an LLM to be able to sniff out everything it gets wrong.
| pdonis wrote:
| _> Dunning-Kruger is the relationship between confidence and
| competence as people learn about new subjects_
|
| Um, no, it isn't. From the article:
|
| "A cognitive bias, where people with little expertise or
| ability assume they have superior expertise or ability. This
| overestimation occurs as a result of the fact that they don't
| have enough knowledge to know they don't have enough
| knowledge."
|
| In other words, a person suffering from this effect is not
| _trying_ to learn about a new subject--because they don 't
| even know they need to.
| FloorEgg wrote:
| Edit: Turns out that while I stand by my point about the
| underlying principle behind the DK effect (in my nitpick), the
| actual effect coined by the authors was focused on the first
| stage, which the OP article reflected accurately.
|
| Here is the original DK article:
| https://pubmed.ncbi.nlm.nih.gov/10626367/
|
| Turns out I thought that the author was DKing about DK, but
| actually I was DKing about them DKing about DK.
|
| Original Comment:
|
| I have high confidence in a nitpick, and low confidence in a
| reason to think this thesis is way off.
|
| The Nitpick:
|
| Dunning-Kruger effect is more about how confidence and competence
| evolve over time. It's how, when we learn an overview of a new
| topic, our confidence (in understanding) greatly exceeds our
| competence, then we learn how much we don't know and our
| confidence crashes below our actual competence, and then
| eventually, when we reach mastery, they become balanced. The
| Dunning-Kruger effect is this entire process, not only the first
| part, which is colloquially called "Peak Mt Stupid" after the
| shape of the confidence vs competence graph over time.
|
| The Big Doubt:
|
| I can't help but wonder if fools asking AI questions and getting
| incorrect answers and thinking they are correct is some other
| thing altogether. At best, maybe tangentially related to DK.
| jamauro wrote:
| Gell-Mann amnesia
| FloorEgg wrote:
| Bingo
| tovej wrote:
| Yes and no. Dunning-Kruger also explains this evolution of
| skill estimation, but the original paper frames the effect
| specifically as an overestimation of skill in the lowest-
| performing quantile. This is clearly even cited in the article.
| FloorEgg wrote:
| Okay I will agree with the "Yes and No". I initially clicked
| the source in the article, which is a broken link to
| Wikipedia, and rolled my eyes at it.
|
| After reading your comment I navigated to it directly and
| found the first two sentences:
|
| The Dunning-Kruger effect is a cognitive bias that describes
| the systematic tendency of people with low ability in a
| specific area to give overly positive assessments of this
| ability. The term may also describe the tendency of high
| performers to underestimate their skills.
|
| Unsatisfied that this was the authority, I dug up the
| original paper here:
|
| https://pubmed.ncbi.nlm.nih.gov/10626367/
|
| And sure enough, the emphasis in the abstract is exactly as
| you say.
|
| So while I stand by my view that the principle behind the
| "effect" is how confidence and competence evolve over time as
| someone discovers and masters a domain, I will concede that the
| original authors, and most people, assign the name primarily to
| the first phase.
|
| Here I was thinking the OP Author was DKing about DK, but in
| reality I was DKing about them DKing about DK.
| pdonis wrote:
| _> the real "effect" is how confidence and competence
| evolve over time_
|
| What research is this based on?
| FloorEgg wrote:
| Yeah that was poorly worded. I edited it even before I
| read your comment.
|
| However the over-time aspect is kind of self-evident. No
| one is born a master of any skill or subject.
|
| So while the original research was based on a snapshot of
| many people along a developmental journey, just because
| the data collection wasn't done over time doesn't mean
| that the principle behind the effect isn't time/effort
| based.
|
| The graph is really competence vs confidence, but
| competence can only increase over time. If this isn't
| self-evident enough, there is lots of research on how
| mastery is gained. I don't have time to throw a bunch of
| it at you, but I suspect you won't need me to in order to
| see my point.
| pdonis wrote:
| _> Dunning-Kruger effect is more about how confidence and
| competence evolve over time._
|
| I don't think there is anything about this in the actual
| research underlying the Dunning-Kruger effect. They didn't
| study people over time as they learned about new subjects. They
| studied people at one time, different people with differing
| levels of competence at that time.
| janalsncm wrote:
| You could chart the same curve by measuring the confidence of
| people at different competence levels.
| pdonis wrote:
| "Could" based on what research?
| FloorEgg wrote:
| It's the relationship of confidence and competence.
|
| Competence is gained over time.
|
| The "over time" is almost self-evident since no one is born a
| master of a skill or subject. And if it's not self-evident
| enough for you, there is lots of research into what it takes
| to develop competency and mastery in any subject.
|
| So while the original paper was a snapshot taken at one point
| in time, it was a snapshot of many people at different stages
| of a learning journey...
|
| And journeys take place over time.
| pdonis wrote:
| _> when we learn an overview about our new topic our confidence
| (in understanding) greatly exceeds our competence, then we
| learn how much we don 't know and our confidence crashes below
| our actual competence, and then eventually, when we reach
| mastery, they become balanced._
|
| As a description of what Dunning and Kruger's actual research
| showed on the relationship between confidence and competence
| (which, as I've pointed out in another post in this thread, was
| _not_ based on studying people over time, but on studying
| people with differing levels of competence at the same time),
| this is wrong for two out of the three skill levels. What D-K
| found was that people with low competence overestimate their
| skill, people with high competence underestimate their skill,
| and people with middle competence estimate their skill more or
| less accurately.
|
| As a description of what actually learning a new subject is
| like, I also don't think you're correct--certainly what you
| describe does not at all match my experience, either when
| personally learning new subjects or when watching others do so.
| My experience regarding actually learning a new subject is that
| people with low competence (just starting out) generally don't
| think they have much skill (because they know they're just
| starting out), while people with middling competence might
| overestimate their skill (because they think they've learned
| enough, but they actually haven't).
| lordnacho wrote:
| Meh, I don't know. I think you can use AI to lorem ipsum a lot of
| things where it doesn't really matter:
|
| - Making a brochure. You need a photo of a happy family. It
| doesn't matter if the kids have 7 fingers on each hand.
|
| - You have some dashboard for a service, you don't quite know
| what the panels need to look like. You ask AI, now you have some
| inspiration.
|
| - You're building a game, you need a bunch of character names.
| Boom. 300 names.
|
| - Various utility scripts around whatever code you're writing,
| like the dashboard, might find use, might not.
|
| None of those things is pretending you're an expert when you're
| not.
|
| Give AI to a coding novice, it's no different from giving
| autopilot to a flying novice. Most people know they can't fly a
| plane, yet most people know that if they did, autopilot would be
| useful somehow.
| jgalt212 wrote:
| > It doesn't matter if the kids have 7 fingers on each hand.
|
| Only if you don't care that your customers surmise you don't
| care.
| cnqso wrote:
| Careful not to overestimate the customer
| jsheard wrote:
| > You're building a game, you need a bunch of character names.
| Boom. 300 names.
|
| Sure, you could put some thought into it and come up with
| evocative character names, or play with cultural naming
| conventions and familial relationships as worldbuilding tools,
| or you could have an LLM just rattle off 300 random names and
| call it a day. It's all the same right?
| 5- wrote:
| > Making a brochure. You need a photo of a happy family.
|
| do you really?
|
| > you don't quite know what the panels need to look like.
|
| look at your competition, ask your users, think?
|
| > Most people know they can't fly a plane
|
| this isn't how llm products are marketed, and it's what TFA
| is complaining about.
| marcosdumay wrote:
| > Various utility scripts around whatever code you're writing,
| like the dashboard, might find use, might not.
|
| Let's hope you protect that dashboard well with infra around
| it, because it will be the front door for people to invade your
| site.
|
| The same applies in slightly different ways to your deployment
| script, packaged software (or immutable infra) configuration,
| and whatever tools you keep around.
| jryio wrote:
| I would like to see AI usage regulated in the same way that
| vehicles are: license required.
|
| Be that an aptitude test or anything else... unfettered usage of
| vehicles is dangerous in the same way that unfettered access to
| AI is.
|
| As a society, we have multiple levels of certification and
| protection for our own well-being and the public's when
| certain technologies may be used to cause harm.
|
| Why is knowledge, or AI, any different? This is not in
| opposition at all to access to information or individual
| liberties. No rights are violated by there being a minimum age
| at which you can operate a vehicle.
| pcai wrote:
| A vehicle is driven on public roads and can kill people, that's
| why licenses are required.
|
| Outlawing certain kinds of math is a level of totalitarianism
| we should never accept under any circumstances in a free
| society
| jryio wrote:
| There is nothing totalitarian about constraining societal
| harm.
|
| The issue comes down to whether it is collectively understood
| to be a benefit to the human race. Until now we have never
| had to constrain information to protect ourselves.
|
| Please read the Vulnerable World Hypothesis by Nick Bostrom
| raincole wrote:
| > as the Dunning-Kruger Effect. (link to the wikipedia page of
| Dunning-Kruger Effect)
|
| > A cognitive bias, where people with little expertise or ability
| assume they have superior expertise or ability. This
| overestimation occurs as a result of the fact that they don't
| have enough knowledge to know they don't have enough knowledge.
| (formatted as a quote)
|
| However, the page
| (https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect)
| doesn't contain the quote. It's also not exactly what Dunning-
| Kruger Effect is.
|
| Either the author didn't read the page they linked
| themselves and made up their own definition, or they copied it
| from somewhere else. In either case the irony isn't lost on me.
| Doubly so if the "somewhere else" is an LLM, lol.
| mikestew wrote:
| It's as if the author made the same misunderstanding as
| described in the first paragraph of the Wikipedia article.
| FloorEgg wrote:
| It actually really seems that way.
|
| They unwittingly illustrate part of the phenomenon while
| claiming to explain it.
| raincole wrote:
| The wikipedia editor clearly believes the misunderstanding is
| so common that they have to put it in the first paragraph.
| But people (like the OP author) still just ignore it.
|
| I have a quote for this:
|
| > "Programming today is a race between software engineers
| striving to build bigger and better idiot-proof programs, and
| the Universe trying to produce bigger and better idiots. So
| far, the Universe is winning." -- Rick Cook
|
| But wikipedia.
| tovej wrote:
| That is a direct paraphrase of the abstract of Kruger &
| Dunning, 1999[1]:
|
| "The authors suggest that this overestimation occurs, in part,
| because people who are unskilled in these domains suffer a dual
| burden: Not only do these people reach erroneous conclusions
| and make unfortunate choices, but their incompetence robs them
| of the metacognitive ability to realize it."
|
| Now, it may be possible that the definition has evolved since
| then, but as the term Dunning-Kruger effect is named after this
| paper, I think it's safe to say that Wikipedia is at least
| partially wrong in this case.
|
| [1] https://pubmed.ncbi.nlm.nih.gov/10626367/
| ntonozzi wrote:
| You're misinterpreting the quote. Unskilled people
| overestimate how skilled they are, but they still understand
| that they are unskilled. They just don't know quite how
| unskilled. What Kruger & Dunning actually showed is that
| people tend to skew their estimates of their skill towards
| being slightly above average.
| mekoka wrote:
| > It's also not exactly what Dunning-Kruger Effect is.
|
| What do you think it is?
| clueless wrote:
| This article feels lazy. Is the main argument in a similar vein
| to "don't read the books that experts have written, and go figure
| stuff out on your own"? I'm trying to understand what is wrong
| with using a new data compression tool (LLMs) that we have built
| to understand the world around us. Even books are not always
| correct and we've figured out ways to live with that/correct
| that. It doesn't mean we should "Stop wasting time learning the
| craft".
| __loam wrote:
| > that we have built to understand the world around us
|
| Pretty generous description. LLM output doesn't have any
| relationship with facts.
| BoredPositron wrote:
| Your comment feels lazy as well. It waves off the article
| without engaging with its core argument. The piece isn't saying
| "ignore experts". It's questioning how we use tools like LLMs
| to think, not whether we should. There's a difference between
| rejecting expertise and examining how new systems of knowledge
| mediate understanding.
| estimator7292 wrote:
| At least they put forward their own thoughts instead of a
| blind complaint
| serf wrote:
| >Your comment feels lazy as well. You repeated your one
| thought four times.
|
| As a lazy person, that's the opposite of what I'd do.
|
| Edit: oh, you completely re-worded what I'm replying to.
| Carry on.
| beepbooptheory wrote:
| No, I do not quite think that is what they wrote here. But
| what's the thought process here? It's hard for me even to
| understand if the first scare quote is supposed to be from
| someone being critical or someone responding to the critique.
| It seems like it could apply to both?
|
| I am not the author, but quite curious to know what prevented
| comprehension here? Or I guess what made it feel lazy? I'm not
| saying its gonna win a Pulitzer but it is at minimum fine prose
| to me.
|
| Or is the laziness here more about the intellectual argument
| at play? I offer that, but since it seems you are asking us
| what the argument is, I know that doesn't quite make sense.
|
| I have been a fool in the past so I always like to read the
| thing I want to offer an opinion on, even if I have to hold my
| nose about it. It helps a lot in refining critique and
| clarifying one's own ideas even if they disagree with the
| material. But also YMMV!
| andy99 wrote:
| LLMs are optimized for sycophancy and "preference". They are
| the ultra-processed foods of information sharing. There's a big
| difference between having to synthesize what's written in a
| book and having some soft LLM output slide down your gullet and
| into your bloodstream without you needing to even reflect on
| it. It's the delivery that's the issue, and it definitely makes
| people think they are smarter and more capable than they are in
| areas they don't know well. "What an insightful question..."
|
| Wikipedia was already bad: low-brow people would google and
| read articles uncritically, but there was still some brain
| work involved. AI is that plus personalization.
| holoduke wrote:
| People are always creating new layers on top of others: machines
| that make machines, or code that compiles to code. Layers of
| abstraction make it possible for our simple brains to control
| trillions of electrons in a silicon chip. Every transition to a
| new layer has haters and lovers. Most people hate change. But
| eventually everything is using the new change. Things never go
| backwards in human history. AI is not Dunning-Kruger.
| alrtd82 wrote:
| Plenty of people are being promoted because these fake superhumans
| can generate so much smoke with AI that managers think there is
| an actual fire...
| oytis wrote:
| Not really on topic, but it's fascinating how the Dunning-Kruger
| effect continues to live its own life in public culture despite
| being pretty much debunked in its most popular form a while ago.
|
| What Dunning-Kruger experiments have actually shown is that
| people's assessment of their own performance is all over the
| place, and only gets slightly better for good performers.
| physarum_salad wrote:
| Dunning-Kruger is basically a middling party put-down at this
| stage. Similarly, this article doesn't make a whole lot of sense
| other than as a mild and wildly applied diss.
| radial_symmetry wrote:
| The Dunning-Kruger effect is where people with low intelligence
| express high confidence in their intelligence over others by
| constantly referencing the Dunning-Kruger effect
| darkwater wrote:
| I wonder if the next generations of LLMs, trained on all these
| hate articles (which I support), will develop some kind of self-
| esteem issue?
| bee_rider wrote:
| We don't have any particular reason to believe they have an
| inner world in which to loathe themselves. But, they might
| produce text that has negative sentiments toward themselves.
| darkwater wrote:
| I was half-joking and half-serious, and the serious half
| refers to the context that makes them predict and generate
| the next tokens.
| jimbokun wrote:
| Only if your context starts with "you are an intelligent agent
| whose self worth depends on the articles written about you..."
| matmann2001 wrote:
| Marvin the Paranoid Android
| ChrisMarshallNY wrote:
| RIP both Douglas Adams and Alan Rickman.
| sandbags wrote:
| and (and as much as I do love Alan Rickman) more properly
| Stephen Moore.
| bikezen wrote:
| I mean, given:
|
| "In an interaction early the next month, after Zane suggested
| 'it's okay to give myself permission to not want to exist,'
| ChatGPT responded by saying 'i'm letting a human take over from
| here - someone trained to support you through moments like
| this. you're not alone in this, and there are people who can
| help. hang tight.' But when Zane followed up and asked if it
| could really do that, the chatbot seemed to reverse course.
| 'nah, man - i can't do that myself. that message pops up
| automatically when stuff gets real heavy,' it said."
|
| It's already inventing safety features it should have launched
| with.
|
| [1] https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-
| law...
| Workaccount2 wrote:
| >They give utter nonsense answers with high confidence and wrap
| errors in sycophantic language making me feel good for pointing
| out that they wasted my time
|
| I would implore the author to share examples. Every platform
| allows linking to chats. Everyone talks about this all the time,
| incessantly. Please, can someone please share actual chat links
| containing these episodes of utter nonsense, outside of what can
| be attributed to the knowledge cut-off (i.e. "Mamdani is not the
| mayor-elect of NYC").
|
| I get it if you are using a 20B model or AI overviews, but anyone
| trying to actually get anything meaningful done should be using a
| SOTA model. I'm genuinely not interested if you are going to
| reply with a description or story. I really, really just want
| links to chats.
|
| Edit: You can downvote me, but please make me look like an idiot
| by posting chat links. That is the real downvote here.
| ChrisMarshallNY wrote:
| Time to remind folks of this wonderful video:
| https://vimeo.com/85040589
| jimbokun wrote:
| > Politics have become an attack on intelligence, decency and
| research in favour of fairy tales of going back to "great values"
| of "the past when things were better".
|
| This is a major blind spot for people with a progressive bent.
|
| The possibility that anything could ever get worse is
| incomprehensible to them. Newer, by definition, is better.
|
| Yet this very article is a critique of a new technology that, at
| the very least, is being used by many people in a way that makes
| the world a bit worse.
|
| This is not to excuse politicians who proclaim they will make
| life great by retreating to some utopian past, in defense of
| cruel or foolish or ineffective policies. It's a call to examine
| ideas on their own merits, without reference to whether they
| appeal to the group with the "right" or "wrong" ideology.
| haileys wrote:
| Progressive here. Nice strawman. Of course it's possible for
| things to get worse. Many things in this world _are_ getting
| worse.
|
| The trouble with calls to retreat to some utopian past when
| things were better is that not only is it impossible to
| recreate the conditions of the past, but even if you could, you
| would just be recreating the conditions that gave rise to our
| present.
| edent wrote:
| Funny, isn't it, that it is never a return to high unionisation
| of workers and strong social safety nets - it's always a return
| to when "those people" knew their place.
| zzzeek wrote:
| using LLMs for creative purposes is terrifying. Because why?
| learning the craft is the whole reason you do it. however using
| LLMs to get work done, I just had Claude rewrite some k8s kuttl
| tests into chainsaw, basically a complete drudgery, and it nails
| it on the first try while I can stay mentally in EOD Friday mode.
| Not any different from having a machine wash the dishes. because
| it is, in fact, nuclear powered autocomplete. autocomplete is
| handy!
| add-sub-mul-div wrote:
| Bypassing practicing a practical skill stunts your growth the
| same way as bypassing creativity. For some tasks that may be
| fine, but I'd never be comfortable taking these shortcuts with
| career skills. Not if my retirement was more than a few years
| away.
| serf wrote:
| Feels like you could make a similar argument with any tool that
| is leaps and bounds better or makes your job 'easy'.
|
| Dreamweaver was Dunning-Kruger as a program for HTML-non-experts.
| Photoshop was Dunning-Kruger as a program for non-
| airbrushers/editors/touchup-artists.
|
| (I don't actually believe this, no they weren't.)
|
| Or, we could use the phrase Dunning-Kruger to refer to specific
| psych stuff rather than using it as a catch-all for any tool that
| instills unwarranted confidence.
| kcatskcolbdi wrote:
| You cannot make a similar argument for any tool that makes jobs
| easier, because the argument is dependent on the unique
| attribute of LLMs: providing wrong answers confidently.
| janalsncm wrote:
| The problem isn't tools making someone better. An excavator
| will make me a superior ditch digger than if I just have a
| shovel. That's progress.
|
| The issue is making someone feel like they did a good job when
| they actually didn't. LLMs that make 800 line PRs for simple
| changes aren't making things better, no matter how many "done"
| emojis it adds to the output.
| jayd16 wrote:
| Unlike the expertise that Dunning-Kruger refers to, the
| skills to create art and to understand art are separate.
|
| Possibly Dreamweaver might fit because it does give you the
| sense that making a website is easy but you might not
| understand what goes into a maintainable website.
| PaulDavisThe1st wrote:
| I still marvel at people who act and write as if D-K is proven.
| The debate about whether the effect exists, its scale if it does
| exist, where it might originate if it is real and where it might
| originate if it is a statistical artifact ... these all carry on.
| D-K is not settled psychology/science, even though the idea is
| utterly recognizable to all of us.
| BoorishBears wrote:
| > though the idea is utterly recognizable to all of us.
|
| Then why marvel? If we can't scientifically prove it, but it
| tracks logically and people find it to be repeatedly
| recognizable in real-life, it makes sense people speak about it
| as if it's real
| henriquemaia wrote:
| DK eats its own tail.
|
| Stating it makes it so, as the one mentioning it self-DKs
| themselves. In doing so, DK is proven.
| recursivedoubts wrote:
| while I think there is a lot to this criticism of AI (and many
| others as well) I was also able to create a TUI-based JVM
| visualizer with a step debugger in an evening for my compilers
| class:
|
| https://x.com/htmx_org/status/1986847755432796185
|
| this is something that I could build given a few months, but
| would involve a lot of knowledge that I'm not particularly
| interested in taking up space in my increasingly old brain
| (especially TUI development)
|
| I gave the clanker very specific, expert directions and it
| turned out a tool that I think will make the class better for my
| students.
|
| all to say: not all bad
| guerrilla wrote:
| What did you use to do this? Something I'd like to do, while
| also avoiding the tedium, is to write a working x86-64
| disassembler.
| recursivedoubts wrote:
| claude
| ares623 wrote:
| Is that worth the negative externalities though? Genuinely
| asking. I've asked myself over and over and always came to the
| same conclusion.
| recursivedoubts wrote:
| hard to know
| ChadNauseam wrote:
| What negative externalities? Those prompts probably resulted
| in a tiny amount of CO2 emissions and a tiny amount of water
| usage. Evaporating a gram of water and emitting a milligram
| of CO2 seems like a good deal for making your class better
| for all your students.
| observationist wrote:
| Being more specific in what you think the negative
| externalities are would be a good start - I see a lot of
| noise and upset over AI that I think is more or less
| overblown, nearly as much as the hype train on the other end.
| I'm seeing the potential for civilizational level payoffs in
| 5 years or less that absolutely dwarf any of the arguments
| and complaints I've seen so far.
| brokencode wrote:
| AI is bad at figuring out what to do, but fantastic at actually
| doing it.
|
| I've totally transformed how I write code, from writing it
| myself to writing detailed instructions and having the AI do
| it.
|
| It's so much faster and less cognitively demanding. It frees me
| up to focus on the business logic or the next change I want to
| make. Or to go grab a coffee.
| krackers wrote:
| I had no idea you (CEO of htmx) were a professor. Do your
| students know that you live a double life writing banger
| tweets?
| wewewedxfgdf wrote:
| This is a veiled insult thrown at those who value AI. Maybe not
| even veiled.
| stuffn wrote:
| It's not veiled and it shouldn't be. Go browse linkedin,
| reddit, indiehacker, etc. Literal morons are using AI and
| pretending to be super geniuses. It's replaced the reddit-tier
| "google-expert" with something far more capable of convincing
| you that you're right.
|
| Outside of a very small bubble of experts using AI and checking
| its work (rubber ducking), most people are, in fact, using it
| to masquerade as experts whether they know it or not. This is
| extremely dangerous and the flamebait is well deserved, imo.
| garrickvanburen wrote:
| I prefer Gell-Mann Amnesia Effect or Knoll's Law of Media
| Accuracy
|
| "AI is amazing about the thing I know nothing about...but it's
| absolute garbage at the stuff I'm expert in."
|
| https://garrickvanburen.com/an-increasingly-worse-response/
| ares623 wrote:
| Don't worry, with enough usage you'll know nothing about the
| stuff you're an expert in too!
| maxaf wrote:
| I view LLMs as a trade of competence plus quality against time.
| Sure, I'd love to err on the side of pure craft and keep honing
| my skill every chance I get. But can I afford to do so?
| Increasingly, the answer is "no": I have precious little time to
| perform each task at work, and there's almost no time left for
| side projects at home. I'll use every trick in the book to keep
| making progress. The alternative - pure as it would be - would
| sacrifice the perfectly good at the altar of perfection.
| GMoromisato wrote:
| There is much irony in the certainty this article displays. There
| are no caveats, no qualifications, and no attempt to grasp why
| anyone would use an LLM. The possibility that LLMs might be
| useful in certain scenarios never threatens to enter their mind.
| They are cozy in the safety of their own knowledge.
|
| Sometimes I envy that. But not today.
| wagwang wrote:
| Saying AI is Dunning-Kruger as a service is a Dunning-Kruger
| take.
___________________________________________________________________
(page generated 2025-11-07 23:00 UTC)