[HN Gopher] Princeton CS Prof: ChatGPT Is a Bullshit Generator (...
       ___________________________________________________________________
        
       Princeton CS Prof: ChatGPT Is a Bullshit Generator (2022)
        
       Author : 1vuio0pswjnm7
       Score  : 56 points
       Date   : 2023-02-01 21:49 UTC (1 hour ago)
        
 (HTM) web link (aisnakeoil.substack.com)
 (TXT) w3m dump (aisnakeoil.substack.com)
        
       | puma_ambit wrote:
       | The problem with a lot of academics is that they think everything
       | has to be perfect. That's just not how the real world works.
       | People have seen that it can be useful, and that's all it is for
       | now; it's up to humans to decide whether it provides value or
       | not. And yes, it will generate bullshit from time to time, but
       | like anything else it'll get better. I remember when they used
       | to say the Internet wasn't very useful either; look how that
       | turned out.
        
         | Jeff_Brown wrote:
         | Generally agreed, but there is a certain hyperventilating
         | demographic that needs to understand the limitations of
         | today's AI.
        
         | allknowingfrog wrote:
         | Did you read the article? It's a fairly balanced evaluation of
         | the strengths and weaknesses of LLMs in general, and ChatGPT in
         | particular.
        
       | hnthrowaway0315 wrote:
       | Considering how many jobs require a certain amount of BSing, I
       | can see a bright future for ChatGPT.
        
         | Iwan-Zotow wrote:
         | should easily replace all politicians
        
       | LesZedCB wrote:
       | can this title be de-editorialized?
       | 
       | > ChatGPT is a bullshit generator. But it can still be amazingly
       | useful
        
         | dsabanin wrote:
         | Ah, this title makes more sense. After all, humans are bullshit
         | generators too, and can occasionally be useful.
        
           | bamboozled wrote:
           | So the thing is, we don't want an automated bullshit
           | generator?
        
             | teawrecks wrote:
             | Our scientists were so preoccupied with whether they
             | could...
        
             | dsabanin wrote:
             | Seriously though, it definitely does not just generate
             | bullshit, and there is no indication that it won't improve
             | dramatically in a few years, augmented by other models.
             | You have to start somewhere. Google and others apparently
             | already have more powerful models, but they don't release
             | them. I remember the days when reliable voice recognition
             | was thought of as nearly impossible, and here we are. I
             | believe the "bullshit" part is a brief, temporary phase.
        
               | orwin wrote:
               | I mean, I've used Copilot more, but it's 99% bullshit.
               | The only AI I've paid for is Wolfram Alpha, and that's
               | the only AI that outperforms me.
               | 
               | Even for dumb React code, it will use classes rather than
               | hooks. It's basically useless with dynamically typed
               | languages and uses old syntax. I do not see the use case
               | right now. At all.
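               | 
               | For illustration, here's the class-versus-hooks contrast
               | in TypeScript (a minimal sketch; this counter component
               | is hypothetical, not actual Copilot output):
               | 
               |     import React from "react";
               | 
               |     // Older class-based style, the kind the bot tends
               |     // to suggest
               |     class Counter extends React.Component<{}, { n: number }> {
               |       state = { n: 0 };
               |       render() {
               |         return (
               |           <button
               |             onClick={() =>
               |               this.setState({ n: this.state.n + 1 })}>
               |             {this.state.n}
               |           </button>
               |         );
               |       }
               |     }
               | 
               |     // The idiomatic modern equivalent using hooks
               |     function HookCounter() {
               |       const [n, setN] = React.useState(0);
               |       return <button onClick={() => setN(n + 1)}>{n}</button>;
               |     }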
        
       | civilized wrote:
       | The authors hypothesize that ChatGPT is useful for
       | 
       | > Tasks where it's easy for the user to check if the bot's answer
       | is correct, such as debugging help.
       | 
       | I would qualify this fairly heavily. If your bug is "my program
       | is throwing an error" or "FizzBuzz isn't producing the expected
       | output", sure, a bot suggestion can be tested easily. But that's
       | only the easiest kind of debugging, and without any deep
       | appreciation of logic, I suspect it would tend to give you overly
       | specific remedies that mask the problem or create new ones.
       | 
       | "You want it to say FizzBuzz when n=15? How about if n=15 print
       | FizzBuzz"
       | 
       | In other words, what can be tested easily for correctness depends
       | very heavily on how strong a grasp the user has on correctness in
       | the domain. A novice without a good bullshit detector could be
       | left worse off than if they had asked a person. It's not clear
       | that the set of problems whose solution can _truly_ be checked
       | easily is all that big or useful. (I think our CS professor
       | friends here are overgeneralizing from intuitions about P vs NP,
       | a totally different context.)
       | 
       | My two cents: it will be mainly useful as a different kind of
       | search engine that can help you remember syntax, what does what,
       | what parameters this thing needs, etc.
        
         | sublinear wrote:
         | b-but... I thought being technically correct was the best kind
         | of correct! /s
         | 
         | Seriously though, thanks for articulating the tree-like nature
         | of truth. Being an expert requires the ability to understand a
         | topic at all levels in a consistent manner. ChatGPT just models
         | the language, not the actual concepts.
        
       | tshaddox wrote:
       | In other words, it disrupts classroom assignments where the
       | student is being asked to produce bullshit _but_ where
       | historically the easiest way to produce that bullshit was via a
       | process that was supposedly valuable to the student's education.
       | The extent to which a teacher cannot distinguish a human-
       | generated satisfactory essay from an essay generated by a
       | bullshit generator is by definition _precisely_ the extent to
       | which the assignment is asking the student to generate bullshit.
       | This will certainly require a lot of reworking of these
       | traditional curriculums that consist heavily of asking the
       | student to generate bullshit, but maybe that wasn't the best way
       | to educate students this whole time.
        
         | Erik816 wrote:
         | If it can generate working code, does that mean that asking
         | students to produce working code was bullshit? Or does it just
         | mean that AI can now do a lot of things we used to ask
         | students to do (for probably solid educational reasons) at an
         | above-average level?
        
           | tshaddox wrote:
           | If it's reliably generating working code then that isn't
           | bullshit! (Ignoring, of course, other things about the code
           | that might be relevant to the assignment, like coding style
           | or efficiency.) What I'm saying is that _if_ you are looking
           | at the AI's output and judging that it's bullshit, _and_ if
           | you can't distinguish that output from your students'
           | satisfactory essays, then that by definition means that the
           | assignment was to produce bullshit.
        
         | lukev wrote:
         | I agree with some of this (particularly the conclusion that
         | better education methods are required) but let's be a bit
         | generous for a second.
         | 
         | The ability to write well is (or was) an important skill. Being
         | able to use correct grammar, to structure even a simple
         | argument, to incorporate sources to justify one's statements,
         | etc. Even if we're just talking about the level of what GPT 3.5
         | is capable of, that still corresponds to, let's say, a college
         | freshman level of writing.
         | 
         | Now, perhaps with the advent of LLMs, that's no longer true.
         | Perhaps in the near future, the ability to generate coherent
         | prose "by hand" will be thought of in the same way we think of
         | someone who can do long division in their head: a neat party
         | trick, but not applicable to any real-world use.
         | 
         | It isn't at all clear to me, though, that we're yet at the
         | point where this tech is good enough that we're ready (as a
         | society) to deprecate writing as a skill. And "writing
         | bullshit" may in fact be a necessary element of practice for
         | writing well. So it isn't self-evident that computers being
         | able to write bullshit means we shouldn't also expect humans
         | to be able to write bullshit (at a minimum, hopefully they
         | can go well beyond that).
        
         | fardo wrote:
         | > This will certainly require a lot of reworking of these
         | traditional curriculums that consist heavily of asking the
         | student to generate bullshit, but maybe that wasn't the best
         | way to educate students this whole time.
         | 
         | That process was invaluable for making normal bullshitters into
         | bullshit artists -- how will we train our elite human
         | bullshitters in the modern age of language models?
        
         | gmd63 wrote:
         | The point of school work isn't to generate something of value
         | externally, it's to generate understanding in the student. Much
         | like lifting weights or running are "bullshit" activities. You
         | haven't seen bullshit until you see whatever is produced by a
         | society with a voracious appetite that runs on "gimme what I
         | want" buttons and a dismal foundation of understanding.
         | 
         | The students bullshitting their way through course work are
         | like the lifters using bad form to make the mechanical goal of
         | moving the weight easier in the short term. They completely
         | miss the point.
        
         | titzer wrote:
         | > This will certainly require a lot of reworking of these
         | traditional curriculums that consist heavily of asking the
         | student to generate bullshit, but maybe that wasn't the best
         | way to educate students this whole time.
         | 
         | Just curious which discipline you have a grudge against here.
         | Because presumably disciplines are actually _disciplines_ where
         | someone working in the field for their entire career can spot
         | BS.
        
           | SpaghettiX wrote:
           | GP did not mention disciplines, and I don't think individual
           | disciplines are to blame.
           | 
           | The approach of evaluating students based on "text
           | generation" is very boring to study for, easy to fool (a
           | parent/guardian can do it, last year's students can pass you
           | an A-grade answer, ChatGPT can generate it), and doesn't
           | prepare students for reality (making new things, solving new
           | problems).
        
         | [deleted]
        
       | zoba wrote:
       | Ezra Klein covers ChatGPT & Bullshit on an episode of his
       | podcast. The episode title is "A Skeptical Take on the A.I.
       | Revolution"
       | 
       | https://podcasts.apple.com/us/podcast/the-ezra-klein-show/id...
        
         | hoppyhoppy2 wrote:
         | Transcript:
         | https://www.nytimes.com/2023/01/06/podcasts/transcript-ezra-...
        
       | westurner wrote:
       | Prompt engineering:
       | https://en.wikipedia.org/wiki/Prompt_engineering
       | 
       | /? inurl:awesome prompt engineering "llm" site:github.com
       | https://www.google.com/search?q=inurl%3Aawesome+prompt+engin...
       | 
       | XAI: Explainable Artificial Intelligence & epistemology
       | https://en.wikipedia.org/wiki/Explainable_artificial_intelli... :
       | 
       | > _Explainable AI (XAI), or Interpretable AI, or Explainable
       | Machine Learning (XML), [1] is artificial intelligence (AI) in
       | which humans can understand the decisions or predictions made by
       | the AI. [2] It contrasts with the "black box" concept in machine
       | learning where even its designers cannot explain why an AI
       | arrived at a specific decision. [3][4] By refining the mental
       | models of users of AI-powered systems and dismantling their
       | misconceptions, XAI promises to help users perform more
       | effectively. [5] XAI may be an implementation of the social
       | right to explanation. [6] XAI is relevant even if there is no
       | legal right or regulatory requirement. For example, XAI can
       | improve the user experience of a product or service by helping
       | end users trust that the AI is making good decisions. This way
       | the aim of XAI is to explain what has been done, what is done
       | right now, what will be done next and unveil the information the
       | actions are based on. [7] These characteristics make it possible
       | (i) to confirm existing knowledge (ii) to challenge existing
       | knowledge and (iii) to generate new assumptions. [8]_
       | 
       | Right to explanation:
       | https://en.wikipedia.org/wiki/Right_to_explanation
       | 
       | (Edit; all human)
       | 
       | /? awesome "explainable ai"
       | https://www.google.com/search?q=awesome+%22explainable+ai%22
       | 
       | - (Many other _great_ resources)
       | 
       | - https://github.com/neomatrix369/awesome-ai-ml-dl/blob/master...
       | :
       | 
       | > _Post model-creation analysis, ML interpretation
       | /explainability_
       | 
       | /? awesome "explainable ai" "XAI"
       | https://www.google.com/search?q=awesome+%22explainable+ai%22...
        
         | titzer wrote:
         | I for one rue this commandeering of the word "engineering". No,
         | most activities involving stuff are not "engineering".
         | Especially flailing weakly from a distance at an impenetrable
         | tangle of statistical correlations. What a disservice we are
         | doing ourselves.
        
           | layer8 wrote:
           | Google lists the following meanings for the verb "to
           | engineer" (based on Oxford data):
           | 
           | 1. design and build (a machine or structure). "the men who
           | engineered the tunnel"
           | 
           | 2. skilfully arrange for (something) to occur. "she
           | engineered another meeting with him"
           | 
           | "Prompt engineering" is from meaning 2, not 1.
        
           | woah wrote:
           | "Engineers" were originally just eccentric noblemen who liked
           | to tinker around with engines. Kind of an awesome hobby if
           | you think about it, combining clockwork and fire in a cool
           | way. Hence the "eer" suffix implying that they are just
           | really into engines, like a "Mouseketeer" is someone who is
           | really into Mickey Mouse.
           | 
           | It didn't acquire its self-righteously gatekept meaning,
           | implying that one has passed some kind of technical
           | examination to be allowed to draft plans for roads, until
           | much later.
        
       | more_corn wrote:
       | And therefore has demonstrated that 90% of modern life and work
       | are bullshit.
        
       | nashashmi wrote:
       | And someone else realizes the errors and corrects them. -1 & 1
       | equal 2
        
       | aatd86 wrote:
       | Showing only half of the title is too triggering. That's not
       | fair to the CS prof. Misleading.
       | 
       | But to comment on the article: although I've encountered several
       | falsehoods in ChatGPT's output, I'm still very surprised that it
       | could devise algorithms when I gave it the criteria.
       | 
       | And it would even output some code samples.
       | 
       | It shouldn't be underestimated as a programming aid.
        
       ___________________________________________________________________
       (page generated 2023-02-01 23:00 UTC)