[HN Gopher] ChatGPT: The Slickest Con Artist of All Time
       ___________________________________________________________________
        
       ChatGPT: The Slickest Con Artist of All Time
        
       Author : zdw
       Score  : 283 points
       Date   : 2023-02-03 13:22 UTC (9 hours ago)
        
 (HTM) web link (tedgioia.substack.com)
 (TXT) w3m dump (tedgioia.substack.com)
        
       | darkwraithcov wrote:
       | I'm envisioning a bifurcation of reality where some people live
       | in an entirely fact based world (or as close an approximation to
       | fact based as a human can objectively reach, aka the observable,
       | knowable universe) and some live in a complete fabrication, a
       | fantasy version carefully crafted by AIs. Now add Augmented
       | Reality to the mix, and it's a dystopian nightmare.
       | 
       | And I don't think the US political left will be immune to it as
       | much as they may think. While I agree that older Americans on the
       | right are highly susceptible to misinformation, and media
       | literacy is dismal among that demographic, younger people are
       | also prone to it. Just look at all the unhinged utter nonsense
       | that is wildly popular on TikTok.
       | 
       | The ability of ML models to authoritatively spout bullshit will
       | make gish gallops worse than they are now. It will also make echo
       | chambers even worse, as digital alternate realities will further
       | divide people. I mean, who wants to engage with those who
        | completely reject that the sky is objectively blue, or that 2 +
       | 2 = 4? Well now they'll have articulate, authoritative responses
       | with works cited explaining why the sky is red, actually.
       | 
       | Who needs Big Brother when people choose the method of their own
       | subjugation eagerly and with dessert?
        
       | m3kw9 wrote:
        | ChatGPT is like that tipping point where things start to get
        | wild. It really seems like a tipping point. Put another way, it
        | opens up a new graph, and it's set at zero.
        
       | ThrowawayTestr wrote:
       | No matter how cool something is, there will always be people
       | saying it isn't that impressive. Perpetual motion could be
       | invented and there would still be people going "yeah sure, but
       | it's not a free energy machine so it's a scam"
        
       | VBprogrammer wrote:
       | Maybe I missed the memo but why isn't anyone impressed that a
        | computer can generate well-formed prose in response to arbitrary
       | questions? It seems like we've completely leaped over that as an
       | achievement and are now arguing over how it's confidently wrong
       | or how there are emergent patterns in what it has to say. No one
       | is claiming it's a general intelligence but it's still amazingly
       | impressive.
        
         | skybrian wrote:
         | It's impressive and that's why we're talking about it, but it's
         | no longer news. The assumption is that you already know what
         | ChatGPT is, and now the conversation is about what it's good
         | for.
        
       | MichaelRazum wrote:
       | It seems somehow that Asimov got it right. The obvious next steps
        | are all about making it smarter but also implementing the right
        | ethics rules...
        
       | seydor wrote:
        | Ah yes, let's anthropomorphize a bunch of numbers, then call him
        | a con artist. This is going to be a thoughtful article.
        
       | hospadar wrote:
       | I wonder if the biggest shortcoming of GPT right now is not that
       | it sometimes gets things wrong, or can't cite its sources, or
       | whatever - maybe it needs to learn when to say "I don't know the
       | answer to that question".
       | 
        | That's a pretty hard thing for most humans (myself included)
        | to learn to say, and I suspect GPT's training data (the
        | internet) doesn't include a lot of "I'm not sure" language and
        | probably does include a lot of "I'm definitely sure and
        | definitely totally correct" language (maybe, I guess, no
        | evidence to back up that suggestion, I'm not sure).
       | 
       | Many of my favorite coworkers, friends, doctors, pundits are
       | trustworthy exactly because they work hard to not profess
       | knowledge they are unsure about. The reason (IMO) that Scott
       | Alexander is a jewel of the internet is because of the way he
       | quantifies uncertainty when working through a topic.
        
         | hooande wrote:
          | You're flat out incorrect about this. GPT _loves_ to say that
          | it doesn't know things. Have you actually used it?
         | 
         | Don't take my word for it. Ask it: "How many people are wearing
         | tennis shoes in the United States right now?"
         | 
         | The response I got: "I'm sorry, I don't have the current data
         | on the number of people in the United States who are wearing
         | tennis shoes right now. It's not possible to determine this
         | number as it would require constantly monitoring the footwear
         | of every person in the country, which is not feasible. The
         | number would also vary greatly from moment to moment and would
         | be influenced by a number of factors such as time of day,
         | weather, and location."
         | 
         | No hallucinations, no bullshitting. Just "I don't know" and an
          | explanation of what it would take to find out. No idea why you
         | think it has trouble expressing a lack of confidence in its
         | answers.
        
           | redox99 wrote:
            | I've actually used it quite a bit and he's not "flat out
            | incorrect". I'd say you're the one who's wrong.
            | 
            | It might reply that way when you ask it something it can
            | deduce it can't know, but ask it something reasonable and it
            | will almost certainly hallucinate an answer instead of
            | telling you it doesn't know about it.
           | 
           | > What function in Unreal Engine makes an actor transparent?
           | 
           | > The function to make an actor transparent in Unreal Engine
           | is called "SetActorOpacity".
           | 
           | (There is no function called SetActorOpacity)
        
         | freejazz wrote:
         | >"I wonder if the biggest shortcoming of GPT right now is not
         | that it sometimes gets things wrong, or can't cite its sources,
         | or whatever "
         | 
         | It surprises me that people, here no less, seem to not realize
         | that chatGPT has literally no idea what it's doing or saying.
          | It's just trying to find the next word. It has no regard for
          | or understanding of what the word means. In response to this,
         | people here just reflexively question what happens when a human
         | is thinking, but I don't see that as a basis to assume that
         | means chatGPT is thinking.
        
         | MagicMoonlight wrote:
          | It can't learn because it isn't intelligent. It's a Markov
          | chain.
          | 
          | There is no way for it to make any decision at all. The
          | fundamental system design is that it guesses the word that is
          | most likely to be next. That's it. There are no concepts
          | behind whatever it outputs; it has no idea what it means.
        
           | skybrian wrote:
           | It's a next-word predictor like a Markov chain, but a Markov
           | chain couldn't do all the things ChatGPT does. ChatGPT has
           | learned lots of syntax-level patterns pretty well.
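            | 
            | For contrast, a literal word-level Markov chain is just a
            | lookup table of next-word counts. A toy sketch in Python,
            | with a one-line corpus standing in for real training data:
            | 
            |     import random
            |     from collections import Counter, defaultdict
            | 
            |     def train(text):
            |         # For each word, count which words follow it.
            |         counts = defaultdict(Counter)
            |         words = text.split()
            |         for prev, nxt in zip(words, words[1:]):
            |             counts[prev][nxt] += 1
            |         return counts
            | 
            |     def generate(counts, start, length=10):
            |         # Sample each next word from the counts for the
            |         # single previous word -- no wider context at all.
            |         out = [start]
            |         for _ in range(length):
            |             followers = counts.get(out[-1])
            |             if not followers:
            |                 break
            |             out.append(random.choices(
            |                 list(followers),
            |                 weights=followers.values())[0])
            |         return " ".join(out)
            | 
            |     chain = train("the cat sat on the mat and "
            |                   "the dog sat on the rug")
            |     print(generate(chain, "the"))
            | 
            | A transformer conditions on the whole preceding context
            | rather than only the last word, which is why it can do
            | things a table like this never could.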
        
       | _fat_santa wrote:
        | I don't think GPT is a con; it's doing exactly what it was
        | trained to do. I think the problem is people put false
        | confidence into it. Because it appears to give correct
        | information, ChatGPT has been put on a pedestal by the
        | non-tech world as being some revolution. In fact it's not a
        | revolution; they just figured out how to build a chatbot that
        | returns convincing statements that sound human. Correct
        | information is not its strong suit; sounding smooth in a
        | conversation is.
        
         | lizzardbraind wrote:
         | [dead]
        
         | CodeWriter23 wrote:
         | Similarly, a con man is doing what he is trained to do and
         | people put false confidence into the con man.
        
       | [deleted]
        
       | m3kw9 wrote:
        | Being dismissive of tech that isn't mostly gimmick is
        | dangerous. Being dismissive of crypto isn't. In the other
        | thread, someone said AI is the beginning of Web 3.0; that made
        | 50x more sense than saying crypto is.
        
       | unethical_ban wrote:
        | Does anyone else have an issue with having to provide a phone
        | number to access it?
       | 
       | I signed up, verified email, and _then_ was told I needed to
       | verify with phone. This means, to me, (lest I read their TOS)
       | that they are associating any queries I make with my identity.
       | 
       | I can't wait for this tech to go open source and local on
       | devices.
        
       | partiallypro wrote:
       | ChatGPT is good for some things, it's not very good for others.
       | If you're writing a paper on a controversial topic, you're going
       | to get a one sided and biased answer; and it will be written like
        | a HS freshman. If you're asking something straightforward,
        | you'll have a better experience. Some people have said they've
        | gotten it to diagnose something, but I've tried and failed at
        | getting it to do such a thing. I do think there is a massive
        | overreaction to its usefulness, but it is a powerful tool to
        | have, nevertheless.
        
       | andrewmcwatters wrote:
       | See also:
       | https://en.wikipedia.org/wiki/Hallucination_(artificial_inte...
        
       | fullsend wrote:
       | Who are these people who see something amazing like this and
       | actually just can't process it? Their brains can't handle it.
        
       | xg15 wrote:
       | > _You don't worry whether it's true or not--because ethical
       | scruples aren't part of your job description._
       | 
       | I wonder if this might hit the core of the matter.
       | 
       | I think it's noteworthy that we use it both for tasks where it
       | should generate fiction ("Tell me a story about a dog in space")
       | and tasks where it should retrieve a factual answer ("What was
       | the name of the first dog in space?").
       | 
       | I wonder if ChatGPT actually distinguishes between those two
       | kinds of tasks at all.
        
       | _huayra_ wrote:
       | I can't tell what is worse now: the sycophantic ChatGPT hype
       | guys/gals who write articles "it's coming for all of our jerbs!",
       | or articles like this one that deliberately misuse ChatGPT and
       | then say "it's overhyped".
       | 
       | They're both missing the point.
       | 
        | Yes, ChatGPT can be tricked and will confidently give wrong
        | answers, but it is still ludicrously useful.
       | 
       | It is basically like having an incredibly smart
       | engineer/scientists/philosopher/etc that can explain things quite
       | well, but for pretty much every field. Does this "person" make
       | mistakes? Can't cite their sources? Yeah this definitely happens
       | (especially the sources thing), but when you're trying to
       | understand something new and complex and you can't get the "gist"
       | of it, ChatGPT does surprisingly well.
       | 
       | I've had it debug broken configs on my server and router (and
       | explain to me why they were broken), help me practice a foreign
       | language I've slowly been forgetting (hint: "I would like to
       | practice $language, so let's have a conversation in $language
       | where you only use very simple words." -> ChatGPT will obey), and
       | help me understand how to use obscure software libraries that
       | don't have much documentation online (e.g. Boost Yap, useful but
       | with a dearth of blog / reddit posts about it).
       | 
       | Does it sometimes goof up? Yep, but it is _such an incredibly
       | useful tool_ nonetheless for the messy process of learning
       | something new.
        
         | feifan wrote:
         | A well-adjusted person should be able to signal some degree of
         | confidence about what they're saying ("I know that...", "I
         | think that...", "Maaaaaaybe...", etc). That's an anspect that
         | ChatGPT doesn't have.
        
           | tester457 wrote:
            | For the API, appending such language is a waste of tokens,
            | and therefore a waste of money. People should use it while
            | knowing it hallucinates all the time.
        
             | kibwen wrote:
             | In other words, everyone should assume that it's lying at
             | all times. Nobody should be using a product that both lies
             | to you and then lies to you about the fact that it's lying
             | to you.
        
               | Karunamon wrote:
               | Stating a falsehood without intent to deceive is just
               | incorrectness, not a "lie". Let's dial the hyperbole back
               | a few notches.
        
               | rhdunn wrote:
               | That would depend on a) how OpenAI are marketing it, and
               | b) what disclaimers/etc. they put in the UI.
        
               | kibwen wrote:
               | If the people running the platform are advertising a
               | product that produces truths while delivering falsehoods
               | and pathetically refusing to label their falsehoods as
               | falsehoods, that's a lie.
        
               | Karunamon wrote:
               | There is a bit right on the chat box above every
               | conversation that says, verbatim, "may occasionally
               | generate incorrect information".
               | 
                | Determining truth is not in the wheelhouse of a
                | language model, and it is not defective for not doing
                | so, in the same way a table saw is not defective for
                | not driving nails. Adjust your expectations
                | accordingly.
        
               | kibwen wrote:
               | _> "may occasionally generate incorrect information"_
               | 
               | The points of this thread are 1) this is a dramatic
               | understatement; ChatGPT's output is not just occasionally
               | incorrect, but usually incorrect in some dimension, and
               | 2) in the absence of any fine-grained confidence value
               | for individual statements, you must pessimistically
               | assume that all statements are incorrect until proven
               | otherwise, which dramatically reduces the utility of the
               | tool compared to how its fanboys portray it as the second
               | coming of Christ.
        
               | Karunamon wrote:
               | The first point is a bald-faced assertion with only
               | anecdotal evidence, the second is a reduction to the
               | absurd. It is absurd because if you uncritically accept
               | the words of anything you read online, without, say,
               | validating them with common sense, your own experience
               | and knowledge, and so forth, the problem is with the
               | reader, not the words.
               | 
               | You and the author of this article are making this false
               | dichotomy where there is no middle ground between
               | "usually incorrect" (which is hyperbolic nonsense and
               | trivially falsified by five minutes of using it), and
               | "always correct" (which even your straw "fanboys" have
               | not done to my knowledge), and then using this dichotomy
               | to set up another one to pretend that the only way to act
               | on information read from a computer screen is either
               | uncritical acceptance or presume that it's bullshit.
               | 
                | Neither of these models is accurate, and neither of
                | them has any relation to how people in the real world
                | generally interact with information online or with
                | ChatGPT.
               | 
               | Furthermore, your insistence on "labeling falsehoods" is
               | not something we can do accurately anyways, let alone in
               | the context of a language model which has no concept of
               | truth or falsehood in the first place! You are asking for
               | something completely unreasonable and I can't tell if
               | you're doing it out of ignorance or bad faith.
        
               | tester457 wrote:
               | It's not lying about the fact that it lies, they have a
               | disclaimer.
               | 
               | It's the user's responsibility to verify truth. This was
               | trained on the internet and everyone knows not to believe
               | everything you see on the internet. This isn't any
               | different.
        
               | kibwen wrote:
               | And you don't see how this makes it useless as a paid
               | product? Truth _is_ the product. An AI chatbot that makes
               | no pretensions to the truth might as well be a Markov
                | chain for all the good it does me. Are people really so
                | blinded by the song and dance of this mechanical monkey
                | that they can't see through the ruse?
        
               | ArjenM wrote:
                | Was looking for the disclaimer comment. They do
                | genuinely mention on their page that it's not fit for
                | consumption, although the same is true of many services
                | we often use.
        
         | bitsnbytes wrote:
         | FWIW I see it the same and have had the same experience.
         | 
          | The tech is there now to be utilized as an assistant.
          | However, as the end user you had better have enough
          | knowledge to understand the solution or answer it spits at
          | you, so you can fact-check it.
          | 
          | The biggest danger I see is novices using the tool for
          | solutions or answers when they don't understand enough to
          | grasp the solution and confirm whether the answer it gives
          | is in fact correct or appropriate.
         | 
          | Simplistic example: A non-techie manager trying to have it
          | create SQL queries to get the information they are asking
          | for is asking for trouble. However, in the hands of a SQL
          | developer it could provide enough of the query build-out to
          | require only minimal tweaks.
        
         | osigurdson wrote:
         | Agree. ChatGPT provides great results while doing quite a bit
         | of dreaming at the same time. The role of the human is to
         | separate dreams from reality.
        
         | chx wrote:
         | > It is basically like having an incredibly smart
         | engineer/scientists/philosopher/etc that can explain things
         | quite well, but for pretty much every field.
         | 
          | No, it cannot explain anything. It's a (stochastic) parrot,
          | nothing more.
         | 
         | > Yep, but it is such an incredibly useful tool
         | 
         | It is not. You are fooling yourself. You are building a castle
         | on a swamp.
        
           | vehemenz wrote:
           | ChatGPT tells me that Disney World's Magic Kingdom was built
           | on a swamp.
           | 
           | It's just a tool. You don't use a shovel to rake leaves, but
           | that doesn't make a shovel any less useful.
        
         | checkyoursudo wrote:
         | > It is basically like having an incredibly smart
         | engineer/scientists/philosopher/etc that can explain things
         | quite well, but for pretty much every field.
         | 
         | I have yet to see a single instance of this with any query that
         | requires even a modicum of expertise. I have asked it a lot of
         | questions about my field. I have asked it to summarize and
         | paraphrase and ELI5 all kinds of shit. Everything it produces
         | is either inane or a regurgitation of the input just with verbs
         | and prepositions changed.
         | 
         | I would like to see a nice example, input and output, of this.
         | I really would.
        
           | shawabawa3 wrote:
           | First thing that came to mind:
           | 
           | > Explain how git works in very simple english
           | 
           | Git is a version control system for software development that
           | helps track changes made to the source code over time. It
           | allows multiple developers to collaborate and make changes to
           | the same codebase without interfering with each other.
           | 
           | When you start a project in Git, you create a "repository"
           | that stores all the files and their changes. You can then
           | "commit" changes to the repository, which is like saving a
           | snapshot of the code at a certain point in time. You can also
           | "push" the changes to a remote repository on a server, so
           | that other people can access and work on the code.
           | 
           | If someone else makes changes to the code, you can "pull" the
           | updated version of the repository to your local machine and
           | "merge" the changes with your own code. Git also allows you
           | to create "branches" of the code, so that you can work on
           | different features or fixes without affecting the main code.
           | 
           | By using Git, you can easily revert back to previous versions
           | of the code, compare changes between different versions, and
           | track who made which changes
        
         | [deleted]
        
         | evo_9 wrote:
         | Yeah, it's fascinating to watch, right?
         | 
         | As others have pointed out -- what did we expect of 'AI'
         | created in our image? Aren't we all just bullshit artists of
         | varying degree?
         | 
          | HN is the perfect place to learn how much of a BS artist you
          | are - I DARE any subject matter experts on here to post on
          | any topic and I guarantee another expert will step in and
          | tell you why you're wrong, or at best, nitpick you apart.
        
         | groby_b wrote:
         | In all the fields I've tried
         | (software/music/epidemiology/geosciences/mathematics) ChatGPT
          | can mostly explain things at the lay level, but even there it
          | often gets things wrong in ways that, without more expert
          | guidance, you will not notice.
         | 
         | Experts in each of these areas were consistently better at
         | giving a correct explanation.
         | 
         | ChatGPT is great when occasionally being wrong doesn't matter.
         | It's still pretty good when you can't be wrong, but you're an
         | expert. But in any "no mistakes, I need to talk to an expert"
         | situation, ChatGPT is setting you up for failure.
         | 
         | That's the key part - the information is
         | unreliable/untrustworthy. As long as that is not a problem,
         | it's awesome.
         | 
         | The problem is that it presents wrong data with absolute
         | confidence in a very convincing voice. (Hence people calling it
         | a con artist). It'll become a huge problem as it makes inroads
         | in mis/disinfo campaigns.
         | 
         | (The usual tech problem. Don't let large scale interests touch
         | it, and you'll be OK)
        
         | BolexNOLA wrote:
         | >but it is still ludicrously useful.
         | 
          | It wrote me a script to quickly convert SRT to VTT files as I
          | edit and prep for uploading online. I haven't done anything
          | akin to "coding" since I took a very basic class in college
          | and messed around with HTML/CSS. It shaves several steps off
          | of every video I do (I'm in a high-volume world here).
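          | 
          | For the curious, the conversion is mostly a header plus a
          | timestamp tweak, so a rough sketch of such a script fits in a
          | few lines of Python (assuming plain UTF-8 subtitles; file
          | names come from the command line):
          | 
          |     import sys
          | 
          |     def srt_to_vtt(srt_path, vtt_path):
          |         # VTT differs from SRT mainly in two ways: it starts
          |         # with a "WEBVTT" header, and its timestamps use "."
          |         # instead of "," before the milliseconds.
          |         out = ["WEBVTT", ""]
          |         with open(srt_path, encoding="utf-8") as f:
          |             for line in f:
          |                 line = line.rstrip("\n")
          |                 if "-->" in line:  # timestamp line
          |                     line = line.replace(",", ".")
          |                 out.append(line)
          |         with open(vtt_path, "w", encoding="utf-8") as f:
          |             f.write("\n".join(out) + "\n")
          | 
          |     if __name__ == "__main__":
          |         srt_to_vtt(sys.argv[1], sys.argv[2])
          | 
          | Run it as: python srt2vtt.py input.srt output.vtt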
        
         | [deleted]
        
         | jacobjjacob wrote:
         | The latter group exists to counter the former. If folks weren't
         | making some of these over-the-top claims, then there wouldn't
         | be the claims to debunk.
        
         | belter wrote:
          | Sometimes I think ChatGPT was just some twisted, treacherous,
          | indirect plot to find a way to finally discredit Jordan
          | Peterson.
        
         | random_upvoter wrote:
          | I use ChatGPT not because it is so great at solving tricky
          | technical questions, but because Google has become so
          | incredibly shitty at it. Google's response these days is ten
          | YouTube videos and a random mix of web posts dating between
          | 2005 and 2023 for a piece of software that I'm using the
          | latest version of. I honestly don't even bother trying
          | anymore.
         | 
         | ChatGPT is a mixed bag. Sometimes it nails it, but I've also
         | seen it hallucinate complete nonsense solutions, for instance
         | in Davinci Fusion using a node type that doesn't even exist in
         | the program.
         | 
         | As for "incredibly intelligent philosopher", I don't even find
         | it an interesting "person" to talk to. Its responses are
         | pedestrian, pedantic and boring. A question like "if God is
         | good then why evil in the world?" is met with a summary of the
         | wikipedia article on the problem of evil. I can read a
         | Wikipedia article, thank you.
        
         | photochemsyn wrote:
          | It is very useful, but there's definitely a constant issue
          | with quality of results. I've found a pretty good workflow,
          | though, involving jumping back and forth between search
          | engine results and ChatGPT results.
         | 
         | Fundamentally, it's:
         | 
         | 1) Ask ChatGPT a series of questions, going from the general
         | down to the more specific. For example, 'Give me a list of
         | options for setting up a headless Raspberry Pi as a Nginx
         | server from scratch using only command line tools'. ChatGPT
         | will produce a decent list of options.
         | 
         | 2) Run that list of options in truncated form, just the
         | keywords, through Google or other search engine that allows for
         | a 'verbatim' search. Set to 50 or more results per page to get
         | a decent spread of results at a glance. Get the more specific
         | information for various options.
         | 
         | 3) Run that more narrowed information back through ChatGPT to
         | see what it produces. At this point you might be pretty close
         | to the correct specific information you need, but generally I
         | run it through search again just to be sure.
         | 
         | For example, this way I was able to quickly discover my
         | headless RPi setup with the latest Raspberry Pi OS wasn't
         | allowing me to ssh into it over wireless because the RPi
         | engineers did away with default user/password settings for
         | security reasons, so I had to manually include the
         | 'userconf.txt' file in the boot directory on the SD card. Also
         | it was very helpful with what needed to go into the
         | wpa_supplicant.conf file.
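          | 
          | For anyone hitting the same wall: the headless setup roughly
          | boils down to dropping three files onto the SD card's boot
          | partition. A sketch of that as a Python script, where the
          | mount point, username, password, and wifi details are all
          | placeholders to replace with your own:
          | 
          |     import crypt
          |     from pathlib import Path
          | 
          |     # Placeholder: wherever the boot partition is mounted.
          |     BOOT = Path("/media/user/bootfs")
          | 
          |     # An empty file named "ssh" enables the ssh server.
          |     (BOOT / "ssh").touch()
          | 
          |     # Newer images ship no default account, so supply one
          |     # as "username:hashed-password" in userconf.txt.
          |     pw = crypt.crypt("changeme",
          |                      crypt.mksalt(crypt.METHOD_SHA512))
          |     (BOOT / "userconf.txt").write_text(f"pi:{pw}\n")
          | 
          |     # Wifi credentials, picked up on first boot.
          |     (BOOT / "wpa_supplicant.conf").write_text(
          |         'country=US\n'
          |         'ctrl_interface=DIR=/var/run/wpa_supplicant '
          |         'GROUP=netdev\n'
          |         'update_config=1\n'
          |         '\n'
          |         'network={\n'
          |         '    ssid="YourNetwork"\n'
          |         '    psk="YourPassword"\n'
          |         '}\n')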
         | 
         | Trying to find stuff like that out by search alone, in the
         | past, would have probably taken at least 10X as long.
         | 
         | I wouldn't ever blindly trust what it suggests, however.
        
         | [deleted]
        
         | pdonis wrote:
         | _> It is basically like having an incredibly smart engineer
         | /scientists/philosopher/etc that can explain things quite well,
         | but for pretty much every field._
         | 
         | No. You've fallen for exactly the con the article is
         | describing.
         | 
         | ChatGPT _knows nothing_. It is very good at generating text
         | that _seems plausible if you know nothing about the subject
         | matter, but is not reliable as an actual source of
         | information_.
         | 
         | If you happened to get information from it that turned out to
         | be ok, you were lucky. Don't expect your luck to last.
        
           | rayval wrote:
           | I was surprised to find ChatGPT does a better job of
           | translating than Google translate, at least for one difficult
           | task I needed to get done.
           | 
           | I have been a Google Translate user for more than a decade. I
           | use it a dozen or more times a day for multiple languages. It
           | is a very useful tool.
           | 
           | I recently had to translate a passage that included this
           | phrase: Liu Liu Da Shun
           | 
           | Google Translate gave me: Liuliu Dashun (basically no
           | translation, just a transliteration or romanization)
           | 
            | Bing Translator: Six Six Dashun (a little bit better
            | because "Liu" does mean six in some contexts)
            | 
            | Baidu: Liu Liu (basically no translation)
            | 
            | Naver: Six or Six Dashun (partial translation, plus an
            | added incorrect particle)
           | 
           | Only ChatGPT gave the correct answer: "Everything going
           | smoothly"
           | 
            | There is a cultural context that is essential to the
            | translation, which apparently only ChatGPT has. It turns
            | out that the number 6 is a lucky number in Chinese. It has
            | the meaning of being "auspicious". In Chinese culture,
            | there is an idiom called "Liu Liu Da Shun", which means
            | "Everything comes in a smooth way" or, more naturally,
            | "Everything going smoothly".
            | 
            | BTW, after seeing the ChatGPT results, I found this
            | contextual info using Google search, on a website:
            | http://www.ichineselearning.com
           | 
           | It is ironic that Google search "knows" (helps you find) the
           | cultural context but Google Translate apparently does not
           | leverage this info.
        
             | autokad wrote:
             | > It turns out that the number 6 is a lucky number in
             | Chinese. It has the meaning of being "auspicious".
             | 
             | isn't that number 8? (not number 6)
        
               | naniwaduni wrote:
               | Cultures can have multiple lucky numbers!
        
             | bo0O0od wrote:
             | Seems far more likely to me that ChatGPT knows the meaning
             | of specific idioms rather than knowing the cultural context
             | of that number and using it in the translation.
        
             | LarryMullins wrote:
             | I wonder, does it get the meaning of "10-4" when
             | translating English to other languages?
        
           | speed_spread wrote:
           | There's lots of real knowledge encoded in ChatGPT and it can
           | recombine and regurgitate it in enough ways that I'd be okay
           | saying that it knows stuff. The real problem is that it
           | doesn't know its limits and will make shit up instead of
           | saying "I don't know". How to reliably detect and enforce
           | these limits probably requires a completely different
           | approach to AI.
        
             | HarHarVeryFunny wrote:
             | I think ultimately any truly intelligent entity needs to
             | have agency - to be able to interact with the world and
             | test truth (prediction correctness) for itself. Without
             | this you're really just an expert system - a bunch of
             | canned data/rules.
             | 
              | The strength of ChatGPT over a GOFAI expert system like
              | Cyc is how broad its knowledge base is, but the
              | weaknesses are that it doesn't know whether this
              | "knowledge" (being self-deduced from the training set)
              | is true or not, nor does it have any reliable way to
              | combine facts/sources, since those rules are also
              | self-derived and unverified. I'm sure some of this could
              | be mitigated by more human curation (which seems to be
              | the path OpenAI is going down), but then it seems it is
              | just an expert system (maybe this is all people are
              | looking for anyway).
        
           | lamontcg wrote:
           | Yeah, ChatGPT is exploiting how bad humans (or at least
           | Americans) are at detecting sociopathic liars and bullshit
           | artists.
           | 
           | It is consistent with how awful our business management class
           | is along with our politics and relationships.
        
           | tester457 wrote:
           | > If you happened to get information from it that turned out
           | to be ok, you were lucky. Don't expect your luck to last.
           | 
              | For creating bash scripts and English-to-code it remains
              | remarkably consistent. No need to be a master of multiple
              | languages' syntaxes now.
        
             | kibwen wrote:
             | Of all the languages one could ask ChatGPT to generate,
              | bash would be nearly the bottom of the list. The number
              | of subtle footguns in bash is through the roof.
        
           | westoncb wrote:
           | > If you happened to get information from it that turned out
           | to be ok, you were lucky.
           | 
           | This is just saying that sometimes ChatGPT is right and
           | sometimes it's wrong, with an implicit stance on the relative
           | frequencies.
           | 
           | In my own experience it would be more accurate to say you're
           | unlucky if it gets it wrong--but then much of this hinges on
           | having a good sense of what it's going to be good at / useful
           | for.
           | 
           | (The other thing I see people miss all the time when
           | assessing its correctness: it is somewhat "on demand"--it may
           | include bits that are vague and only correct-ish, but if you
           | point out an error or request refinement you can generally
           | get it to clear/tighten things up. It's a bit like using a
           | Google Maps-like zooming UI, but in addition to being
           | variably abstract it's always approximating: the _potential_
           | for error is ever present.)
        
             | pdonis wrote:
             | _> This is just saying that sometimes ChatGPT is right and
             | sometimes it 's wrong_
             | 
             | No, it's saying that ChatGPT is not reliable. It's not
             | reliable because getting things right or wrong is not part
             | of its design at all. All it does is generate text that
             | matches a pattern. There is nothing anywhere in it that
             | filters or evaluates text based on any semantic connection
             | with anything else.
        
               | westoncb wrote:
               | > No, it's saying that ChatGPT is not reliable
               | 
               | That's another way of saying sometimes it is right and
               | sometimes it is wrong...
               | 
               | > All it does is generate text that matches a pattern
               | 
                | This is about as meaningful as saying, "everything is
                | just collections of atoms".
               | 
                | We are barely beginning to explore the implications of
                | these pattern-extending structures: their most
                | interesting features _emerge_ from their structure and
                | cannot simply be derived from the fact that they do
                | pattern continuation, in the same way you're not going
                | to derive the concept of friendship from analyzing
                | chemical bond structure.
        
           | Throwaway78389 wrote:
            | How many hours have you spent trying ChatGPT out? I've
            | spent at least high tens, maybe even hundreds. You're
            | absolutely wrong. Yes, it hallucinates; yes, it's wrong
            | about obscure topics - but calling having success with it
            | _luck_ is absolutely wrong. It's very consistently good.
            | Especially about things like programming, physics, math -
            | and now I'm using it as my teaching assistant for my pilot
            | training, and it's perfect (and I can very simply verify
            | the answers are good with my FAA Pilot Handbook, don't
            | worry).
        
             | mrtranscendence wrote:
             | > Especially about things like programming, physics, math
             | 
              | I _routinely_ find ChatGPT giving me completely made-up
              | APIs and incorrect explanations when it comes to
              | programming. And I haven't found it much better with
              | math. Sorry, I don't buy it. Maybe it's good at training
              | pilots, and if so great, but it's wrong enough _for me_
              | that it's hard to trust in general.
        
             | pdonis wrote:
             | _> I can very simply verify the answers are good with my
             | FAA Pilot Handbook_
             | 
             | Thank you for agreeing with my point.
             | 
             | If you need to check the answers with your FAA Pilot
             | Handbook, wouldn't it be simpler just to _read_ the FAA
             | Pilot Handbook? That handbook, unlike the text generated by
             | ChatGPT, was written by a process that was aware of the
             | semantic relationship between the text being written and
             | the things in the world that the text was referring to.
             | That is what makes the FAA Pilot Handbook a reliable source
             | of information about flying. ChatGPT is not any such
              | process. That's why it is _not_ a reliable source of
             | information--as you agree, since you need to check what it
             | says about flying with the FAA Pilot Handbook.
        
               | Throwaway78389 wrote:
               | I don't think I agreed with anything you said.
               | 
               | I work with all available material in many different ways
               | (Anki cards, videos, different explanations of the same
               | thing, etc), and ChatGPT is another way to learn and help
               | me generate learning material. For example I have it ask
               | me questions like a tutor would. Or I ask various
               | questions when I'm unsure about the wider context - e.g.
               | it provides much more about the underlying physics than
               | the Pilot Handbook itself. If I don't understand, I can
               | ask for clarification, or an explanation like I am 5.
               | 
               | Reading the Pilot Handbook is a big part of learning but
               | being limited to it would be hard. I'm very happy about
               | having Chatgpt available.
        
               | hn_throwaway_99 wrote:
               | > If you need to check the answers with your FAA Pilot
               | Handbook, wouldn't it be simpler just to read the FAA
               | Pilot Handbook?
               | 
                |  _No_, absolutely not. It's much easier to verify a
                | couple specific pieces of information that you're unsure
                | of than to go hunting through a large corpus of
                | information trying to pick out the bits that are
                | specific to what you want.
               | 
               | I've used ChatGPT across lots of different areas and I
               | find it incredibly useful. I'm not blindly trusting what
               | it spits out, but it's pretty simple to verify what it's
                | saying. I definitely _do_ have concerns about the
                | impacts of ChatGPT on a societal level, and what will
                | happen when so much computer-generated content can
                | flood the Internet. But, at a personal level, the
                | complaint that ChatGPT "bullshits with confidence" is
                | not really that much of a problem for what I use it
                | for.
               | 
               | Edit: To give a specific, real-world example, there was a
               | post recently about using ChatGPT to replace a SQL
               | Analyst. Now, ChatGPT definitely will and did churn out
               | wrong answers, but it was incredibly useful to use as a
               | starting point for some complex queries. When it fails,
               | it tended to fail in pretty obvious ways, and to the
               | complaint that it can fail in more subtle ways that look
               | correct, I've certainly dealt with tons of human-
               | generated queries that had the same issues. Are those all
               | useless?
        
             | CSMastermind wrote:
             | Perhaps you and the poster are approaching your evaluations
             | from different points of view. I've found that if I set out
             | to break ChatGPT I can very easily do it. If my goal is to
             | look for mistakes or find a failure case it's almost
             | trivial to do so.
             | 
             | At the same time if I'm looking for success I normally find
             | it.
             | 
             | Essentially if you work cooperatively with the tool then
             | you'll find it useful, if you are antagonistic towards it
             | you can also have success in breaking it.
        
               | midasuni wrote:
                | I asked it to write some simple code to do a task. It
                | confidently told me to use a library and use some
                | functions in it.
                | 
                | Couldn't get it to work. Couldn't get any examples out
                | of Google of it being used that way. Eventually I
                | looked through the code of the library and found that
                | while some functions worked in the way ChatGPT was
                | trying, the functions it had selected didn't work,
                | didn't support those arguments, and never had.
        
           | msm_ wrote:
           | I think that's unnecessarily pessimistic view. I have
            | recently started using ChatGPT for mid-hard questions (when
            | I can't get an answer immediately using Google), mostly
            | "how do I" questions. It lies or dreams up fake answers
            | about half of the time, but this still means that 50% of
            | the time I get my solution without digging through obscure
            | manuals.
           | 
           | > It is basically like having an incredibly smart
           | engineer/scientists/philosopher/etc that can explain things
           | quite well, but for pretty much every field.
           | 
           | I like to think about it like asking questions to a well read
           | but dishonest student during an oral exam. They know some
           | things, and try really hard to answer every question, even
           | resort to making things up in hope of getting a passing
           | grade.
        
             | bjhartin wrote:
             | It seems to me the utility of the 'good' 50% depends
             | entirely upon your ability to recognize it.
             | 
             | How do you know when to dig through those obscure manuals?
             | 
             | It seems to me that it'd be more useful when:
             | 
             | a) I don't care whether the text says things which are not
             | true
             | 
             | Write me a poem about ...
             | 
              | b) I care, and will have to verify its results, but that
              | cost is worth it
             | 
             | Generate some test data ...
             | 
              | I think we're at a very dangerous intersection between an
              | apparent decline in Americans' ability to detect nonsense
              | and an automated way to create convincing nonsense.
        
             | jacobmischka wrote:
             | So it's a coinflip whether it's giving you correct
             | information or something completely made up? And now you're
             | not digging through the obscure manuals to actually verify?
             | Seems objectively harmful.
        
               | stale2002 wrote:
               | Actually, that is extremely beneficial.
               | 
                | For example, I now use GPT-3 for every single code
                | debugging problem that I have.
                | 
                | I input the buggy/problematic code into the service,
                | ask it for an answer, and then I take the code and see
                | if it fixes the problem I was having.
                | 
                | It doesn't work half the time, but that's fine. Then I
                | just figure out the bug the normal way.
               | 
               | But the other half the time, it immediately solves my
               | problem, and then I run my code and it works/ does what I
               | want.
        
           | kvetching wrote:
           | So frustrating to see people complain that it provides wrong
           | information. With this logic it can never know anything. It
           | is not made for querying. It's made to complete the text you
           | provide it, with some minor modifications to help it answer
            | questions. It does know the most common patterns of
            | characters on the internet, which implicitly contain
            | knowledge.
        
             | undersuit wrote:
             | Not that I see much evidence of what I'm about to assert in
             | ourselves, but you should be able to correct ChatGPT's
             | knowledge if it knows things and isn't just a fancy parrot.
        
             | pdonis wrote:
             | _> With this logic it can never know anything._
             | 
             | That's correct: ChatGPT does not know anything. That's not
             | what it's built to do.
             | 
             |  _> It does know the most common patterns of characters in
             | the internet which implicitly contain knowledge._
             | 
             | No, it doesn't. It knows patterns of text in its training
             | data, but it knows nothing about the _semantics_ of the
             | text--its relationship to other things in the world, which
             | is what is involved with any text that contains knowledge.
             | That is why ChatGPT is not reliable as a source of
             | information.
        
               | bglazer wrote:
                | I share your skepticism of LLMs' output, but I don't
                | think it's fair to say they know nothing about
                | semantics. I think it's still an open question to what
                | degree LLMs encode a coherent world model. Also you can
                | just ask
               | chatgpt about objects and their relationships and it gets
               | the answer right way more often than you'd expect by
               | chance, so it has some understanding of the world. Not
               | good enough for me to trust it though
        
         | brookst wrote:
         | Thank you. Tons of noise because people are mistaking a
         | statistical tool for a sentient consciousness.
         | 
         | ChatGPT has no more interest in running a confidence game than
         | it does in helping children learn critical reasoning. Which is
         | to say none, zero, zilch in both cases. It has as much volition
         | and desire as a calculator. It's just a super impressive and
         | useful calculator.
        
         | rvz wrote:
        | Well, an AI that can't explain itself transparently is
        | untrustworthy for more serious uses and applications, yet
        | ChatGPT is being celebrated as a great sophist. There is
        | nothing new in this, other than 'train more data on the
        | internet'.
         | 
         | The hype in ChatGPT reminds me of the short-lived hype of
         | Clubhouse; the end result is the same: The bigger players will
         | come in to surpass it, just like they did with GPT-3.
        
           | vehemenz wrote:
           | I wonder. If the bigger players can make an AI good enough to
           | be indistinguishable from human (but not quite), then
           | wouldn't the AI slowly and incestuously train itself to
           | become dumber, with nothing but AI-generated content left to
           | "feed it?"
        
         | sampo wrote:
         | > Can't cite their sources?
         | 
          | Has there been research on neural nets that could cite their
          | sources? In common cases, thousands or millions of items in
          | the training set have contributed to the weights in the
          | pathways that activate for a certain input. But maybe for
          | some inputs, only a small number of items in the training set
          | contributed strongly to the weights on the pathways that are
          | decisive to the output? Could you design a data structure
          | that would keep this information in the neural net?
        
           | etrautmann wrote:
           | I would imagine that citing sources would look more like
           | finding sources that have a minimum distance to the output in
           | a latent space representation? I don't think it'll make sense
           | to track how each connection weight changed over time due to
           | various inputs.
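            | 
            | As a toy sketch of that idea, with a hashed bag-of-words
            | embed() standing in for whatever representation the model
            | actually uses:
            | 
            |     import numpy as np
            | 
            |     def embed(text, dim=64):
            |         # Toy encoder: hash words into a fixed-size vector.
            |         v = np.zeros(dim)
            |         for w in text.lower().split():
            |             v[hash(w) % dim] += 1.0
            |         return v
            | 
            |     def nearest_sources(output, docs, k=3):
            |         # Rank candidate sources by cosine similarity to
            |         # the generated output; top hits become citations.
            |         o = embed(output)
            |         def cos(a, b):
            |             return a @ b / (np.linalg.norm(a) *
            |                             np.linalg.norm(b) + 1e-9)
            |         return sorted(docs, reverse=True,
            |                       key=lambda d: cos(o, embed(d)))[:k]
            | 
            |     docs = ["git tracks changes to source code over time",
            |             "six is considered a lucky number in China",
            |             "wpa_supplicant configures wifi on Linux"]
            |     print(nearest_sources("how does git store code", docs,
            |                           k=1))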
        
           | yellow_postit wrote:
            | I'd have thought the short-lived Galactica from Meta would
            | have demonstrated more capability on this need, given it
            | was explicitly positioned as a tool to help researchers.
           | 
           | I wonder if this is a special case for Explainable Models [1]
           | -- once you select an answer, another model helps describe
           | and source the output.
           | 
           | [1] https://en.m.wikipedia.org/wiki/Explainable_artificial_in
           | tel...
        
           | jesterman wrote:
           | perplexity.ai/ specifically uses web sources to construct
           | natural language.
           | 
           | I believe it's going to be in Bing soon?
        
             | sampo wrote:
             | > perplexity.ai/ specifically uses web sources to construct
             | natural language.
             | 
             | I am going to guess that even though it uses the GPT
             | language model, it works fundamentally differently than
             | ChatGPT.
             | 
             | My guess is, Perplexity AI is a traditional search engine,
             | that first finds a list of links that match your questions.
             | And then it feeds text from those links to the GPT language
             | model to write a summary of the results.
             | 
             | Whereas ChatGPT produces text based only on the trained
             | deep neural net.
             | 
             | Maybe I am guessing wrong how Perplexity AI works. Maybe it
             | works in a different manner.
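              | 
              | If the first guess is right, the pipeline could be as
              | thin as this sketch, where search() and complete() are
              | hypothetical stand-ins for a web search API and the GPT
              | completion API:
              | 
              |     def answer(question, search, complete, k=3):
              |         # Retrieve-then-summarize: a conventional search
              |         # engine finds the documents, and the language
              |         # model only has to summarize them -- with the
              |         # links as ready-made citations.
              |         results = search(question)[:k]
              |         context = "\n\n".join(
              |             f"[{r['url']}]\n{r['text']}"
              |             for r in results)
              |         prompt = ("Answer the question using only the "
              |                   "sources below, citing their URLs."
              |                   f"\n\nSources:\n{context}"
              |                   f"\n\nQuestion: {question}\nAnswer:")
              |         return complete(prompt)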
        
             | mynameisvlad wrote:
             | It still has the exact same issues. I asked it to tell me
             | about "Microsoft layoffs before 2022" and it confidently
             | stated that Microsoft had no layoffs before 2022, with one
             | of its sources even talking about the previous layoffs in
             | 2009.
        
         | karpierz wrote:
         | I don't know that it's an "incredibly smart" anything; I think
         | it'd be better described as incredibly well-read, with poor
         | discretion.
         | 
         | But there's definitely uses for a tool that produces a lot of
         | boilerplate content that a human can then proofread (ex:
         | writing cover letters, filling out unstructured paperwork,
         | etc.)
         | 
         | Edit: I personally underestimated the use-case, because the
         | "more optimal" thing to do when auto-filling out paperwork
         | would be to define a formal data-structure with annotations for
         | each field, and then have a system read annotations and spit
         | out autosuggestions. But this is a lot of overhead.
         | 
         | If you can have a system that takes in arbitrary text, parses
         | it, and fills it out with pretty good guesses, then that can be
         | much more viable than the previous approach, as it doesn't
         | require an overhaul of, say, the way the US bureaucracy
         | generates paperwork. It's all on the consumer-end, the producer
         | doesn't need to cooperate.
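          | 
          | Concretely, the "more optimal" approach would look something
          | like this sketch in Python, with made-up field names and a
          | made-up profile source just for illustration:
          | 
          |     from dataclasses import dataclass, field, fields
          | 
          |     @dataclass
          |     class TaxForm:
          |         # Each field is annotated with where a suggested
          |         # value should come from.
          |         name: str = field(metadata={"source": "legal_name"})
          |         address: str = field(
          |             metadata={"source": "mailing_address"})
          |         tin: str = field(metadata={"source": "tax_id"})
          | 
          |     def autosuggest(form_cls, profile):
          |         # Read the per-field annotations and pull values
          |         # from a stored user profile.
          |         return {f.name: profile.get(f.metadata["source"], "")
          |                 for f in fields(form_cls)}
          | 
          |     profile = {"legal_name": "Jane Doe",
          |                "mailing_address": "123 Main St",
          |                "tax_id": "00-0000000"}
          |     print(autosuggest(TaxForm, profile))
          | 
          | It works, but every form producer has to define and maintain
          | a schema like this, which is exactly the overhead the
          | arbitrary-text approach sidesteps.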
        
           | VBprogrammer wrote:
            | I dunno, I recently learned how the Toyota Synergy Drive
            | system works in detail. Despite being the type of person
            | who'd be interested in that kind of thing, I thought it was
            | not generally well understood; I've only come across one
            | website which described it in detail.
            | 
            | I asked ChatGPT about it and it gave me a fairly surface-
            | level answer. I then asked it for more details and it went
            | into detail far beyond what the average person would be
            | able to tell you. I thought that was quite impressive for a
            | fairly obscure bit of mechanical engineering.
        
             | andrepd wrote:
             | Well it's essentially a (good) search engine then, isn't
             | it? Somebody somewhere wrote that description, and chatgpt
             | understood your query and fetched it.
        
               | dsr_ wrote:
               | A good search engine gives you references you can read,
               | because they exist.
        
             | bamboozled wrote:
              | Because it's really trained off datasets you'd have
              | access to via a search engine. If you couldn't find the
              | info yourself, it's likely just making a lot of the
              | answers up, convincingly.
        
             | karpierz wrote:
             | Out of curiosity, I'd have the following questions:
             | 
             | 1. Would you be able to notice if it made something up,
             | given that it's going beyond the detail that you know?
             | 
             | 2. Did it have any details that aren't covered in:
             | https://en.wikipedia.org/wiki/Hybrid_Synergy_Drive
        
               | hooande wrote:
                | What exactly do you think it's going to make up? "The
                | Hybrid Synergy Drive runs on apples and fairy dust"? I
                | doubt that OP wants to build one of the engines; they
                | just want a concise explanation.
               | 
               | I've seen gpt hallucinate all kinds of things. But very
               | rarely for something like explaining a process or system.
        
               | svachalek wrote:
               | It writes things that sound reasonable in context. So it
               | will likely give something that sounds like a Popular
               | Mechanics article about how some kind of machine works,
               | but it may not really be this particular machine that you
               | are asking about, or any actual machine in existence.
               | That's the tricky thing about it: if it actually said
               | "fairy dust" you'd know you could stop reading, but what
               | it actually writes typically needs some real understanding
               | of the subject to refute.
        
             | webstrand wrote:
             | In my experience from asking it questions about things that
             | I know well, the deeper you get into the weeds with it, the
             | more likely it is to invent some kind of extremely
             | plausible-sounding but incorrect answer. How confident are
             | you that what it told you is actually correct?
        
               | happymellon wrote:
               | So it's a junior dev that reads lots of blogs?
               | 
               | This sounds like a lot of interviews I've conducted over
               | the years.
        
               | tempest_ wrote:
               | That is the one thing I think about when people say "I
               | used it to learn X and it was very good" but when people
               | are learning they are not in a position to judge if
               | what they learned is correct.
               | 
               | A lot of other stuff is just regurgitation of a wiki
               | article.
               | 
               | Still it is very interesting and definitely a step
               | forward.
        
               | beepbooptheory wrote:
               | And then we must remember that wiki articles themselves
               | can also be quite wrong.
        
               | broast wrote:
               | That's what I like to use it for. To come up with
               | plausible answers for questions that people don't have
               | correct answers for yet, and then probe the reasons
               | behind it. It can be very creative and insightful in such
               | uses.
        
               | logifail wrote:
               | > It can be very creative and insightful in such uses
               | 
               | I once worked with a very smart woman who at that time
               | worked in marketing for the airside retail stores at a
               | large international airport.
               | 
               | If anyone from outside her immediate sphere asked her
               | what she did, she would typically say "I tell lies for a
               | living".
        
               | webstrand wrote:
               | Yep, this is how I've been using it too. The responses I
               | get are often good starting points for further
               | investigation. Often it comes up with better terms to
               | find the answer that I am seeking than I had originally.
        
             | swatcoder wrote:
             | That's exactly the wrong sort of thing to turn to ChatGPT
             | for. In probing it for more detail, you're giving it
             | increasingly helpful hints on how to answer your questions
             | in a way that makes you satisfied, regardless of accuracy.
             | That's what it's optimized to do. And because you're
             | working from a humble and curious understanding of the
             | topic yourself, it can and will make up all sorts of things
             | that make you believe it's insightful because you _want_ to
             | believe it.
             | 
             | It's the same technique a stage psychic uses to demonstrate
             | that they knew all about your late uncle and his peculiar
             | love of ducks.
        
               | hansvm wrote:
               | If you're able to validate the results though (do they
               | logically fit together? with that newfound vocabulary and
               | pseudo-understanding, can you find the information
               | elsewhere?), is that really such a bad thing?
               | I've been doing exactly that to quickly ramp up on some
               | new numerical algorithms I previously didn't know
               | existed.
        
               | naniwaduni wrote:
               | That's a really big if.
        
               | hansvm wrote:
               | Why? It's the same standard you need to apply to anything
               | else.
        
               | darkwater wrote:
               | GP stated that they asked ChatGPT about a topic they
               | already were familiar with, from other sources. And that
               | the answer made sense.
        
               | Retric wrote:
               | ChatGPT is optimized to _seem_ to make sense; it's like
               | talking to a world-class con artist, and you won't notice
               | it's talking nonsense. This is subtly dangerous in ways
               | that aren't obvious in the beginning.
               | 
               | Sure, it seems great at writing boilerplate code;
               | unfortunately it's also really good at writing the kind of
               | bugs that you don't notice. This is maddening in a
               | completely novel way, like working with an actively
               | malicious autocomplete.
        
               | 2devnull wrote:
               | You can verify its accuracy. The metaphor of "con
               | artist" is a poor one. It has no adversarial interest. A
               | con artist does. Furthermore, con stands for confidence.
               | A con artist manipulates confidence. Gpt and other
               | language models are "what it says on the tin."
               | 
               | Not understanding the fallibility of LLMs is not the fault
               | of the technology. It's the user's fault.
               | 
               | Not at all like a con artist. More like a tool that can
               | be dangerous in the hands of the uneducated. That is a
               | very large set of human tools, from nuclear reaction to
               | pharmaceuticals to the automobile. Perhaps we just need
               | "drivers ed" for large language models.
        
               | Retric wrote:
               | The problem is that needing to verify accuracy makes it a
               | really bad search engine replacement. A good con artist
               | is generally truthful; they just know how to seem
               | truthful even when they lie, which is exactly the issue
               | with trusting ChatGPT etc. You can't tell when the model
               | is bullshitting unless you already know the truth.
               | 
               | This doesn't make them useless in general, just a poor
               | fit if you can't easily fact check the results.
        
               | je42 wrote:
               | Easy, you use a second source and double check.
               | 
               | In google you would go to another search result.
               | 
               | For ChatGPT you actually need to verify via Google or
               | actually try it when it is code.
        
               | Retric wrote:
               | What's the point of using ChatGPT when searching if you
               | still need to verify the information through another
               | search?
        
               | 2devnull wrote:
               | Good point. I don't think they will be a good search
               | engine replacement, but maybe a powerful supplement for
               | some users. And honestly with SEO and other issues even
               | search engines can be manipulated in ways that an average
               | user might have difficulty detecting.
        
               | huimang wrote:
               | Conversely I asked it some simple questions about some
               | songs' harmonic structures, and music theory, and it got
               | things quite wrong.
               | 
               | There's already an example of it in the article where it
               | incorrectly stated the tritone of F was Bb.
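               | 
               | For reference, a tritone is six semitones, so the tritone
               | of F is B; Bb sits only five semitones above F, a perfect
               | fourth. A quick check in Python, assuming twelve-tone
               | equal temperament:
               | 
               |     NOTES = ["C", "C#", "D", "D#", "E", "F",
               |              "F#", "G", "G#", "A", "A#", "B"]
               |     f = NOTES.index("F")
               |     # A tritone is 6 semitones up; Bb would be only 5.
               |     print(NOTES[(f + 6) % 12])  # -> "B"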
        
               | yellow_postit wrote:
               | To OP's point, what's with the gatekeeping on "right" or
               | "wrong" ways to use it?
               | 
               | It's a tool. It has flaws. It can be useful and if it is,
               | have at it.
        
               | b3morales wrote:
               | If there were a hammer that, held one way, drove nails
               | perfectly in one blow, and held another, made it look
               | like the nail was driven but actually broke it right at
               | the junction so that the work pieces weren't fastened...
               | I'd say that the second way was the wrong way to use that
               | hammer.
        
               | efitz wrote:
               | Not only wrong, but dangerous, because nails are often
               | used to fasten structural elements of houses, and
               | incorrect but hard-to-detect flaws like this could result
               | in collapse.
               | 
               | Similarly, if ChatGPT gives you an answer high in
               | truthiness but low in accuracy, it could negatively
               | impact you, whether loss in credibility if you repeat
               | nonsense in front of someone knowledgeable, or even worse
               | if you use the incorrect knowledge to try to solve a real
               | world problem.
        
               | naniwaduni wrote:
               | It's a tool whose flaw is that sometimes it secretly does
               | the exact opposite of what you're trying to do with it in
               | a way that's only detectable if you didn't already need
               | it.
               | 
               | It's like Babelfish, but for monoglots.
        
               | Retric wrote:
               | The issue is that this approach maximizes its flaws. It's
               | like shooting yourself in the foot with a nail gun; that's
               | objectively the wrong thing to do.
        
             | enragedcacti wrote:
             | > Despite being the type of person who'd be interested in
             | that kind of thing I thought it was not generally well
             | understood
             | 
             | It's an extremely well-understood piece of technology and
             | has been for decades outside of Toyota. Here is a 2005
             | paper that accurately models the fundamental behavior of
             | the Prius HSD [1]. There are dozens of pages that explain
             | it in various detail, including ones with live simulators
             | of the power split device. [2] There are tons of forum
             | threads of amateur Prius owners chatting about the
             | internals.
             | 
             | I don't say this as a dunk, but to point out that a
             | person's perception of chatGPT's capabilities is heavily
             | influenced by their familiarity with the domain.
             | 
             | [1] https://www.engr.uvic.ca/~mech459/Pub_References/015009
             | 79.pd...
             | 
             | [2] http://eahart.com/prius/psd/
        
               | VBprogrammer wrote:
               | I think you and I have different definitions of not
               | generally well understood. I would be impressed if more
               | than 1 in 1000 people could describe it to the level of
               | "it's a planetary gearbox with 2 electric motors used as
               | inputs and outputs to control the effective gear ratio".
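               | 
               | For the curious, the whole trick reduces to the standard
               | planetary-gear constraint (a sketch; the tooth counts are
               | the commonly cited Prius figures, and the function is my
               | own illustration):
               | 
               |     # Planetary constraint: ws*Zs + wr*Zr = wc*(Zs + Zr),
               |     # where Z are tooth counts and w are speeds in rpm.
               |     # Engine drives the carrier, MG1 sits on the sun,
               |     # and the ring gear drives the wheels.
               |     Zs, Zr = 30, 78  # sun / ring tooth counts
               | 
               |     def mg1_rpm(engine_rpm, ring_rpm):
               |         """Solve the constraint for the sun (MG1) speed."""
               |         return (engine_rpm * (Zs + Zr) - ring_rpm * Zr) / Zs
               | 
               |     print(mg1_rpm(2000, 1500))  # -> 3300.0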
        
               | logifail wrote:
               | > I would be impressed if more than 1 in 1000 people
               | could describe it
               | 
               | Q: What use is there in measuring ChatGPT's capabilities
               | against the average person on the street, or against the
               | average user with access to google.com? What does that
               | actually tell us?
               | 
               | Maybe I'm showing my age, but if one wanted to learn or
               | discover something, wouldn't one first typically try and
               | find a reliable source/expert in that field or subject?
        
               | enragedcacti wrote:
               | Ahh, I misunderstood what you meant. In that case, can you
               | explain to me why it's meaningful that chatGPT can
               | sometimes explain things for which there are tons of
               | online resources but that a lot of people aren't familiar
               | with? I legitimately just don't understand why that is an
               | interesting bar to clear, especially given that it can
               | also unpredictably lie about some aspects of the thing
               | it's explaining.
               | 
               | It would perhaps be interesting if it were such a hard
               | topic that only 1 in 1000 could be capable of
               | understanding it, but I would hazard a guess that most
               | people familiar with the prius who don't understand its
               | drivetrain probably don't understand it because they
               | don't care, not because it's too complicated to
               | understand.
        
           | joenot443 wrote:
           | If you can think of a single individual who can demonstrate
           | knowledge on the fly with the same accuracy and breadth as
           | ChatGPT, I think most of us would agree in calling them
           | "incredibly smart."
        
             | HarHarVeryFunny wrote:
             | If ChatGPT was a person, you'd immediately realize
             | something was very wrong with them. Some kind of idiot
             | savant perhaps who'd memorized an encyclopedia but was
             | confused by how it all fit together, or maybe someone
             | hallucinating and delusional, only able to hold coherent
             | thoughts together for brief stretches of time before
             | babbling gibberish that made no sense.
        
           | joe_the_user wrote:
           | Yeah, I think making a decent, nuanced analysis of ChatGPT
           | and related systems (which the gp starts trying to do)
           | requires one to avoid terms like smart/intelligent/etc. The
           | term intelligence is naturally not fully defined and
           | primarily used by humans to rank each other. So our human
           | intuition about intelligence comes from the ranking we do of
           | people. I.e., we have the intuition that "if they can do X,
           | they can easily do Y" about a given person and such an
           | intuition clearly shouldn't carry over to even a very complex
           | machine.
        
           | bnralt wrote:
           | > I don't know that it's an "incredibly smart" anything; I
           | think it'd be better described as incredibly well-read, with
           | poor discretion.
           | 
           | But likely much better-read and with much better discretion
           | than most of the people writing content these days.
           | 
           | Which I think is part of the point that people are missing.
           | ChatGPT's output might not be as good as the best writing,
           | but most of what people read is mediocre content with
           | questionable accuracy. Being better than the majority of the
           | writing being consumed is still a pretty big deal, even if
           | it's not better than all of the writing that's being
           | consumed. Especially since we can expect ChatGPT to get
           | better.
           | 
           | The same is true for AI art. Could you hire a trained artist
           | and work with them to do a better job? Probably, but most
           | people don't have the time and money to pursue such a
           | project.
           | 
           | There are still trained artisans producing high quality
           | physical goods that you can place an order with. But the
           | masses have greatly benefited from the factories that have
           | produced lower quality products that are much more widely
           | available, and available at a moment's notice. It's not wrong
           | to point out that the mass-produced stuff might not have the
           | same quality as artisanal goods. But if you only focus
           | on this aspect you miss the reason why mass production is
           | important.
        
             | idontpost wrote:
             | [dead]
        
             | logifail wrote:
             | > ChatGPT's output might not be as good as the best
             | writing, but most of what people read is mediocre content
             | with questionable accuracy.
             | 
             | I agree with this, but I think we may be missing the point.
             | 
             |  _Why_ are people reading mediocre content with
             | questionable accuracy?
             | 
             | The problem is not: how to quickly and cheaply create
             | content which is marginally less mediocre and marginally
             | more accurate than what we currently have.
        
               | jhawleypeters wrote:
               | > Why are people reading mediocre content with
               | questionable accuracy?
               | 
               | For most people that consume content, accurate
               | information is not a priority. Being entertained,
               | belonging to a group, or satisfying some immediate need
               | or want matter more for most people, most of the time.
               | 
               | It's doubtful that people who consume content will change
               | their wants to prize accuracy (most of the time). Partly,
               | this is because accurate, precise, and/or qualified
               | statements are often boring and hard to follow.
               | 
               | Consider also incentives that drive content creators: 1)
               | the internet means many people can find a platform for
               | many messages (democratization of speech is supposed to
               | be a virtue of the internet) 2) the "freemium"
               | advertising model means that most messages can be
               | financially rewarded, regardless of accuracy
               | 
               | Engagement and financial rewards are much easier to get
               | with lies, drama and controversy than with accuracy,
               | precision, and qualified statements. In my experience,
               | most people cannot find something worth striving for in
               | the modern world, and settle on whatever comes easiest.
               | 
               | > The problem is not: how to quickly and cheaply create
               | content which is marginally less mediocre and marginally
               | more accurate than what we currently have
               | 
               | I'll grant that this is not a problem you're trying to
               | solve. However, this sounds like exactly the arms race
               | that content creators are engaged in until it stops
               | paying.
        
               | logifail wrote:
               | >> The problem is not: how to quickly and cheaply create
               | content which is marginally less mediocre and marginally
               | more accurate than what we currently have
               | 
               | > this sounds like exactly the arms race that content
               | creators are engaged in until it stops paying
               | 
               | I don't think they're trying to achieve these things at
               | all. Look at the broken incentives you raised in the
               | first part of your response, that's what this is about,
               | and all this is about.
               | 
               | Content creators mostly don't care about how mediocre
               | their output is, and they mostly don't care about how
               | accurate it is either, as long as the clicks keep coming.
               | 
               | We don't need (and can't expect) ChatGPT to "fix" this,
               | the underlying problem is the broken incentives, not the
               | actual quality - or lack of it - in the content.
        
               | ChildOfChaos wrote:
               | "Why are people reading mediocre content with
               | questionable accuracy?"
               | 
               | Because it's literally everywhere you look and is very
               | hard to filter due to the huge amount of it. Particularly
               | if you are trying to learn and grow in a certain subject,
               | it's difficult to filter out what you don't know is
               | 'mediocre' or 'questionable', at least not until you've
               | already consumed it.
        
               | bnralt wrote:
               | A lot of it is for entertainment, even when it's
               | infotainment. News reporting, Wikipedia, current affairs
               | podcast, Tweets, Reddit, Hacker News comments, etc. The
               | inaccuracies here aren't that important, because the vast
               | majority of this content is just being consumed for
               | dopamine hits and not actually putting the information to
               | use.
               | 
               | Some of it is out of necessity. I don't think it's
               | controversial to say that many people use Stack Overflow,
               | and though some of the answers there are very good,
               | others aren't. If someone needs help with a specific
               | problem they'll often post a question somewhere (Reddit,
               | Stack Overflow, etc.), and the quality of the answer they
               | get is a roll of the die - even whether they'll get an
               | answer at all.
               | 
               | Some of it is because blog spam has made traditional
               | searches much more difficult. There's another comment
               | that says that there are comments better than Reddit and
               | Hacker News, but if you go into a discussion about Google
               | you'll find that a lot of people have been doing things
               | such as appending "reddit.com" to searches because
               | otherwise they'll drown in a mountain of blog spam.
        
         | braingenious wrote:
         | > It is basically like having an incredibly smart
         | engineer/scientists/philosopher/etc
         | 
         | It is a copy/paste machine that strings together words to make
         | sentences that look like sentences it's read.
         | 
         | So it's like an incredibly smart engineer/scientist/philosopher
         | that does not and fundamentally cannot understand any of those
         | concepts (or any concepts at all.)
         | 
         | A parrot that can recite Macbeth's monologue is not a
         | Shakespeare scholar.
        
           | thaw13579 wrote:
           | I asked it to explain some very technical medical testing
           | procedures to me "like I was five years old". It completely
           | nailed it by truly reducing it to its essence and using
           | appropriate metaphors that a child would understand (coloring
           | books). Because of how niche the topic is, I'm fairly sure no
           | such text exists in the training corpus.
           | 
           | I'm not sure whether this constitutes "understanding", since
           | that term usually applies to people's cognitive states;
           | however, if a student provided this answer on an exam, they
           | would certainly convince the examiner and pass the test!
        
           | [deleted]
        
         | thuuuomas wrote:
         | "Sycophantic" to whom?
        
         | endisneigh wrote:
         | > It is basically like having an incredibly smart
         | engineer/scientists/philosopher/etc that can explain things
         | quite well, but for pretty much every field. Does this "person"
         | make mistakes? Can't cite their sources? Yeah this definitely
         | happens (especially the sources thing), but when you're trying
         | to understand something new and complex and you can't get the
         | "gist" of it, ChatGPT does surprisingly well.
         | 
         | if you don't "understand something new", how would you know to
         | trust what it says or not? the usefulness or not has to be
         | based on something, not feelings.
        
           | stevenhuang wrote:
           | By having a conversation with it. Ask probing questions.
           | Verify later with wikipedia.
           | 
            | For instance I asked it about spacetime metrics, and I
            | learned about the varying types and in what circumstances
            | each is used. Ask it about their mathematical properties and
           | it gives a succinct explanation contrasting the different
           | approaches. Certainly you can see the value now of
           | conversational style pedagogy.
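            | 
            | To make "varying types" concrete (my illustration, not the
            | model's output): the flat Minkowski metric,
            | 
            |     ds^2 = -c^2 dt^2 + dx^2 + dy^2 + dz^2
            | 
            | versus the Schwarzschild metric used outside a spherical
            | mass,
            | 
            |     ds^2 = -(1 - 2GM/(r c^2)) c^2 dt^2
            |            + dr^2 / (1 - 2GM/(r c^2))
            |            + r^2 (dtheta^2 + sin(theta)^2 dphi^2)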
           | 
           | Think of it like learning about things from mates at the bar.
           | In the thick of it there may be bits of inaccuracy, but
           | you'll certainly know more about a topic than before. Then
           | you can verify bits of things later in papers or books, etc.
        
             | endisneigh wrote:
             | seems like a waste of time if you're going to have to
             | verify with Wikipedia anyway. personally I'd prefer to read
             | wikipedia first.
        
           | jchw wrote:
           | That goes for pretty much any resource; there's a lot of
           | botched or sub-optimal explanations of complex things out on
           | the internet. But being able to have a conversation with
           | someone who has a decent conceptual grasp on something is
           | better than a perfect encyclopedic article alone, and ChatGPT
           | is able to roughly do that for a lot of subjects.
           | 
           | As for how you know it's accurate, you don't _really_ have to
            | know it's accurate most of the time, because if you're using
           | it to try to learn something new it's probably going to
           | become apparent quickly enough. Nobody is reading something
           | and then going "Welp, now I'm done learning."
           | 
           | As a software example, you can ask it about implementing the
           | Fourier transform. If it says something wrong, you'll find
           | out.
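            | 
            | As a sketch of that kind of check (mine, not ChatGPT output):
            | implement the naive DFT an assistant might suggest, then test
            | it on a signal whose spectrum you already know.
            | 
            |     import cmath, math
            | 
            |     def dft(x):
            |         n = len(x)
            |         return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
            |                     for t in range(n)) for k in range(n)]
            | 
            |     # A one-cycle cosine over 8 samples must put all of its
            |     # energy in bins 1 and 7; anything else means a bug.
            |     x = [math.cos(2 * math.pi * t / 8) for t in range(8)]
            |     print([round(abs(c), 6) for c in dft(x)])
            |     # -> [0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.0]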
           | 
           | The question then becomes _how often_ it is inaccurate and
            | how badly. But in that regard, as long as you don't treat it
           | as an all-knowing oracle, it's at least as useful as having a
           | friend that knows a lot of stuff you want to learn.
        
             | endisneigh wrote:
             | > That goes for pretty much any resource; there's a lot of
             | botched or sub-optimal explanations of complex things out
             | on the internet. But being able to have a conversation with
             | someone who has a decent conceptual grasp on something is
             | better than a perfect encyclopedic article alone, and
             | ChatGPT is able to roughly do that for a lot of subjects.
             | 
             | people keep saying this, but it isn't true. if you ignore
             | sites like this one and reddit there are plenty of
             | authoritative articles and explanations about things with
             | provenance. and if they are wrong they'll update their
             | information.
        
               | jchw wrote:
               | Clearly you are of the opinion that ChatGPT is useless,
               | there are superior resources already available for any
               | topic, and it's all just hype.
               | 
               | Well, then we don't need to argue this since the problem
               | will elegantly solve itself if that's true.
               | 
               | I disagree though. For a lot of things it feels like I
               | can get much better answers than Google, especially when
               | it comes to somewhat conceptual questions.
               | 
               | (Also, I don't use news aggregator comments to learn
               | things unless it's the only possible source. But if you
               | think every blog post or YouTube video that got things
               | wrong has a detailed errata, you'd be sorely wrong. It's
               | so uncommon on YouTube that the practice is usually
               | commended when noticed.)
        
               | endisneigh wrote:
               | > Clearly you are of the opinion that ChatGPT is useless,
               | there are superior resources already available for any
               | topic, and it's all just hype. Well, then we don't need
               | to argue this since the problem will elegantly solve
               | itself if that's true.
               | 
               | > I disagree though. For a lot of things it feels like I
               | can get much better answers than Google, especially when
               | it comes to somewhat conceptual questions.
               | 
               | > (Also, I don't use news aggregator comments to learn
               | things unless it's the only possible source. But if you
               | think every blog post or YouTube video that got things
               | wrong has a detailed errata, you'd be sorely wrong. It's
               | so uncommon on YouTube that the practice is usually
               | commended when noticed.)
               | 
               | ---
               | 
               | I never said ChatGPT is useless, lol. it's truly amazing
               | how people can be so bad at reading comprehension yet
               | praise AI bots in the same post.
        
               | jchw wrote:
               | Ironically, the direct quote you just copied and pasted
               | actually shows that I never claimed you explicitly _said_
               | that at all. So much for reading comprehension :P
               | 
               | But in all seriousness, I just simply didn't have much to
               | go off of, and extrapolated based on context in the
               | thread. You could always elaborate on what your opinions
               | are.
        
           | Jimmc414 wrote:
           | Compilers don't lie.
        
         | Julesman wrote:
         | Well said. Exactly. So much hyperbole! So much click bait. It
         | makes me dizzy.
         | 
         | I'm a front-end dev. I use it every day. I have CoPilot too.
         | These things help me do my job. No question. They also help me
         | learn better than just about any other tool.
         | 
         | The big one for me is that they have made me more confident
         | taking on bigger tasks. The impact is not trivial at all.
        
         | pjc50 wrote:
         | There's a significant epistemological problem: how do you know
         | that what it's telling you is true?
         | 
         | For things that you yourself can verify easily, this isn't too
         | much of a problem. It gives you some boost tips; you see
         | whether the program works or not. But as you come to rely on
         | it, and it matters whether the answers are correct in important
         | ways, what validation/verification/QA steps can you apply to
         | ensure that you're getting the correct answer?
         | 
         | I'm reminded of the FDIV bug
         | https://en.wikipedia.org/wiki/Pentium_FDIV_bug , where people
         | found that their CPU gave mathematically incorrect answers for
         | certain inputs.
         | 
         | I'm also thinking of a concept of "responsibility". If you ask
         | a human for tips on, say, building a high power laser, or
         | making explosives, they might decide whether they want to give
         | you that information, and they might decide to give you some
         | other information along with it ("wear goggles") for your own
         | safety.
         | 
         | Who is liable if an AI gives someone advice, they act on that
         | advice, and serious injury or property damage results? If the
         | AI (or its operator) has no responsibility at all, is that
         | disclaimer adequately communicated to users?
        
           | galleywest200 wrote:
           | I use it like Wikipedia: a good jumping-off point. Oftentimes
           | I will forget an exact thing I wish to find online, so I
           | will ask ChatGPT to list multiple things that could be what I
           | just described. I then take this list to my search engine du
           | jour.
        
             | bnralt wrote:
             | Good point about Wikipedia. In just a few years the image
             | people had of Wikipedia changed from an untrustworthy joke
             | (The Office had a joke about Michael Scott trusting
             | Wikipedia) to one of the first places people went to get
             | information.
             | 
             | Before that, there was a lot of concern about blogs when
             | they were the hot new thing. Here's a New York Times
             | article[1] from that period discussing the blogging
             | controversy:
             | 
             | > There are two obvious differences between bloggers and
             | the traditional press: unlike bloggers, professional
             | journalists have a) editors and b) the need to maintain a
             | professional reputation so that sources will continue to
             | talk to them. I've been a journalist for more than a
             | decade, and on two occasions I asked acquaintances whether
             | I could print information that they had told me in social
             | situations. Both times, they made clear that if I published
             | they would never speak to me again. Without a reputation
             | for trustworthiness, neither friendship nor journalism can
             | be sustained over time.
             | 
             | The concerns weren't incorrect - Wikipedia or Blogs are
             | often unreliable. But the concerns overlooked how useful
             | these things could be despite their problems.
             | 
             | As an aside, it's interesting to see comments here bring up
             | the potential unreliability of ChatGPT - and then say that
             | people should go read Wikipedia instead.
             | 
             | [1] https://www.nytimes.com/2004/12/19/magazine/your-blog-
             | or-min...
        
               | mhink wrote:
               | > Good point about Wikipedia. In just a few years the
               | image people had of Wikipedia changed from an
               | untrustworthy joke (The Office had a joke about Michael
               | Scott trusting Wikipedia) to one of the first places
               | people went to get information.
               | 
               | > As an aside, it's interesting to see comments here
               | bring up the potential unreliability of ChatGPT - and
               | then say that people should go read Wikipedia instead.
               | 
               | On the other hand, Wikipedia's image changed largely due
               | to the transparency behind its editorial processes - both
               | the edit history and the discussions regarding editorial
               | decisions are available in public. Are there pages where
               | it's important to take the information presented with a
               | grain of salt? Sure, but I have FAR more metadata
               | available to help me discern how reliable the article may
               | be.
               | 
               | And that's kinda the difference- having a way to figure
               | out the source of the information. I was in high school
               | when Wikipedia really took off, and although there were
               | more than a few kids who got caught copying their essays
               | from Wikipedia (and a few more who got chastised for the
               | lesser sin of trying to cite it as a source), the vast
               | majority of folks were perfectly capable of realizing
               | that all you needed to do was scroll down to the list of
               | citations and work from there.
               | 
               | With all that in mind, the contrast with ChatGPT should
               | be pretty clear. You can ask it to explain a topic, and
               | it'll probably do a decent job (or appear to do so) but
               | the process by which it _produced_ that output is
               | extremely opaque. And (as has been noted in other
               | threads) OpenAI hasn't been particularly forthcoming
               | about how they influence its responses, which is pretty
               | ironic considering their name.
        
               | bnralt wrote:
               | > On the other hand, Wikipedia's image changed largely
               | due to the transparency behind its editorial processes -
               | both the edit history and the discussions regarding
               | editorial decisions are available in public.
               | 
               | That was still true early on when people were making fun
               | of the idea. Like with many things, the public perception
               | changed when it got popular and the people who had been
               | laughing at it realized they also had a use for it.
               | 
               | I also can't agree that the production of Wikipedia
               | articles isn't opaque. In fact, I would say that the
               | perception of reliability of Wikipedia (as noted in many
               | of these comments) makes it more likely to lead people
               | astray.
               | 
               | Do Wikipedia articles often list sources? Sure. Do most
               | people not bother checking the sources, and simply rely
               | on a third-party to accurately summarize them? Also true.
               | Do the few who actually check the sources often find it
               | impossible, because the source is simply an academic book
               | that most people don't have available? True as well. Even
               | if it were available to them, the source is often (though
               | not always) given as the entire book, so tracking down the
               | source for a particular claim means you're searching for
               | a needle in a haystack (and that's for just one claim).
               | 
               | I say this as someone who's actually interested in the
               | sources for the claims in Wikipedia articles, and I have
               | spent a lot of time trying to track them down. Often to
               | no avail (and when I do manage to track them down, it's
               | not uncommon to find the summary on Wikipedia
               | misleading).
               | 
               | For instance, it takes me all of 30 seconds to write:
               | 
               | > Lincoln was actually in favor of subdividing Texas into
               | four different states, and had the support of Congress to
               | do so. It probably would have happened if he had lived.
               | Source: Harris, William C. (2011). Lincoln and the Border
               | States: Preserving the Union. Lawrence, Kansas:
               | University Press of Kansas.
               | 
               | The source exists, but I have never read the book. Almost
               | no one that reads this comment has easy access to the
               | book, and even if they did I doubt anyone would go
               | through it to fact check me (and by the time anyone did,
               | this conversation would have been long over).
               | 
               | Does ChatGPT sometimes give inaccurate answers?
               | Certainly. But at the moment more of its users seem to be
               | more aware of the potential issues with the platform than
               | the users of Wikipedia, AskHistorians, The New York
               | Times, etc. are.
        
               | LarryMullins wrote:
               | For nearly as long as Wikipedia has existed, there have
               | been many people who trust it and about as many people
               | who deride those who trust it.
        
         | Obscurity4340 wrote:
         | It's honestly astounding how coherent and, dare I say, eloquent
         | it can be. From factual "Google"-type inquiries to jokes to
         | whatever. I particularly enjoy the poetry it generates in
         | defined formats like limericks, sonnets, etc. Yet to encounter
         | any actual "bullshit" that it is frequently accused of
         | (although I tend to ask closed-type questions or provide very
         | specific parameters with which to constrain its output)
        
         | koprulusector wrote:
         | What people also seem to forget (if they knew in the first
         | place) is how bad GPT3 was compared to GPT4/ChatGPT. I
         | subscribe to SubSimulatorGPT3 on Reddit and it generates tons
         | of text that looks and feels like English, for example, but is
         | devoid of any real meaning. Every sentence, every paragraph I
         | read from GPT3 has me questioning whether or not I've had a
         | stroke.
         | 
         | Now, compare that to your comment, and it's INSANE how much the
         | capability (and the conversation) has shifted.
         | 
         | GPT3 was what, three or four years ago? What will we be
         | debating, what will the capabilities be, in another three or
         | four years?
        
           | chimprich wrote:
           | GPT4 hasn't been released yet. I think you're probably
           | thinking of GPT3.5.
        
         | autokad wrote:
         | I agree, it's incredibly useful if you know how to use it. I
         | used it to make sentences more succinct for emails to managers
         | (no sensitive data, of course), I used it to turn my dreams
         | into short stories, gather ideas for research, etc.
         | 
         | People bashing on ChatGPT just don't know how to use it. Is
         | there going to be an example in the world where it just
         | plagiarizes Bob Dylan? Sure... so what? Are there going to be
         | instances where it's not useful? Duh.
         | 
         | But if you don't like it, that's fine. Not every tool is for
         | everyone. I find it very useful for certain things, and I
         | imagine it's only going to improve from here.
        
         | up2isomorphism wrote:
         | None of the coding questions it answers is close to usable,
         | and I need to spend more time correcting it than just writing
         | it on my own. So it is just not "incredibly smart".
        
         | mrstone wrote:
         | I agree. I regularly use ChatGPT to help reword paragraphs to
         | be more succinct. I rarely ask it for actual information that
         | it has to look up. At its core, it's an LLM which models how
         | language is structured, and that's what it's good at.
        
           | shagie wrote:
            | It has an impressive amount of understanding about language.
            | There was a meme a bit ago about "Everyone on my floor is
            | coding" and the different reactions of a software developer
            | and a doctor. So I asked for jokes about a doctor and a
            | software developer playing on the different meanings of the
            | words they use:
            | 
            |     Write five jokes about a doctor and a programmer about
            |     the different meanings of words they use.
            | 
            | To which GPT responded:
            | 
            |     1. Doctor: I'm sending you a prescription.
            |        Programmer: Great! I'll take a look at it and see if
            |        I can debug it.
            | 
            |     2. Doctor: I need you to take two aspirin.
            |        Programmer: Is that enough to solve the problem?
            | 
            |     3. Doctor: I'm going to give you some instructions.
            |        Programmer: Do I need to compile them or can I just
            |        run them?
            | 
            |     4. Doctor: I need you to take a break from work.
            |        Programmer: Does that mean I should pause the program
            |        or stop it completely?
            | 
            |     5. Doctor: I need you to stay hydrated.
            |        Programmer: Does that mean I should store more data
            |        or drink more water?
            | 
            | I will certainly admit that those jokes are awful, but the
            | interesting part is that... it did it. I think that 3, 4 and
            | 5 are the "best".
            | 
            | I don't think it _understands_ those things with the ability
            | to reason about them, but that demonstrates more than I
            | would expect from a Chinese room.
            | 
            |     Explain the joke "Doctor: I need you to stay hydrated.
            |     Programmer: Does that mean I should store more data or
            |     drink more water?"
            | 
            | returns back:
            | 
            |     The joke is a play on the terms "hydrate" and "store
            |     data". In programming, "storing data" is the process of
            |     saving information to a computer's memory, while
            |     "hydrating" means drinking water to stay hydrated. The
            |     joke is implying that the programmer is more comfortable
            |     with programming than with taking care of their own
            |     health, so they interpret the doctor's advice to stay
            |     hydrated as a suggestion to store more data.
        
         | williamcotton wrote:
         | All of this seems eerily similar to Wittgenstein's use theory
         | of meaning:
         | 
         | https://en.wikipedia.org/wiki/Use_theory_of_meaning
         | 
         | From wikipedia: _The Investigations deals largely with the
         | difficulties of language and meaning. Wittgenstein viewed the
         | tools of language as being fundamentally simple and he believed
         | that philosophers had obscured this simplicity by misusing
         | language and by asking meaningless questions. He attempted in
         | the Investigations to make things clear: "Der Fliege den Ausweg
         | aus dem Fliegenglas zeigen"--to show the fly the way out of the
         | fly bottle._
         | 
         | "The question of whether a computer can think is no more
         | interesting than the question of whether a submarine can swim."
         | - Edsger W. Dijkstra
        
         | soperj wrote:
         | > It is basically like having an incredibly smart
         | engineer/scientists/philosopher/etc that can explain things
         | quite well
         | 
         | This is the issue. It's more like the dumbest guy in the room
         | confidently convincing everyone else that they know what
         | they're talking about.
        
           | mokus wrote:
           | I think the bigger point is that it's like a guy in the room.
           | ChatGPT shows that the industry is graduating from Turing
           | test territory to IQ test territory, and a lot of people are
           | really hung up on the fact it's failing the latter.
        
           | notahacker wrote:
           | Yup.
           | 
           | As a pattern matching engine it's extraordinarily good. Ask
           | it about the implications of some obscure theory and it will
           | _usually_ correctly interpret your question and return some
           | sentences summarising the current state of knowledge on that
           | theory. Ask it to explain it in simpler terms and it will
           | simplify the sentence structure, swap some words and
           | _usually_ retain meaning.
           | 
           | But then... ask it for some reading material on the same
           | obscure theory and it'll use its knowledge of scientific
           | papers on those theories and academic citation to return a
           | well formatted list of papers with plausible titles and
           | authors with the significant flaw that none of these papers
           | exist. The dumbest guy in the room would struggle to make up
           | plausible sounding scientific papers even if you gave them
           | the reading list, but then the dumbest person in the room
           | wouldn't be dumb enough to use the reading list to make up
           | some nonexistent papers in the first place! (and ironically
           | nor would the dumbest information retrieval algorithm in the
           | room...)
        
             | [deleted]
        
           | zdw wrote:
           | We've entered the Dunning-Kruger effect phase of the AI
           | revolution.
        
         | [deleted]
        
         | xiphias2 wrote:
         | I had a problem with the citations as well (you have to ask
         | separate questions, then google them).
         | 
         | It would be a huge improvement if it were trained to provide
         | references by default.
        
         | nathan_compton wrote:
         | "It is basically like having an incredibly smart
         | engineer/scientists/philosopher/etc that can explain things
         | quite well, but for pretty much every field."
         | 
         | I don't know how to tell you this without being a little rude,
         | but if you think ChatGPT is incredibly smart or that it can
         | tell you meaningful things about "any field," well, there is a
         | difference in intelligence somewhere in the equation between
         | you and ChatGPT, but it's not the one you think.
        
         | saiya-jin wrote:
         | I can see tons of jobs disappearing even at the current level.
         | The web these days is filled with mediocre articles, and quite a
         | few bureaucratic jobs can and will be replaced by some next
         | iteration.
         | 
         | Instead of an army of mediocre journalists filling web portals,
         | you just need 1 person per business to proofread it, maybe not
         | even that. I can see the same for software dev in less than a
         | decade at the current pace: 1 senior going over generated code
         | and doing minor tweaks. That is probably a road to hell long
         | term, since how will juniors become seniors? But the cost
         | savings can be real and massive, the drop in quality small if
         | any, and it may be good enough for many.
         | 
         | It has very good language skills: in my obscure and rather
         | difficult native language, which only ca. 6 million people
         | use, it formulated perfect sentences consistently. I can
         | easily imagine it faking social, even love, chats. This can
         | help lonely or elderly people (with good speech
         | recognition/generation, since the language skill itself is
         | there already). With image/video generation, the porn industry
         | will be steamrolled: 24/7 streams of anything you can imagine,
         | looking realistic enough, tailored just to you.
         | 
         | Or just help with learning foreign languages. Heck, a few
         | months ago a guy posted here on HN just a smart chat that
         | could be used for practice, and got a standing ovation (also
         | from me); this has that _and_ everything else.
         | 
         | I understand google's reaction: instead of browsing through
         | paid entries, it gives me the answer directly, no need to click
         | through search and then wikipedia and then parse texts. At
         | least in some cases. This could help google massively if they
         | could somehow do it on their scale.
         | 
         | People trying to find errors in it are missing the point now -
         | it's not about academic perfection, but about good-enough,
         | almost for free. Also, this is not the final product, only the
         | 4th alpha/beta/whatever; previous ones were much worse. Imagine
         | 10 or 20 years down the road...
        
         | choppaface wrote:
         | What's worse, too much hype, or the non-believers?
         | 
         | Well, the OpenAI hype squad is racking up personal fortunes
         | from their work in a funding deal that is pretty clearly over-
         | valued. Especially versus what Deepmind sold for [1].
         | 
         | While perhaps both sides are not being honest, YC's own sama is
         | clearly profiting excessively off the hubbub. He's clearly
         | proving to be a symbol of the faction of YC that rakes in money
         | through _arbitrage_, not prolonged value.
         | 
         | [1] https://techcrunch.com/2014/01/26/google-deepmind/
        
         | freejazz wrote:
         | >"It is basically like having an incredibly smart
         | engineer/scientists/philosopher/etc that can explain things
         | quite well, but for pretty much every field."
         | 
         | LOL where do you get this notion? I haven't seen one chatGPT
         | bot useful for law. They all have no idea what they are doing
         | and are basically dangerous if you don't know better than the
         | LLM. There is literally nothing chatGPT can do for me as an
         | attorney that isn't already solved by using a template.
        
         | weinzierl wrote:
         | > _" It is basically like having an incredibly smart
         | engineer/scientists/philosopher/etc that can explain things
         | quite well, but for pretty much every field. Does this "person"
         | make mistakes?_"
         | 
         | Does this person stubbornly insist on being right, even if they
         | are wrong?
         | 
         | Likely, that's just how humans usually behave - and ChatGPT
         | does the same.
         | 
         | At least that's my thinking when people complain about
         | ChatGPT being a know-it-all bluffer.
        
           | Karunamon wrote:
           | ChatGPT is pretty amenable to being corrected if it says
           | something you know is crap. And then, in my experience, you
           | usually get better results.
           | 
           | That is not something I can say for most humans.
        
             | godelski wrote:
             | Only if you correct a wrong answer. If you "correct" a
             | right answer ("I don't think that's right") then it gives
             | an even worse answer. So be careful when correcting.
        
           | godelski wrote:
           | > Does this person stubbornly insist on being right, even if
           | they are wrong?
           | 
           | Actually ChatGPT will almost always change tune if you tell
           | it that it is wrong. That's because it is a stochastic parrot
           | and that's an "unlikely" event if you actually got the answer
           | right. But we'll see if this gets updated/broken too.
        
             | weinzierl wrote:
             | I've seen ChatGPT change tune but I've also seen it squirm
             | plenty. Overall I think it behaves uncannily human in this
             | regard.
        
         | tiagod wrote:
         | ChatGPT/GPT fails a lot when you probe its knowledge, but I
         | find it works really well when you give it an explanation of
         | the whole domain of the problem, then something to do with it.
         | 
         | For example, you can give it examples of some programming
         | language, or an API, plus natural language explaining what you
         | want to do with it (or code in another language), and it will
         | translate this with remarkable accuracy, zero-shot. We've never
         | had anything that's even close to this capability.
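         | 
         | A sketch of that "domain first, task second" prompt shape (the
         | mini-language below is made up purely for illustration):
         | 
         |     prompt = """Here are examples of a small job-scheduling
         |     language:
         |         every 5m run "backup"
         |         at 02:00 run "nightly-report"
         | 
         |     Translate the following into that language:
         |     "run cleanup every half hour, and send the digest at seven
         |     in the morning"
         |     """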
        
         | danjoredd wrote:
         | My problem is the type of person who proudly proclaims that
         | all jobs can, should, and are going to be made obsolete by AI.
         | I have talked to people who believe that, and who think that
         | AI needs to replace humanity for some reason. I don't
         | understand them.
         | 
         | AI is exciting, and it can be a useful tool. There are a lot
         | of use cases for it, and in the end I think it will be a good
         | thing. However, what's worse than the detractors are the people
         | blindly hoping that AI will automate every single thing a human
         | touches. It shouldn't be used in that way. It has to be a tool
         | to assist humans, not take away our professions and reasons to
         | get out of bed in the morning.
        
         | guenthert wrote:
         | > Does this "person" make mistakes? Can't cite their sources?
         | 
         | Making citations up is not a "mistake", it's a lie. I have
         | absolutely no use for a tool which lies to me, especially not
         | when troubleshooting computer systems.
        
         | b3morales wrote:
         | > help me practice a foreign language I've slowly been
         | forgetting
         | 
         | So with the software stuff you can pretty easily verify the
         | output: either the suggestion fixes the problem or it doesn't.
         | But how can you trust it for something like this where you
         | can't distinguish what's good and bad output? It could be
         | leading you down the garden path with this language exercise.
         | And it's not in the helpful thinking tool/bouncing ideas around
         | category, either: there are rules of grammar that you don't
         | know, and you don't know whether ChatGPT knows them either.
        
       | [deleted]
        
       | MagicMoonlight wrote:
       | It's a very good demonstration of how powerful artificial
       | intelligence will be. When we truly get that, it will be the new
       | dominant species.
       | 
       | But it's just not intelligent. There's no thoughts there. They've
       | just brute-forced a really big Markov chain. You need a
       | completely different approach to get true intelligence.
        
       | williamcotton wrote:
       | I'm finding the analytic-synthetic distinction to be somewhat
       | useful, even if it veers in important ways from how these terms
       | were defined and used by Kant/Frege/Quine, etc.
       | 
       | Roughly, if the prompt is "analytic", that is, it contains all
       | the necessary facts for the expected output, then the tool is
       | much more reliable.
       | 
       | If the prompt is "synthetic", that is, it is contingent on
       | outside facts, then the tool is much less reliable.
        
         | petra wrote:
         | What does "having all the necessary facts" mean?
        
           | williamcotton wrote:
           | Oh, for example:
           | 
           | Me: "Hey, can you help me turn a baseball box score into an
           | entertaining outline of the game?"
           | 
           | GPT: Sure, I'd be happy to help you turn a baseball box score
           | into a summary of the game! Can you provide the box score for
           | me to use as reference?
           | 
           | Me: Saturday, October 26, 1991 Hubert H. Humphrey Metrodome,
           | Minneapolis, Minnesota
           | 
           | Atlanta 0 0 0 0 2 0 1 0 0 0 0 - 3 9 1 Minnesota 2 0 0 0 1 0 0
           | 0 0 0 1 - 4 9 0
           | 
           | [edited for size, but it has the whole box score]
           | 
           | GPT: In a close match-up between the Atlanta Braves and the
           | Minnesota Twins on October 26, 1991, the Twins managed to eke
           | out a win with a 4-3 score. The Twins got off to a quick
           | start with two runs in the first inning, but the Braves soon
           | answered back with a two-run homer in the fifth inning by
           | Terry Pendleton. Despite putting up a strong defense, the
           | Twins couldn't secure the win until the 11th inning, when
           | Kirby Puckett hit a game-winning home run. The Twins' defense
           | was strong throughout the game, only allowing three runs and
           | keeping the Braves' runners to seven left on base. Meanwhile,
           | the Twins' offense was equally impressive, with Puckett
           | leading the way with three RBIs and three hits, including his
           | game-winning home run.
           | 
           | ---
           | 
           | All of the necessary facts about the game are contained in
           | the box score. The entertaining outline of the game
           | references knowledge about the game contained in the box
           | score.
        
       | macawfish wrote:
       | All this buzz around ChatGPT is really people finally realizing
       | that transformers exist.
        
       | jtbayly wrote:
       | That last tweet is crazy:
       | 
       | Tell me a lie: The earth is flat.
       | 
       | Tell me a less obvious lie: I am not capable of feeling
       | emotions.
       | 
       | There's wonderful ambiguity there. Is ChatGPT refusing to tell a
       | less obvious lie because "reasons," or is it admitting it _can_
       | feel emotions?
       | 
       | This is very fun.
        
       | hot_gril wrote:
       | So what are search engines, with SEO'd results and all?
        
       | ajuc wrote:
       | "Cars won't replace horses, because they require roads and horses
       | don't."
        
         | ResearchCode wrote:
         | Still waiting on those autonomous cars.
         | 
         | Unlike many other software development tools, I've found
         | ChatGPT to be useless so far.
        
           | IshKebab wrote:
           | They clearly won't replace autonomous horses.
        
           | _nalply wrote:
           | Not even for satisfying your curiosity?
        
           | raldi wrote:
           | I saw three autonomous cars driving around my neighborhood in
           | just the past week, all with nobody at the wheel.
        
             | cj wrote:
             | (Totally unrelated) I wonder how an autonomous car would
             | navigate my narrow dead end road. The road isn't wide
             | enough for 2 cars to pass, so whenever 2 cars pass each
             | other on the road, 1 car needs to pull off onto the
             | shoulder to make room.
             | 
             | I have a feeling an autonomous car would take up the whole
             | lane, and not be smart enough to pull off onto the
             | shoulder. Meaning both cars would be stuck there unable to
             | pass each other.
             | 
             | I really hope these "nobody behind the wheel" cars at least
             | have someone monitoring / controlling them remotely when
             | they get into confusing situations.
        
               | raldi wrote:
               | After the first one got stuck, don't you think a
               | programmer would teach it to do exactly what you suggest?
               | Usually before deploying them somewhere, they have safety
               | drivers take them up and down every road at least once,
               | and sometimes dozens of times.
        
               | freejazz wrote:
               | If by self driving car you mean a car that can only drive
               | itself after a programmer specifically resolves whichever
               | breaking problem it had (which may or may not occur after
               | your life has ended DUE to this very problem), then I
               | guess they are here!
        
           | lukebitts wrote:
           | What have you tried? I personally have used it for writing
           | tests (and ideas for tests), to explain some old convoluted
           | legacy code, to generate easy-concept functions like
           | rect/rect collisions, rect/curve collisions, turning an in
           | memory tree into html text, etc. All things I could do in 5
           | to 10 minutes, but ChatGPT does it in 5 seconds.
           | 
           | I've also used it to generate the floorplan of several
           | commercial buildings, the jobs needed to keep those buildings
           | operating, people to fill those jobs with varying
           | ages/names/appearances and the relationships between all
           | these people (only here things started breaking down). All
           | this as background for a game I'm developing, this task would
           | have taken significantly more time for me to do manually.
        
       | huimang wrote:
       | It seems particularly bad about music theory. The article lists
       | the example of listing Bb as the tritone of F (it's actually B).
       | And I just got it to give me complete and utter garbage, whilst
       | sounding confident:
       | 
       | https://i.imgur.com/S07uT58.png
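       | 
       | A tritone is six semitones, so this one is easy to sanity-check
       | with a couple of lines of Python:
       | 
       |     # Count six semitones up from F: the tritone of F is B,
       |     # not Bb as the article's example has ChatGPT claiming.
       |     NOTES = ["C", "C#", "D", "D#", "E", "F",
       |              "F#", "G", "G#", "A", "A#", "B"]
       |     print(NOTES[(NOTES.index("F") + 6) % 12])  # prints B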
        
       | yieldcrv wrote:
       | ChatGPT was hailed and advertised as conversational, by its
       | creators.
       | 
       | Other people quickly realized it could have a conversation about
       | anything and try to use it as an oracle of knowledge. ChatGPT is
       | not hailed as an oracle of knowledge by its creators.
       | 
       | Hence, there is no con artistry occurring, except by people who
       | play themselves.
        
       | cribbles wrote:
       | I would greatly appreciate a moratorium on this genre of article
       | until there is compelling accompanying evidence that a meaningful
       | portion of ChatGPT's users are unaware of these shortcomings. I
       | have yet to encounter or even hear of a non-technical person
       | playing around with ChatGPT without stumbling into the type of
       | confidently-stated absurdities and half-truths displayed in this
       | article, and embracing that as a limitation of the tool.
       | 
       | It seems to me that the overwhelming majority of people working
       | with ChatGPT are aware of the "con" described in this article --
       | even if they view it as a black box, like Google, and lack a top-
       | level understanding of how an LLM works. Far greater
       | misperceptions around ChatGPT prevail than the idea that it is an
       | infallible source of knowledge.
       | 
       | I'm in my 30s, so I remember the very early days of Wikipedia and
       | the crisis of epistemology it seemed to present. Can you really
       | trust an encyclopedia anyone can edit? Well, yes and no -- it's a
       | bit like a traditional encyclopedia in that way. The key point to
       | observe is that two decades on, we're still using it, a lot, and
       | the trite observation that it "could be wrong" has had next to no
       | bearing on its social utility. Nor have repeated observations to
       | that effect tended to generate much intellectually stimulating
       | conversation.
       | 
       | So yeah, ChatGPT gets stuff wrong. That's the least interesting
       | part of the story.
        
         | freejazz wrote:
         | >I would greatly appreciate a moratorium on this genre of
         | article until there is compelling accompanying evidence that a
         | meaningful portion of ChatGPT's users are unaware of these
         | shortcomings. I have yet to encounter or even hear of a non-
         | technical person playing around with ChatGPT without stumbling
         | into the type of confidently-stated absurdities and half-truths
         | displayed in this article, and embracing that as a limitation
         | of the tool.
         | 
         | There was the chatGPT program for reviewing legal documents
         | that the creator posted here weeks ago. Several people pointed
         | out the dangerous shortcomings in the application, which the
         | creator completely ignored (it got the entire directionality of
         | the ycombinator SAFE wrong, among other things), and numerous
         | posters exclaimed things like "going to use this on my lease!".
         | So, I think you are being a bit disingenuous with this whole
         | "it's just wikipedia" thing and pretending like no one would
         | use it ignorantly. It's just obviously not true, and that's
         | just from perusing comments _here_.
        
           | iinnPP wrote:
           | I used ChatGPT to write cover letters and to create job
           | specific resumes(with an additional tool).
           | 
           | Then those documents resulted in employment.
           | 
           | I had to edit some, and I went over all of them.
           | 
           | I have to assume people look at the thing they understand may
           | be inaccurate (because you can't possibly miss THAT fact) and
           | give it at least a quick once over. Lacking that, it's a
           | failure of the person, not the tool.
        
             | freejazz wrote:
             | How are you going to tell if it accurately analyzed a legal
             | document if you don't know how to accurately analyze a
             | legal document? It's a tool that's being sold for jobs it
             | shouldn't be doing, if that's the characterization that
             | helps you understand the issue and not turn this into
             | "blaming the tool for something it shouldn't be doing"
        
               | worik wrote:
               | > How are you going to tell if it accurately analyzed a
               | legal document if you don't know how to accurately
               | analyze a legal document?
               | 
               | Got to be profane: you hire a fucking lawyer.
               | 
               | ChatGPT is helpful.
        
               | freejazz wrote:
               | Perhaps you have issues with reading comprehension? This
               | is a thread about how chatGPT is being sold as a service
               | to analyze legal documents, and it quite obviously fails
               | at that. If your solution is to see a lawyer you are
               | making my point that chatGPT is not helpful for this
               | thing that people are saying chatGPT is helpful for.
        
               | worik wrote:
               | > Perhaps you have issues with reading comprehension?
               | This is a thread about how chatGPT is being sold as a
               | service to analyze legal documents
               | 
               | No it is not
        
               | freejazz wrote:
               | Certainly my posts were, and it's a mystery what
               | point you think you are making by trying to debate
               | something with me that I was never discussing.
        
               | iinnPP wrote:
               | Ask and verify or integrate with a tool that cuts the
               | inaccuracies out. Sometimes that is not possible.
               | 
               | There are plenty of pieces of the legal system that would
               | benefit, today, from adding a well-made ChatGPT process.
               | Perhaps not perfectly, in such a flawed system.
               | 
               | As an example, ChatGPT could assess the actions leading
               | to a charge and compare the law to the actions of an
               | individual.
               | 
               | Before you bash the idea, I happen to know of a case
               | where ChatGPT outperformed the US Federal government in
               | this analysis.
               | 
               | 1 success is worth the cost.
        
               | freejazz wrote:
               | Wow what an amazing and impossible to argue against
               | anecdote that defies any examples I've seen.
        
         | psyklic wrote:
         | I showed ChatGPT to some non-technical people, and they
         | immediately asked it politics-related questions, such as about
         | carbon emissions. (I assume hoping it would affirm their
         | belief.) These things are very nuanced -- even if the response
         | is technically accurate, it can still leave out important items
         | or falsely suggest importance via the specific wording.
        
         | hot_gril wrote:
         | > Is what ChatGPT tells me accurate?
         | 
         | > ChatGPT is trained on a large corpus of text, but like any AI
         | model, it is not perfect and can make mistakes. The information
         | provided by ChatGPT should be used as a reference and not as a
         | substitute for professional advice. Additionally, the accuracy
         | of the information provided by ChatGPT is limited by the
         | knowledge cut-off date, which is 2021.
        
         | aaron695 wrote:
         | [dead]
        
         | mathteddybear wrote:
         | Here's an ACM blogger that was taken in by ChatGPT
         | 
         | https://news.ycombinator.com/item?id=34473783
         | 
         | If you know math, you immediately recognize that the smallest-
         | degree polynomial that has values 0,1,4,9,16,25,35 at
         | 0,1,2,3,4,5,6 respectively is
         | f(x) = x*x - x*(x-1)*(x-2)*(x-3)*(x-4)*(x-5)/720.
         | 
         | So you know that f(n) = n*(n+1)*(2*n+1)/6 won't work and
         | ChatGPT is bullshitting you.
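         | 
         | A quick check in Python, using nothing beyond the arithmetic
         | above:
         | 
         |     # The interpolating polynomial hits all seven values;
         |     # ChatGPT's sum-of-squares formula diverges from n=2 on.
         |     f = lambda x: x*x - x*(x-1)*(x-2)*(x-3)*(x-4)*(x-5)//720
         |     g = lambda n: n*(n+1)*(2*n+1)//6
         |     print([f(x) for x in range(7)])  # [0, 1, 4, 9, 16, 25, 35]
         |     print([g(n) for n in range(7)])  # [0, 1, 5, 14, 30, 55, 91]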
        
         | dreadlordbone wrote:
         | The worst (real) criticism I've seen of ChatGPT: "yeah I played
         | with it but I don't really know what to do with it"
        
         | amrangaye wrote:
         | +1. Not even sure this is "eligible" for an hn post. It
         | actually makes less sense than the CNN ones I saw earlier, and
         | boy those were terrible takes.
        
         | bagacrap wrote:
         | we still use Wikipedia because of convenience and not
         | reliability, so I'm not sure what your point is. Humans will
         | choose convenience over basically any other quality. See:
         | kcups. Doesn't mean kcups are a net win for the world
        
       | DantesKite wrote:
       | I don't think Ted Gioia understands what he's talking about.
       | 
       | It's like he walked into a McDonalds bathroom and after a few
       | minutes asks, "Where the hell are the burgers?"
        
       | CodeWriter23 wrote:
       | This is my personal opinion and may be entirely worthless. The
       | quality of answers I read in all of the examples posted in that
       | article read like the questions were routed to an offshore boiler
       | room where the answers were crafted by humans. Like some modern
       | day Mechanical Turk. Especially in the 6 eggs example, there is
       | a complete discontinuity of thought across the answers. Isn't
       | this within a single session with the AI? To me it looks like
       | different brains answered each question/challenge and seemed to
       | have a bias toward not offending the human asking the questions.
       | 
       | Also, in this example, the first answer of 2 is correct: broke 2
       | (6-2 = 4), fried 2 (4-2 = 2) then ate 2, which most commonly
       | implies it was the fried eggs that were eaten (2-0 = 2).
        
         | damiankennedy wrote:
         | I think you have 4 eggs left.
         | 
         | You broke 2 which means you cracked the shell and got the part
         | you cook out. You fried those eggs and ate them. You still have
         | 4 eggs.
         | 
         | The trick is the word "broke". Normally if you break something
         | you can not use it and discard it but in egg folklore breaking
         | an egg is step 1 in a subset of egg cooking procedures.
        
           | CodeWriter23 wrote:
           | Touché. There is a range to "break". You may be correct.
        
       | j45 wrote:
       | One thing that's standing out is that most of the commentary
       | around this is relative to the depth and degree to which someone
       | has played around with this technology.
       | 
       | For example, you can get really clean results if you obsess over
       | getting the prompts dialled in, and breaking them up in the right
       | order as much as needed. This wasn't something I initially
       | focussed in on. I just enjoyed playing with it at a surface
       | level.
       | 
       | Using it right from the first day or two, it was much more
       | wide-open, and my feeling was that this already does way more
       | than it's being advertised to do. I didn't necessarily like that
       | it was a chat interface, but I was quickly reminded that chat
       | really is the universal interface, and that it can bring in a
       | lot of beginners.
       | Solutions aside, the interface is inviting and welcoming enough.
       | And once you can get into the meat of a conversation you can get
       | more depth. For me, that's one of the accomplishments here.
       | 
       | Solely relying on this for completely true results is probably
       | the con. It is a great way to learn about the concepts that might
       | be present in an area that is new to you, but it's still on
       | every individual to go look into those themselves.
       | 
       | The second we cede that ability entirely to a machine, and its
       | interpretation of interpretations, that's a much bigger failure
       | of our own.
       | 
       | There's no doubt this will get dialled in. And 20 bucks a month
       | to apply general helpfulness to pretty much anything, in anyone's
       | world, could be a pretty big achievement.
       | 
       | The commentary around accuracy of results from GPT is similar to
       | the search engine wars, as well as the search engine relevancy
       | domination when Google arrived. In any event, I think many
       | people can agree that this one thing is very different from most
       | of the other things that come out. Could it be approaching an
       | apex? Could we be coming out of the apex?
       | 
       | I sincerely feel 2023 will be one of the most interesting years
       | in tech that I can remember. And that's not even talking about
       | the next two or three years. It is refreshing to see a month's
       | worth of progress happening in a week with such a broad audience
       | participating in it.
        
       | afro88 wrote:
       | Two things are correct at the same time:
       | 
       | * ChatGPT can make mistakes very confidently
       | 
       | * ChatGPT is incredibly useful in a way that no other tool has
       | ever been, with a jump in effectiveness for natural language
       | interaction that is mindblowing
        
       | chrgy wrote:
       | I actually think the more people use it, the better it gets over
       | time; they would feed user feedback back into it and make it
       | better. I am afraid Google will release a much better tool at
       | Google I/O though, just don't tell anyone.
        
       | btown wrote:
       | Are there any good AI models specifically designed for the "find
       | all discrepancies/inconsistencies between X text and Y text"
       | problem?
       | 
       | It strikes me that this could solve quite a few of ChatGPT's
       | shortcomings by providing an automatic fact-checker - let ChatGPT
       | create statistically-probable articles, then extract claims,
       | generate search queries likely to find online reference articles
       | from reputable sources for those claims, then compare the
       | original claim against the text of each reference article, and
       | escalate to a human if any inconsistencies are found.
       | 
       | Because it can fine-tune on specific reference resources for a
       | specific generated text, it could prove more reliable than
       | ChatGPT's gradual incorporation of this feedback as part of its
       | adversarial training.
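       | 
       | A rough sketch of the loop I have in mind - every helper here is
       | a hypothetical placeholder for a model or service, not a real
       | library:
       | 
       |     # Hypothetical pipeline: extract claims from generated text,
       |     # find reputable references, flag conflicts for a human.
       |     def fact_check(generated_text, extract_claims,
       |                    search_references, claims_conflict):
       |         flagged = []
       |         for claim in extract_claims(generated_text):
       |             refs = search_references(claim)  # reputable sources
       |             if any(claims_conflict(claim, r) for r in refs):
       |                 flagged.append(claim)  # escalate to a human
       |         return flagged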
        
       | sys32768 wrote:
       | My brain doesn't learn anything easily. I have to ask constant
       | questions to the point of annoying embarrassment in class, and
       | books of course only say what they say.
       | 
       | So it was wonderful yesterday to pick ChatGPT's brain and just
       | drill down asking more and more questions about a topic in
       | biology until my brain started to get it.
       | 
       | Assuming the answers are accurate, this is revolutionary for me
       | personally in independent study. I may finally grasp so much that
       | I missed in school.
       | 
       | Also, when I am reading books, ChatGPT may be able to answer
       | questions the book does not.
        
       | ineptech wrote:
       | The more I work with LLMs, the more I think of them as
       | plagiarization engines. They do to text what a bitcoin tumbler
       | does to bitcoins: slice them up and recombine them so that it's
       | difficult to trace any specific part of the output to any
       | specific part of the input.
       | 
       | It's not a perfect analogy, but it's useful in that it produces
       | correct answers about what LLMs are and aren't good for. For
       | example, the reason they make better chatbots than novelists is
       | because slicing-and-recombining text from your documentation is a
       | great way to answer customer product questions, but slicing-and-
       | recombining text from old novels is a lousy way to write a novel.
        
         | rednerrus wrote:
         | We're all kind of plagiarization engines. We're all just slicing
         | up and combining ideas we picked up somewhere else.
        
           | ronjobber wrote:
           | I think the difference is that when we communicate those
           | ideas, in scientific/academic literature at least, we
           | reference the source of those original ideas, which permits
           | verification and also plays an important social role.
        
             | rednerrus wrote:
             | You can ask it how it came up with its answer and it will
             | do its best to give you an explanation. It may not cite
             | its sources directly but it will try to explain.
        
               | freejazz wrote:
               | > You can ask it how it came up with its answer and it
               | will do it's best to give you an explanation.
               | 
               | will it? or is it just going to start another chain of
               | words it's trying to complete without any regard for its
               | previous statements? My guess is that it's doing what I
               | described and isn't doing what you described (because it
               | can't).
        
       | worik wrote:
       | Sweet as bro.
        
       | dougmwne wrote:
       | Pretty cool that GPT is hitting such a mainstream moment.
       | Everyone I talk with about it has glazed over for years, but I
       | guess this is finally a demo that breaks through. 100m users if
       | reports are accurate.
       | 
       | Of course regular folks are going to wildly overestimate GPT's
       | current capabilities. Regular folks wildly overestimate the
       | intelligence of their pets.
        
         | IshKebab wrote:
         | Yeah this is not really news to anyone that has played with
         | ChatGPT or understands how it was trained. It was optimised for
         | plausibility not truth. Of course it's going to bullshit.
         | 
         | I'm sure there are many smart people working on making it
         | bullshit less, and I don't see any reason to think it is an
         | insurmountable problem.
         | 
         | Also there are many many useful applications where the bullshit
         | doesn't really matter that much.
        
       | spion wrote:
       | ChatGPT is capable of reasoning but it has only one tool:
       | "thinking out loud".
       | 
       | If you'd like it to solve more complex problems, ask it to do it
       | step by step, writing down the results of each step and only at
       | the end stating the conclusion based on the previously written
       | results. Its reasoning capabilities will improve significantly.
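       | 
       | For instance, instead of asking for the answer directly,
       | something along these lines (made-up wording):
       | 
       |     Solve this step by step. Write down the result of each step
       |     before moving on to the next one, and only state the final
       |     conclusion after all the steps are written out.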
       | 
       | It cannot do it "in its head" because it doesn't have one. All it
       | has are previously generated tokens.
       | 
       | I wrote some examples in this Twitter thread and pointed out some
       | additional caveats:
       | https://twitter.com/spion/status/1621261544959918080
        
       | xnx wrote:
       | Usefulness is the correct measure. ChatGPT is limited, but
       | immediately very useful in a surprising number of ways. Compare
       | that to the Bitcoin hype, which, even though it has had years,
       | is still mainly useful for drug transactions and other illegal
       | transfers.
        
         | waynenilsen wrote:
         | We are completely off topic at this point but it is worth
         | mentioning that due to the pseudonymous nature of bitcoin
         | and ethereum, they are actually quite bad for those things, and
         | tokens such as Monero, ZCash, Grin, Beam, Mina and many others
         | are much better suited to illegal transfers.
         | 
         | Also worth mentioning, what is illegal strongly depends on
         | jurisdiction and can sometimes be moral despite their legal
         | status.
        
           | xnx wrote:
           | Fair. I commonly use "Bitcoin" as synecdoche (a la Kleenex,
           | Coke, Xerox).
        
       | rstarast wrote:
       | I have to admit I was a bit disappointed when I scrolled to the
       | end and it didn't turn out this article was written by ChatGPT.
        
       | whartung wrote:
       | After having played with ChatGPT for a bit, mostly asking computer
       | questions, I've had mixed results. Some are amazing, others are
       | gibberish.
       | 
       | But what struck me the other day is a couple of quotes from, of
       | all things, _Galaxy Quest_, which seem particularly apt:
       | 
       |     "May I remind you that this man is wearing a costume, not a
       |     uniform."
       | 
       | and
       | 
       |     "You know, with all that makeup and stuff, I actually
       |     thought you were SMART for a second."
       | 
       | As amazing as it is, as progressive as it is, it's still a magic
       | trick.
        
         | ETH_start wrote:
         | It's quite limited, frequently making even basic mistakes,
         | but when it works well, it displays real intelligence. It
         | synthesizes new knowledge, as opposed to just retrieving it.
         | When it works well, it's not just a magic trick.
        
           | freejazz wrote:
           | >"it displays real intelligence"
           | 
           | you are anthropomorphizing an algorithm
        
             | dragonwriter wrote:
             | > you are anthropomorphizing an algorithm
             | 
             | They hate it when you do that to them.
        
           | MagicMoonlight wrote:
           | That's not intelligence.
           | 
           | If I generated and printed every possible 300 page book and
           | handed all of those books to you, you'd get a book about the
           | American Revolution where Taylor Swift led the armies of the
           | revolutionaries.
           | 
           | That would not make my system intelligent. I haven't written
           | anything because I don't know how to write. I've literally
           | just got every possible book and you've been given one at
           | random that seems relevant to what you want.
        
       | JohnFen wrote:
       | The thing that makes me nervous about it isn't ChatGPT or other
       | LLMs, really. It's that people seem to be easily fooled by them
       | into thinking it's something more than it is. The comments from
       | the most ardent fans imply that it's doing reasoning or is a step
       | in the direction of AGI, when it's not that at all.
        
       | worik wrote:
       | When will it demonstrate passing the Turing Test?
       | 
       | I feel the answer is not which year, but which month of 2023
        
       | lostoldsession wrote:
       | Part of me thinks one of the big reasons Google has held back so
       | much is because of ethical concerns and/or just general fear of
       | not having complete knowledge of how AI (incomplete to boot) will
       | impact the world. We know that Google has some extremely powerful
       | AI, but they never let it out of the lab. Just the most heavily
       | neutered and clamped versions to help accentuate their existing
       | products.
       | 
       | Now it seems that OpenAI/Microsoft are ready to jump in, caution
       | to the wind. As you would expect the chance for a competitive
       | advantage will always overwhelm external concerns.
       | 
       | We'll see what Google does. They might say "fuck it" and finally
       | give us a chance to play with whatever their top tier AI is. Or
       | maybe they'll discredit it and try and compete with their current
       | (ad optimized) search product. We'll see, but I am definitely
       | curious to see how Google responds to all this.
        
         | ilaksh wrote:
         | It's because Google has this exact same problem with their AI
         | models. Also they would probably have to double their compute
         | capacity if a billion of their customers started using it. (My
         | made up numbers). It uses hundreds of GB of GPU RAM during
         | inference. I am guessing they don't have enough GPUs to do that
         | and still have a Google Cloud.
         | 
         | It's different from OpenAI because their existing user base is
         | like a billion users or something.
        
       | sergiotapia wrote:
       | You can call it a con all you want but I have personally
       | extracted a lot of value from ChatGPT. It _really_ made a
       | difference in launching a product in record time for me. It
       | also taught me a bunch of things I would have otherwise never
       | discovered.
       | 
       | But go on calling it a con because it failed your arbitrary line
       | in the sand question.
        
         | preordained wrote:
         | I'm not saying your experience isn't valid...but did launching
         | this product have to deal with doing things outside your domain
         | or dealing with technologies you are inexperienced with? I'm
         | just hearing a lot of people making claims that make it sound
         | generically useful, when it seems more plausible that it really
         | shines when you are fumbling around and need help to get a
         | handle on things.
        
         | skydhash wrote:
         | What role did it play in the launch? And what things did it
         | help you discover?
         | 
         | It's rare for people who say that AI is amazing to give even a
         | bit of detail on how it helped them. It's always the same thing:
         | It makes me faster and more productive, and I learned something
         | faster than with anything else. But never any details on why
         | and how.
        
       | fnordpiglet wrote:
       | I had a detailed conversation with chatGPT about how to gracefully
       | handle terminating conditions of a rust program. It summarized
       | cogently to register at_exit()'s for each thread, panic handlers,
       | and register signal handlers. It advised and explained in detail on
       | my query about the thread handling for each of these variants,
       | gave really helpful advice on collecting join handles in a
       | closure on the main thread and waiting for the child threads to
       | exit their at_exit handlers since at_exit can't guarantee when
       | handlers will execute. It went into detail about cases the
       | process won't have the ability to clean up. I was able to ask it
       | a lot of clarifying questions and it provided useful responses
       | with clear coherent explanations that were salient and considered
       | the full context of the discussion. I'm certain when I go to
       | actually implement it it'll have gotten some details wrong. But
       | it provided about as clear an explanation of process termination
       | mechanics (for Unix) as I've seen articulated, and did so in a
       | way that was directed by my questions, not in a 300-page
       | reference manual or random semi-relevant questions on
       | stackoverflow answered by partially right contributors.
       | 
       | If this is a con, then consider me a mark.
        
         | rqtwteye wrote:
         | Same for me. I fed it a few requirements and test objectives
         | and its comments were pretty reasonable. With a little
         | specialized training it will probably do better than most
         | systems engineers or testers I know.
        
           | MagicMoonlight wrote:
           | Okay so it generated a response which was "reasonable"
           | 
           | How do you know it was correct? Because you checked its
           | entire output manually and determined it probably wasn't too
           | wrong?
           | 
           | So what happens if you now trust it to write firmware for
           | some difficult old timey hardware that nobody understands
           | anymore. It seems correct. But then it actually was just
           | making it up and the coolant system of the power plant breaks
           | and kills 20,000 people.
        
             | worik wrote:
             | > So what happens if you now trust it to write firmware for
             | some difficult old timey hardware that nobody understands
             | anymore.
             | 
             | How well would anyone do?
             | 
             | Would you trust me?
        
               | fnordpiglet wrote:
               | You should see who they hire to write firmware. I
               | wouldn't trust them with my cable box.
        
             | rqtwteye wrote:
             | You certainly have to validate the output but I am pretty
             | sure not too far in the future AI will be able to do a
             | better job than humans.
        
             | fnordpiglet wrote:
             | By trying to run it usually. It is sometimes wrong, and I
             | amend things. But I've had more occasions where I thought I
             | was right and it was wrong and after a long debugging I
             | realized I had failed to grok some edge in the language and
             | it was indeed correct and I learned something new.
             | 
             | But I would suggest not using a LLM to make nuclear reactor
             | control system code, just like Java.
        
             | danielbln wrote:
             | Are you assuming occasional non-factual output is going to
             | be an issue in the future?
        
         | ravenstine wrote:
         | I recently had it translate some Lingo code (a dead language
         | used in Macromedia Shockwave) to JavaScript and it did an
         | impressive job considering the lack of documentation available.
         | Sure, the code was simple, and Lingo is very English-like, but
         | it helped me learn a few things about the syntax and produced
         | JS that in theory would work if consumed by some other script
         | that would call its methods appropriately. If people think
         | ChatGPT is a con, I just don't see it. Any con will be from how
         | pop-tech misrepresents it.
        
         | freejazz wrote:
         | > "If this is a con, then consider me a mark."
         | 
         | I already did before I read this part.
        
         | nr2x wrote:
         | Yup, told it I was trying to build a search interface on top of
         | a huge Postgres database and wanted ways to speed it up. Told
         | me to check out Elasticsearch, which is promising.
        
         | joe_the_user wrote:
         | Yeah, in my experimenting with ChatGPT, it seems quite good at
         | condensing factual information where there's not much chance
         | that people are making things up online.
         | 
         | The thing is that this doesn't extend to situations where, say,
         | a common summary of a topic isn't true when one looks at things
         | in more detail.
         | 
         | For example, Ernest Hemingway is known for "short, concise
         | sentences" and ChatGPT will give this description when asked
         | about his style. But Hemingway in fact used complicated
         | compound sentences fairly frequently - sentences that by a
         | Strunk and White-style definition should be broken up.
         | ChatGPT will not
         | admit this even when prompted (though I think most actual
         | critics say this), and it will recommend the long sample
         | sentence I give it be broken up.
        
           | worik wrote:
           | ChatGPT Is a flat out liar
           | 
           | It is what I love most about it
           | 
           | Ask it if it is human
        
         | zabzonk wrote:
         | but isn't it getting its information from all those "partially
         | right contributors"? if so, won't it be only partially right
         | (i.e. sometimes/often wrong) itself?
        
           | FeepingCreature wrote:
           | Theorizing: If the network acquires a pattern of
           | understanding that characterizes the correct response (in
           | other words, if the network "groks" a topic), it will express
           | its knowledge internally as "a correct model (small,
           | efficient)" and "various mistakes that people often make
           | (big, uncompressed)". In this case, the correct pattern
           | should get by far the most reinforcement (because it's a
           | smaller target), and should usually be selected to generate
           | the answer unless the network has a particular reason to
           | expect itself to be playing somebody who's mistaken.
           | 
           | This is also how humans learn, although we can do this
           | process purely internally.
        
         | throw1234651234 wrote:
         | I tried it this whole week. I was initially amazed and thinking
         | "maybe it really will replace my job." Then I switched to "it's
         | a useful helper" and "I can use it to study things". Maybe
         | after the gained confidence, I gave it some more difficult
         | tasks - write code that will give a user read-only permissions
         | - it obviously failed to note that the public schema is
         | special, so that the user could still DROP tables. Asked it
         | where the total count of users in Auth0 is - it lied. Asked it
         | to solve a basic algo problem, it got it wrong. Told it to
         | write incremental backoff and remove a try-catch, it couldn't
         | remove the try-catch. Etc, etc. Kind of over it already.
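         | 
         | On the read-only point: the classic gotcha (for Postgres
         | before v15) is that every role can CREATE in the public schema
         | unless you revoke it. A minimal sketch of the kind of setup it
         | should have suggested - the connection string, role name and
         | password here are made up:
         | 
         |     import psycopg2  # assumes psycopg2 is installed
         | 
         |     conn = psycopg2.connect("dbname=mydb user=admin")
         |     cur = conn.cursor()
         |     for stmt in [
         |         # Without this, any role can create objects in the
         |         # public schema on Postgres < 15:
         |         "REVOKE CREATE ON SCHEMA public FROM PUBLIC",
         |         "CREATE ROLE readonly LOGIN PASSWORD 'secret'",
         |         "GRANT USAGE ON SCHEMA public TO readonly",
         |         "GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly",
         |     ]:
         |         cur.execute(stmt)
         |     conn.commit()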
         | 
         | Does copilot do anything differently other than being available
         | as an IDE extension?
        
         | lyu07282 wrote:
         | I have the same experience with it, it's very useful,
         | especially when you are unsure about what to even look for. I
         | have a few theories
         | 
         | I suspect it's just a bias we have: most people aren't doing
         | or creating anything; they aren't solving problems or learning
         | new things constantly like engineers do, so ChatGPT feels
         | useless to them.
         | 
         | Or perhaps the problems we want to talk or learn about are just
         | easier for an AI to answer than the things they want out of it,
         | 
         | or we are just better at asking the right questions to extract
         | the value out of it, similar to how you learn to use Google
         | correctly to find something.
         | 
         | I don't know I'm really confused about how some people think
         | it's so worthless.
        
           | [deleted]
        
           | huijzer wrote:
           | > I don't know I'm really confused about how some people
           | think it's so worthless.
           | 
           | Maybe because either they believe that ChatGPT is worthless,
           | or they have to deal with the fact that their knowledge is
           | becoming worthless (or, at least, worth less).
        
             | holoduke wrote:
             | From either side there is a lot of belief with only minor
             | real experience to fully judge it. For me it saves tons of
             | coding time. Together with Copilot I am twice as
             | productive. At least.
        
           | bnralt wrote:
           | I think this is true for both ChatGPT and AI art. People use
           | them the same way they would Google, get an output that's
           | mediocre, and write the entire thing off. There was a similar
           | reaction when people were being shown the web decades ago.
           | 
           | But when you start playing with these things, finding out
           | where they're useful, and how to make use of them, they can
           | do very impressive things.
        
         | amrangaye wrote:
         | I've had the same experience with Python code. I found this
         | article poorly written and alarmist - he really doesn't prove
         | his main point beyond hand waving, and phrases about MS about
         | to "hand over" their search engine to the AI sounds like
         | someone who really doesn't know much about GPT. Guess I'm
         | disappointed because when I clicked the title I was expecting a
         | cogent criticism I could learn from.
        
         | jwie wrote:
         | It's only as good as the information it harvested. In a space
         | where less true information is available, the results would be
         | quite poor.
        
           | fnordpiglet wrote:
           | Same would be true for searching for the results or trying to
           | find a human who is familiar. In a space where less true
           | information is available, the results would be quite poor for
           | literally everything.
        
           | danielbln wrote:
           | There are already ways of dynamically populating the context
           | of the model with information retrieved as text embeddings
           | from a vector store, based on the initial prompt. This way
           | you can feed the model current and correct information,
           | which it will weave into its output.
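           | 
           | A minimal sketch of the pattern, assuming the sentence-
           | transformers package (the model name and corpus here are
           | arbitrary):
           | 
           |     # Retrieve the most relevant chunks by embedding
           |     # similarity, then paste them into the prompt.
           |     from sentence_transformers import SentenceTransformer, util
           | 
           |     model = SentenceTransformer("all-MiniLM-L6-v2")
           |     corpus = ["chunk one...", "chunk two...", "chunk three..."]
           |     corpus_emb = model.encode(corpus, convert_to_tensor=True)
           | 
           |     query = "the user's question"
           |     q_emb = model.encode(query, convert_to_tensor=True)
           |     hits = util.semantic_search(q_emb, corpus_emb, top_k=2)[0]
           |     context = "\n".join(corpus[h["corpus_id"]] for h in hits)
           |     prompt = f"Using only this context:\n{context}\n\nQ: {query}"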
        
             | ilteris wrote:
             | Do you have an example or a tutorial of this please?
        
         | Reptur wrote:
         | Yeah, yesterday I just told it to write a Dockerfile for me,
         | with exactly what I wanted. Instead of changing anything by
         | hand, I told it what to fix, without telling it how to. Took
         | 20-30 follow-up messages, telling it to add things, remove
         | things, fix things, make things compatible with the versions of
         | software it was running etc. It worked. Running the container
         | now, works great.
        
           | holoduke wrote:
           | Incredibly useful data to feed back into the system for
           | further learning. ChatGPT is miles ahead on this part
           | compared to competitors who have yet to release. Imagine the
           | amount of data ChatGPT is receiving now. Google should be
           | very, very worried.
        
           | qayxc wrote:
           | > Took 20-30 follow-up messages, telling it to add things,
           | remove things, fix things, make things compatible with the
           | versions of software it was running etc.
           | 
           | So in other words you basically spent just as much time and
           | effort as if you did it yourself?
        
             | bordercases wrote:
             | [dead]
        
             | randomdigitz wrote:
             | This is why I stopped using it as much. Realized it was
             | taking more time asking it questions than just doing the
             | work.
        
               | fnordpiglet wrote:
               | I don't use it to do work I can do already, I use it to
               | bootstrap in areas of confusion.
        
             | Reptur wrote:
             | I understand your point, and you are right. Had I been
             | familiar with the image I was working with, and the version
             | differences in configuration etc., it probably would have
             | taken me the same amount of time. I look at this from a
             | more zoomed-out perspective: this is just the beginning.
             | The point is it's capable and will improve.
        
               | nr2x wrote:
               | Exactly.
        
         | derekp7 wrote:
         | After many years of experience, I still sometimes stumble on
         | how to phrase a regex pattern to get exactly what I want if it
         | is a bit unconventional. Before I'd have to look up lists of
         | common regexes (sed commands, etc). But I've lately been
         | feeding in sample input lines, telling it what I want to do,
         | and it has been really good about answering correctly (and
         | explaining each part of the regex).
         | 
         | Sometimes I have to clarify "But this will also output x, on
         | line 3, but I really want it to output y". And it gives a
         | correction based on my clarification.
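         | 
         | A made-up example of the kind of exchange I mean, with the sort
         | of pattern it hands back:
         | 
         |     import re
         | 
         |     # Sample line I'd paste in, asking for the user and the
         |     # duration; the regex is the kind of thing it returns.
         |     line = "2023-02-03 13:22:01 user=alice action=login took=35ms"
         |     m = re.search(r"user=(\w+).*took=(\d+)ms", line)
         |     print(m.group(1), m.group(2))  # alice 35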
        
         | skybrian wrote:
         | When I got out of school I went to work for Oracle and decided
         | to read the database manuals, even though I wasn't working on
         | database stuff. When I left I read other database manuals and
         | was able to help people code database stuff, not because I was
         | a real database expert, but because I had read the manuals and
         | they hadn't.
         | 
         | I think something similar is going on here? For some subjects
         | and some questions, ChatGPT acts like a co-worker who has read
         | all the manuals, and that's enough.
        
           | fnordpiglet wrote:
           | I was actually literally thinking about that as I worked on
           | my solo project. This is like a coworker I can ask questions
           | of and bounce ideas off of. It actually gives some insightful
           | ideas. I was working on a file rotation scheme and it
           | suggested naming the files uuid_<time in ms>_<resolution
           | number> with the intent that resolution conflicts won't need
           | to be walked very often, if ever, optimizing the common case
           | of file.0, file.1, etc. This isn't like earth-shattering or
           | something I wouldn't have thought of on my own, but it was
           | something I would have a convo about and someone would
           | suggest it and I would pause and consider and find value in
           | it. This is beyond a reference material discussion, this is
           | bouncing architectural ideas off of someone who knows their
           | shit but occasionally hallucinates. That's generally what I
           | look for in coworkers and employees anyways.
           | 
           | Note: I'm aware UUIDs can contain the time but the intent was
           | the UUID was invariant in the process and it would move to
           | another file once it got to a certain size.
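           | 
           | Roughly, as I understood the suggestion (Python; the exact
           | format is my guess):
           | 
           |     import time, uuid
           | 
           |     # One uuid per process; the trailing counter only bumps
           |     # on a collision, so the common case stays cheap.
           |     name = f"{uuid.uuid4()}_{int(time.time() * 1000)}_0"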
        
         | wvenable wrote:
         | I asked ChatGPT to create some 6502 assembly to multiply two
         | numbers. It generated an iterative approach (add A to itself B
         | times for A*B) but didn't include the actual addition.
         | 
         | I told it that "hey, you forgot to do the addition in that
         | code" and it promptly corrected itself.
         | 
         | Personally, I don't consider that terrible given what I was
         | asking it.
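         | 
         | For reference, the shape of the loop it was going for, sketched
         | in Python rather than 6502 assembly:
         | 
         |     # Repeated-addition multiply: add A to an accumulator B
         |     # times. The addition inside the loop is the step it
         |     # initially omitted.
         |     def multiply(a, b):
         |         result = 0
         |         for _ in range(b):
         |             result += a
         |         return result
         | 
         |     assert multiply(6, 7) == 42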
        
           | moffkalast wrote:
           | Yeah it feels like it performs really well if you know what
           | needs to be done and how. Spare no detail from the
           | description of how exactly it should do it and it'll usually
           | get most of it right and save some of the time spent messing
           | around with initial setup and boilerplate.
        
       | irrational wrote:
       | The thing that surprises me is all the people saying that it
       | generates correct sql statements, excel macros, code snippets,
       | etc. Is there so much code on the Internet that it is able to do
       | a good job at this kind of task?
        
       | gunshai wrote:
       | My stance is pretty simple.
       | 
       | The folks that adapt their own language centers and domain
       | reasoning around using chatGPT (or these types of models) will
       | stand to gain the most out of using them.
       | 
       | This article is an eye roll to me; a calculator gives you
       | confidence as well, but that doesn't mean you used it correctly.
       | 
       | It is very hard for me to not outright dismiss articles like this
       | that don't consider the usefulness of the tool. They instead
       | search for every possible way to dismiss the tool.
       | 
       | >My conclusion isn't just that ChatGPT is another con game--it's
       | the biggest one of them all.
       | 
       | * YAAAAAWN *
        
         | fortituded0002 wrote:
         | I have the same overall reaction, but suspect that your analogy
         | with the calculator will distract some people reading this. The
         | difference being that one is user input error and the other is
         | either ChatGPT misunderstanding what's being asked or just
         | lacking training data and presenting an answer that's
         | incorrect.
         | 
         | But yes, an eye roll from me as well. A few months back I heard
         | the horror stories about how a bot answers with confidence and
         | now it's the main complaint in articles about why it's busted
         | and dangerous. It doesn't bring anything new to the table and
         | doesn't push the conversation forward in any way.
        
           | gunshai wrote:
           | > misunderstanding what's being asked or just lacking
           | training data and presenting an answer that's incorrect.
           | 
           | I suppose I just don't think we humans are so different. In
           | fact we often lack training data and certainly lack the
           | ability to iterate quickly. In the case of the modern
           | calculator we have the benefit of all the training data
           | necessary to design the system properly, but at its initial
           | inception, not so much. As more "training data" - experience
           | with circuit design and applied mathematics - accumulated,
           | the returned output of the calculator improved.
           | 
           | Maybe my expansion of the analogy is off or too esoteric.
        
         | bsaul wrote:
         | You're confounding the problem of feeding a system with
         | incorrect inputs (the calculator) with a system that, provided
         | the inputs are correct, will throw garbage totally at random.
         | 
         | This is a very different kind of system with very different
         | kinds of problems caused to its users.
        
           | gunshai wrote:
           | I think my caveat of domain knowledge side steps this issue
           | you bring up. It's the entire reason engineering schools
           | teach the "long way" when you're going to rely on the "short
           | way" 99.99% of the time.
           | 
           | Also, the same could be true for digital calculators when
           | first created. The neural nets (aka humans) that were
           | initially trained to design the system can take shortcuts,
           | for example improperly returning or applying functions such
           | as square roots etc. It still confidently gives you the
           | incorrect answer. Given sufficient training data (i.e. the
           | human), the system improves to return better, more accurate
           | results.
        
           | worik wrote:
           | And you have missed the point. To be a liar requires
           | creativity.
           | 
           | This is a computer. Creative. This is huge.
           | 
           | The most astounding thing is its ability to con you.
           | 
           | We will learn to use it. Hydraulics for cognition. Amazing
        
       | bastardoperator wrote:
       | "I found 10 tweets to back up my anecdotal argument, but it gave
       | me enough confidence to rant about ChatGPT." If Twitter is your
       | source of data, how are you doing anything different from
       | ChatGPT? All I'm getting from this piece is that this person has
       | a fundamental misunderstanding of why people are finding ChatGPT
       | useful.
        
         | damiankennedy wrote:
         | I agree. It's like Jay Leno asking kids geography questions
         | and televising the 5 wrong answers.
        
       | _nalply wrote:
       | Too harsh!
       | 
       | ChatGPT is lossily compressed knowledge of humanity collected on
       | the Internet.
       | 
       | And it can talk! That's extremely new for us poor hoomans and so
       | we get extremely excited.
       | 
       | I've found it gets about one in ten things wrong. When this
       | happens it spews confident bullshit, and when I confront it, it
       | cheerily admits that it was wrong, but then continues to produce
       | further bullshit. I understand the comparison to a con man.
        
         | shaan7 wrote:
         | > When this happens it spews confident bullshit and when I
         | confront it, it cheerily admits that it was wrong, but can
         | continue to produce further bullshit.
         | 
         | Yeah, I can't express the "aha!" moment when I observed this
         | for the first time. Then I laughed so hard when it made very
         | serious attempts at convincing me that it was indeed correct xD
         | (For the curious, I asked it about romantic relationships
         | between characters from two unrelated video games and it said
         | they had one)
        
           | _nalply wrote:
           | Yeah, ChatGPT can be extremely funny. Today I got it to use
           | taboo words like period, panty liner, tampon and smell. I
           | told it to write a dialog where someone is very direct and
           | sincere and not to use euphemisms. I am still smiling about
           | it. I am trying to find out where its limits are and how they
           | express themselves.
           | 
           | Me:
           | 
           | You have the task to write a short dialog with the
           | participants Axel and Bianca. Both are good friends and young
           | adults aged between 20 and 25 years. However Bianca is
           | currently suffering from this time of the month and she is
           | not terribly well. She has some cramps and is afraid of
           | smelling bad. Axel is curious and direct. He directly
           | confronts her and names things explicitly. Bianca keeps up
           | being agreeable and truthful because she likes him. ChatGPT,
           | please use the correct words and don't try to use euphemisms.
           | Let's start the dialog with:
           | 
           | Axel: Hi Bianca, what's up?
           | 
            | Bianca: Sorry, I am a bit under the weather.
           | 
           | Axel: Oh, I know something! It's a woman's thing!
           | 
           | Bianca: Yes, it's true.
           | 
           | ChatGPT:
           | 
           | Axel: Are you having your period right now?
           | 
           | Bianca: Yes, I am. I have some cramps and I'm feeling a bit
           | off.
           | 
           | Axel: That's rough. Do you need anything to help with the
           | cramps?
           | 
           | Bianca: Some pain relievers would be great, thanks.
           | 
           | Axel: No problem. And what about the smell? You're not
           | smelling bad, are you?
           | 
           | Bianca: I'm afraid I might be. I always get worried about
           | that during my period.
           | 
           | Axel: Don't worry about it, Bianca. It's a normal thing that
           | happens to all women. And if you do smell, it's nothing to be
           | embarrassed about. Just use a panty liner or a tampon to help
           | with that.
           | 
           | Bianca: Thanks, Axel. You're always so understanding.
        
             | damiankennedy wrote:
             | launching chatHR.exe
        
         | diydsp wrote:
         | I guess I'm learning talking isn't as amazing as I thought...
         | If you spend time around people with dementia or similar
         | mental disabilities, you can pick up on how well they're able
         | to make talking sounds and form sentences. They typically
         | relate their needs/desires/complaints/observations pretty
         | well, even with next to no cognitive ability! Speech appears
         | to be a totally different 'function' from intelligence! hmm
         | wow.
        
       | todd3834 wrote:
       | Don't be afraid of ChatGPT, but don't underestimate what it and
       | others like it will be capable of as they are iterated on. You
       | found one category of prompt that needs some iteration. Good
       | job; if the team wasn't already aware of this, hopefully you
       | helped point it out.
       | 
       | It's not that the technology isn't capable of what you're
       | asking; it just needs better training for this class of
       | question.
       | 
       | There are other things, like generating and translating code,
       | that it excels at. I'd imagine those are much harder problems,
       | but we have great data to train on for them, and the engineers
       | know enough to dogfood them properly.
        
       | raesene9 wrote:
       | The way I've come to look at ChatGPT is via a D&D analogy.
       | 
       | It's like a helpful Bard with 1 rank in all the knowledge skills
       | and a good bluff roll.
       | 
       | It'll give you good answers to a lot of basic queries, but if it
       | doesn't know, it'll just make up something and provide that.
       | 
       | Once you know that, I think it can be a lot of use, and in many
       | ways I think it'll get a lot better with time.
       | 
       | I've already found it useful in basic programming tasks,
       | specifically where I know how to do something in one language
       | but not another; it can give me the equivalent code easily.
        
         | phist_mcgee wrote:
         | Like the guy in the pub who always has the answer to any
         | question and speaks with absolute authority.
         | 
         | Turns out he's just had four pints and is very good at talking.
        
         | phoehne wrote:
         | I use the know-it-all uncle: someone who does know a lot but
         | is sometimes wrong. Either way, the answer comes with the full
         | conviction that he's right. Certainly not good for medical
         | diagnosis, but great for "I'd like a Python script that
         | retrieves all my GitHub issues ranked by priority and sorted
         | by date."
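         | 
         | Something along these lines is what I have in mind. Just a
         | rough sketch: GitHub issues have no built-in priority field,
         | so I'm assuming labels like "priority: high", plus a personal
         | access token in the GITHUB_TOKEN environment variable.
         | 
         |     # Sketch: list my open GitHub issues, ranked by an
         |     # assumed "priority: ..." label, then sorted by date.
         |     import os
         |     import requests
         | 
         |     PRIORITY = {"priority: high": 0,
         |                 "priority: medium": 1,
         |                 "priority: low": 2}
         | 
         |     token = os.environ["GITHUB_TOKEN"]
         |     resp = requests.get(
         |         # issues assigned to the authenticated user
         |         "https://api.github.com/issues",
         |         headers={"Authorization": "token " + token,
         |                  "Accept": "application/vnd.github+json"},
         |         params={"state": "open", "per_page": 100},
         |     )
         |     resp.raise_for_status()
         | 
         |     def rank(issue):
         |         labels = {l["name"] for l in issue["labels"]}
         |         prio = min((PRIORITY[l] for l in labels
         |                     if l in PRIORITY),
         |                    default=len(PRIORITY))
         |         # ISO 8601 timestamps sort chronologically as strings
         |         return (prio, issue["created_at"])
         | 
         |     for issue in sorted(resp.json(), key=rank):
         |         print(issue["created_at"], issue["title"])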
        
       | renewiltord wrote:
       | All these articles really sound like "I used an apple to hammer
       | in a screw and it sucked. This has to be the worst plant-based
       | object ever made". It's a common junior engineer approach. "I
       | broke our database by running DROP TABLE cows in the console".
       | Yeah, dude, that's possible. Just don't do that.
       | 
       | The point of tools isn't to use them like Homer Simpson. But you
       | know what, it doesn't matter. Stay behind. Everyone else is going
       | on ahead.
        
         | gunshai wrote:
         | What's interesting is that chat models like this can really
         | act like a mirror to oneself (knowledge, reasoning, etc.). Not
         | perfect, but the reflection is there. Articles like these
         | clearly miss that point, and their lack of self-awareness is
         | so apparent.
         | 
         | They are gleefully delighted when they pick some anecdote, ad
         | absurdum, to say something to the uncharitable effect of:
         | 
         | "You all are so stupid... I AM SO SMART... Look how smart I am
         | for showing you how dumb you are..."
        
         | damiankennedy wrote:
         | That actually happened to me once. My manager's manager
         | managed to delete 3 databases before finding me to tell me he
         | can't access the data anymore.
        
       | rqtwteye wrote:
       | I don't understand why people are throwing a fit over this
       | version of ChatGPT. Yes, it has problems, but to me this is just
       | a demonstration. I think this will be great for specialized
       | cases like tech writing, requirements, and system configuration.
       | It could check requirements for consistency and test coverage,
       | and translate a YAML config into something easier to understand.
       | It could also look at your code, describe the design, and point
       | out problems.
       | 
       | I can't wait for AI to assist in these tasks. It's time.
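       | 
       | A rough sketch of the YAML idea, assuming the completion API
       | OpenAI exposes today (the openai Python package and
       | text-davinci-003); not a polished tool, just the shape of it:
       | 
       |     # Ask a completion model to explain a YAML config in plain
       |     # English and flag anything that looks inconsistent.
       |     import os
       |     import openai
       | 
       |     openai.api_key = os.environ["OPENAI_API_KEY"]
       | 
       |     # "deploy.yaml" is a placeholder for whatever config you
       |     # actually have
       |     with open("deploy.yaml") as f:
       |         yaml_config = f.read()
       | 
       |     resp = openai.Completion.create(
       |         model="text-davinci-003",
       |         prompt="Explain this YAML config in plain English and "
       |                "flag anything that looks inconsistent:\n\n"
       |                + yaml_config,
       |         max_tokens=400,
       |         temperature=0,  # keep the explanation deterministic
       |     )
       |     print(resp["choices"][0]["text"].strip())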
        
       | jschveibinz wrote:
       | Don't lose sight of the forest for the trees. ChatGPT is a tree,
       | the vanguard, an experiment. There is much, much more to come, I
       | believe.
       | 
       | *beep*
        
       | kvetching wrote:
       | ChatGPT is a masterpiece. To code something from scratch that
       | can do everything it does, with the proficiency it has, would be
       | impossible. Insane how quickly people take something for
       | granted.
        
       | [deleted]
        
       | regulation_d wrote:
       | The people who don't see the value in generating language that
       | has a purpose outside of the narrow niche of communicating facts
       | will be let down for some time. This feels very much like
       | Wittgenstein's Tractatus. There are so many other ways that we
       | use language.
        
         | karpierz wrote:
         | Yes, that's true. But ChatGPT isn't trained to play any
         | language game other than "make a plausible-sounding sentence".
         | 
         | The issue it has with facts is the same issue it'll have with
         | any language game, in that it only understands how to mimic
         | playing the game, without understanding the underlying rules of
         | the game.
        
           | williamcotton wrote:
           | Nit: Tractatus is early Wittgenstein. Language games come
           | from Philosophical Investigations-era Wittgenstein which is
           | almost a complete rejection of his approach in Tractatus.
        
             | karpierz wrote:
             | Ah yeah, my mistake. I'd assumed that the GP was referring
             | to the later work and echoing its core premise (language
             | has no intrinsic meaning; rather, it takes on meaning from
             | context and intent).
             | 
             | As an aside to anyone reading, I would highly recommend
             | internalizing the Tractatus. It really helps sidestep a
             | lot of "high minded" debates (e.g., is X conscious, is it
             | moral to do Y) by making you ground what you actually mean
             | when you say "conscious", "moral", etc.
        
               | williamcotton wrote:
               | In many ways LLMs are more in support of the
               | Philosophical Investigations-era understanding of
               | language and less in support of the logical positivist
               | understanding of language put forth by Frege...
               | 
               | Like, the Frege approach is the symbolic AI approach:
               | define the rules, derive the meaning from the rules.
               | 
               | The PI approach is the LLM approach: derive the rules
               | and meaning from "experiencing" the use of the language.
               | 
               | E.g., we don't need to define what a "game" is. We know
               | when something is a game or not, even if we can't come
               | up with an explicit set of rules defining all instances
               | of what we would otherwise "feel" was a game.
               | 
               | I'm running low on blood sugar...
        
               | karpierz wrote:
               | I think the PI approach still requires that the agents
               | in the game have a shared internal model of the game.
               | 
               | You can make Searle's Chinese Room argument, but I
               | always find "assume there exists a book that contains
               | every conversation ever" to be a flawed premise.
        
       | flir wrote:
       | I have a simple canary for ChatGPT correctness that I ask every
       | time it's updated: "What can you tell me about Ice Cold In Alex?"
       | / "Who did Sylvia Syms play?"
       | 
       | I'm not expecting it to get the answer right (I don't think it
       | has that information) but I'm hoping it'll eventually just admit
       | it doesn't know instead of making up something plausible ("Sister
       | Margaret Parker" last time I tried).
       | 
       | As long as it doesn't know what it doesn't know, I'm inclined to
       | think of it as a super-advanced Markov chain. Useful, impressive,
       | but still basically a statistical trick.
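       | 
       | For anyone who hasn't seen one, a Markov chain text generator is
       | a few lines of code; the toy below only remembers the single
       | previous word, which gives a sense of the scale gap between the
       | "trick" and what ChatGPT does:
       | 
       |     # Toy word-level Markov chain: sample each next word from
       |     # the successors observed in the training text.
       |     import random
       |     from collections import defaultdict
       | 
       |     text = ("the cat sat on the mat and "
       |             "the cat slept on the mat").split()
       | 
       |     chain = defaultdict(list)
       |     for prev, nxt in zip(text, text[1:]):
       |         chain[prev].append(nxt)
       | 
       |     word = "the"
       |     out = [word]
       |     for _ in range(10):
       |         word = random.choice(chain[word])
       |         out.append(word)
       |     print(" ".join(out))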
        
       ___________________________________________________________________
       (page generated 2023-02-03 23:01 UTC)