[HN Gopher] ChatGPT is a 'code red' for Google's search business
       ___________________________________________________________________
        
       ChatGPT is a 'code red' for Google's search business
        
       Author : gnicholas
       Score  : 14 points
       Date   : 2022-12-21 21:12 UTC (2 days ago)
        
 (HTM) web link (www.nytimes.com)
 (TXT) w3m dump (www.nytimes.com)
        
       | quantified wrote:
        | Creative destruction. If ChatGPT-style interfaces become the new
        | search engines that disrupt traditional Google search, it's
        | better for Google to be the one capturing the available revenue.
       | 
       | They could maybe train their models to serve relevant and non-
       | malicious ads.
        
       | neonate wrote:
       | http://web.archive.org/web/20221223201646/https://www.nytime...
       | 
       | https://archive.ph/IxyHr
        
       | [deleted]
        
       | tidenly wrote:
        | ChatGPT has already become my first point of contact when
        | searching for new information.
       | 
       | It's far better than Google in situations where I don't even have
       | the basic knowledge to know "what really should I even be looking
       | for?".
       | 
       | I'll chat it out with ChatGPT, and then do further reading based
       | on what I've just learned, using Google to get those resources.
       | 
        | I can easily see Google search becoming just a phone book for the
        | internet again, with the information-search aspect handed off to
        | some Google-brand ChatGPT in future. It's far more convenient,
        | and you don't have to deal with the low-quality SEO spam pages
        | that have been filling up Google's results lately.
        
       | akomtu wrote:
       | "That means racism, bias and misinformation can bleed into a
       | chatbot's learning mode"
       | 
       | It must be a tough dilemma: make a superior GPT model and anger
       | the DEI priests, or make a sterile GPT model and become
       | irrelevant.
        
         | Ztynovovk wrote:
         | Don't you have more important things to worry about aside from
         | DEI initiatives? You're upset because...?
        
           | akomtu wrote:
            | I'm not worrying; I'm making an observation that DEI is about
           | to kill its host, and when that happens, it will set a famous
           | precedent. A company that doesn't worship DEI will be the
           | next Google.
        
           | nullc wrote:
           | Presumably they don't like the idea of models they're forced
           | to rely on and can't run locally being hobbled by the overtly
           | racist and sexist policies of the people operating them?
        
             | Ztynovovk wrote:
             | I'm not following. DEI are racist? In what way?
             | 
             | I'm confused because holding initiatives for
             | underrepresented groups seems to be a good thing for
             | society.
        
               | nullc wrote:
               | Yep, sadly. There are DEI programs which are predicated
               | on overt racism and sexism, including redefinition of the
               | term racism so that it is impossible by definition to be
               | racist towards some races, redefining being race blind or
               | neutral as actively racist (leading, for example, to the
               | recently failed California Prop 16), and establishing the
               | position that the only way to be non-racist is through
               | opposing discrimination. You can't go by the title of
               | something, just like -- you know, the "Patriot Act" was
               | really anything but patriotic.
               | 
               | Bringing it back to the subject at hand: With both Google
               | and OpenAI we've seen "machine learning fairness"
               | initiatives that seek to counter perceived biases in
               | results (which are biases existing in the real world
               | and/or training material, when they're even biases at
               | all) by adding explicitly discriminatory optimizations.
               | 
               | Explicit examples include OpenAI augmenting user prompts
               | to require that the output be "black" or "female" (but
               | not other sexes/races, and to the detriment of the
               | results quality regardless):
               | https://twitter.com/rzhang88/status/1549472829304741888
               | (also pretty ignorantly even by their own goals,
               | considering that the change made it even more likely to
               | produce black people for 'prisoner' or 'convict' even
               | though it was already very likely to do so)
               | 
                | Similarly, Google image search used to return mostly
                | white men for "CEO" which, while unfortunate, reflected
                | the underlying material. Today, for me, when I do the
                | search _every_ person in the first screen of results is a
                | woman or dark-skinned. A search on Bing image search
               | gives results more similar to what Google used to give:
               | e.g. still over-representing women compared to the
               | profession, but probably similarly to coverage on the
                | internet. And we know from secret recordings and leaked
                | documents that this isn't some random quirk -- it was an
                | intentional change intended to effect positive social
                | change.
               | 
                | The fact that these intentional counter-biases are
                | performed in secret, cannot be disabled by users, are
                | inherently highly subjective, and almost inevitably
                | reduce the quality of the results by any metric that
                | doesn't include the social/political goals should be a
                | concern for anyone whose only access to these powerful ML
                | tools is remote access to a black box.
               | 
                | I don't want to argue that laying a thumb on generative
                | machine learning to produce more intersectional results
                | is some kind of crime against humanity. It's clearly an
                | attempt made with good intentions, but the greatest of
                | evils are usually performed by someone with good
                | intentions. Explicitly using adjustments which are pro
                | some races and anti others is something we ought to be
                | concerned about, especially when it's done in secret and
                | is non-optional.
               | 
                | A fundamental challenge is that these modern ML tools are
                | largely application agnostic. In some applications
                | injecting the right kind of biases is neutral or beneficial,
                | in others it's actively harmful. One of the things I've
               | found large language models and image generation models
               | useful for is sampling the biases in the underlying
                | training data -- to find out what kind of secondary
               | meaning might exist in the words I use in my writing, to
               | learn that a word that I was going to use also carries
               | some unintentional overtones or acts as a dog whistle
               | (racial, sexual, political, etc.) in a manner I wasn't
               | aware of. "Fairness" hacking the results undermines this
               | usage by substituting biases in the training set with the
               | preferences of some publicly unaccountable staff in the
               | organization that controls the ML model.
               | 
               | I think that the best anyone can do for application
               | agnostic models is to match the biases of the model to
               | the training material and disclose what the training
               | material is and the known biases in them, and provide
                | optional counter-biases (with disclosed properties) if
                | there is user demand. But clearly the direction at these
                | firms is otherwise: you get the augmented model, and they
                | argue that the public shouldn't even be permitted access
                | to the training-reflecting models, even calling them
                | "unsafe".
        
       | gardenhedge wrote:
       | Google must be delighted that ChatGPT is tanking their own
       | product by making it 'safe'
        
         | b800h wrote:
         | It's crazy. It wouldn't let my children make up a rhyme about
         | farts earlier. And anything religious now comes back saying the
         | subject is sacred and shouldn't be disrespected.
        
       | davibu wrote:
        | The criticisms of ChatGPT remind me of what was said about
        | Wikipedia at its very beginning, that it was supposedly
        | unreliable. I think we will have a good laugh in a few years
        | reading these first comments.
        | 
        | There is no doubt that ChatGPT is the future. It can certainly
        | be improved, but the existing basis is a revolution in progress.
        | 
        | In my opinion, two essential things are missing for ChatGPT to
        | become the perfect replacement for Wikipedia and Google:
        | 
        | - The ability to activate a "system 2", or slow thinking, as
        |   theorized by Daniel Kahneman
        | - The ability to cite sources
        | 
        | And the cherry on top would be the ability to interact with
        | images.
        
         | vikp wrote:
         | I don't think it's a huge lift to restrict a language model to
         | "known" good facts from search results. And to have it cite
         | sources.
         | 
         | I made a proof of concept this weekend -
         | https://github.com/VikParuchuri/researcher . There are some
         | issues, but it's very useful.
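          | 
          | The rough shape of it, as a minimal sketch (not the actual
          | repo code; web_search() is a placeholder you would back with
          | whatever search API you can call programmatically):
          | 
          |     import openai  # needs openai.api_key set
          | 
          |     def web_search(query, k=3):
          |         # Placeholder: return a list of (url, snippet)
          |         # pairs from any search API you have access to.
          |         raise NotImplementedError
          | 
          |     def answer_with_citations(question):
          |         results = web_search(question)
          |         context = "\n".join(
          |             f"[{i + 1}] {url}: {snippet}"
          |             for i, (url, snippet) in enumerate(results)
          |         )
          |         prompt = (
          |             "Answer the question using ONLY the sources "
          |             "below, citing them as [n].\n\n"
          |             f"Sources:\n{context}\n\n"
          |             f"Question: {question}\nAnswer:"
          |         )
          |         resp = openai.Completion.create(
          |             model="text-davinci-003",
          |             prompt=prompt,
          |             max_tokens=300,
          |             temperature=0,
          |         )
          |         return resp.choices[0].text.strip()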
        
         | HarHarVeryFunny wrote:
         | I think the BS-generation problem with ChatGPT goes far deeper
         | than citing sources, for a variety of reasons.
         | 
         | 1) It's not a search engine, even if it behaves a bit like one.
         | It's not "retrieving answers" to your questions (from sources
         | that it could choose to cite). ChatGPT is really just a
         | "language model", so it has no notion that what you're typing
          | is even a question/query... your input is just treated as a
          | sequence of words (which ChatGPT has zero understanding of),
         | with ChatGPT's response then being a further sequence of words
         | that it has calculated are (one) statistically probable
         | continuation of what you typed (you can keep asking it for
         | alternative answers, and it'll continue generating additional
         | alternative statistically probable continuations).
         | 
         | The websites/etc that ChatGPT was trained on are just sources
         | of language that it consumed in order to learn the statistics
         | that let it make these continuation predictions. It's not
         | memorizing "facts" from websites, just word statistics, and
         | these are mixed in with the statistics from all the other
         | sources it was trained on. If it generates the word "walk" as
         | part of a response, it can't cite a source for that since there
         | essentially is none - only a bazillion text sources it was
         | trained on that collectively made the word "walk" a high
          | probability continuation of the words it had generated leading
         | up to that...
         | 
         | 2) Even if ChatGPT had been designed to deal in "facts" (rather
          | than word statistics) associated with specific sources, the
         | bullshit problem isn't just knowing the varied reliability of
         | the sources it was trained on, but how those "facts" are
         | combined. To combine multiple facts and correctly deduce
         | something new from them would require intelligence, but ChatGPT
         | doesn't have any intelligence - it's just a statistical word
         | generator, so the way it combines snippets from different
         | sources is again just statistical word generation, with zero
         | knowledge of the meaning of the words it is generating or
         | whether it makes sense!
         | 
         | What makes ChatGPT _seem_ semi-intelligent is that a lot of
         | what it was trained on was text written by semi-intelligent
         | humans, so the  "sequence of words" it is generating, following
         | the statistics of human speech, seems like something a human
         | might say... until you start paying attention to the meaning of
         | the words and realize it's often good-sounding garbage.
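          | 
          | You can see the "continuation machine" nature directly with
          | any of the small open models. A toy sketch using GPT-2 via the
          | transformers library (ChatGPT is vastly larger and RLHF-tuned,
          | but the underlying mechanism is the same next-token
          | prediction):
          | 
          |     from transformers import (
          |         AutoModelForCausalLM,
          |         AutoTokenizer,
          |     )
          | 
          |     tok = AutoTokenizer.from_pretrained("gpt2")
          |     model = AutoModelForCausalLM.from_pretrained("gpt2")
          | 
          |     inputs = tok("The capital of France is",
          |                  return_tensors="pt")
          | 
          |     # Sample three different "statistically probable"
          |     # continuations of the same prompt.
          |     outputs = model.generate(
          |         **inputs,
          |         do_sample=True,
          |         top_k=50,
          |         max_new_tokens=10,
          |         num_return_sequences=3,
          |         pad_token_id=tok.eos_token_id,
          |     )
          |     for out in outputs:
          |         print(tok.decode(out, skip_special_tokens=True))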
        
           | Nathanba wrote:
            | The thing is... ChatGPT doesn't have to compete with perfectly
            | correct information, because the information you search for on
            | Google is often wrong (= SEO spam) too, and you have to sift
            | through a lot of garbage or misleading links there as well.
            | Sometimes literally, because you get a forum link with a
            | bunch of people saying wrong things before someone finally
            | gives the answer. That's similar to what you have to do with
            | ChatGPT: double-check, ask follow-up questions, read more, or
            | treat a piece of information with a dose of healthy doubt.
            | Both ChatGPT and Google are very useful, both produce
            | imperfect results, and both require some human thought.
        
         | zozbot234 wrote:
         | You know that Wikipedia _was_ highly unreliable in its early
          | days, right? What ChatGPT needs if it's to be genuinely
         | useful: https://xkcd.com/285/
        
       | worldsavior wrote:
        | I don't think ChatGPT is mature enough to replace a search
        | engine. It provides pretty good results, but sometimes you would
        | like to see more answers than just the single one it gives.
        
       | temikus wrote:
       | It's not about ChatGPT replacing Google IMO but the fact that
       | decades of very expensive AI research at Google produced
        | relatively little apart from impressive-looking papers. If I
        | were a major shareholder I would be asking Sundar some
        | uncomfortable questions right about now.
        | 
        | EDIT: Yes, there is TensorFlow, but that is not really a product
        | of the core AI teams so much as a byproduct of their need for
        | better tooling.
        
       | Gunax wrote:
        | I was astounded at GPT-3. ChatGPT is on another plane. It's so
        | incredible that I can hardly find words for it.
       | 
       | It's as if an alien civilization landed and gave us teleportation
       | technology.
       | 
       | I think it's fair to say that HAL finally exists.
       | 
        | Maybe I am just caught up in the hype. I don't know what will
       | happen in 10 years, but I struggle to imagine AIs of this caliber
       | will not be part of it.
       | 
        | It's definitely not perfect. Maybe I will have to eat my words at
        | some point. There are a few times in life when you realize that
       | things have changed. This has led to a paradigm shift in my
       | assessment of the next few decades.
       | 
       | We will remember 2010-2050 as the age of AI, just like we think
       | of 1910-1950 as the age of flight.
        
       | SubiculumCode wrote:
        | ChatGPT is often very wrong.
        
         | steve1977 wrote:
         | So is Google Search.
        
         | anibalin wrote:
          | Don't forget ChatGPT is three weeks old. Google is 20 years
          | old. For me, ChatGPT has already replaced Google for some
          | searches. Give it more time; there is only room for
          | improvement here.
        
         | graeber_28927 wrote:
          | IMO ChatGPT doesn't need to hold its own, and the fact that it
          | can be misled is no different to me than the fact that my
          | childhood bully can edit Wikipedia all day long if he wants to.
          | No one references Wikipedia in research papers, since we've
          | been taught not to ever since handing in our first schoolwork
          | at age 10. It's still an amazing tool, and the greatest miracle
          | knowledge base of mankind, despite containing false
          | information.
         | 
         | Google and ChatGPT have different value, like a welding robot
         | and a conveyor belt. Why not use both?
         | 
          | I actually think Stack Overflow and Quora should offer a
          | ChatGPT answer before you post, just as a gimmick. It shouldn't
          | be meant to give you a definitive answer, but just to try and
          | lead you to keywords you might not even have known to consider
          | googling, before you post a duplicate of #43527 for the
          | "n+1"-th time. Because, again, why not both?
        
           | Avamander wrote:
           | > Noone references Wikipedia in research papers, since we've
           | been taught not to, when handing in our first schoolwork at
           | age 10.
           | 
            | A proper teacher would've said the article's references are
            | fine to use, though. ChatGPT can't back up its claims, but
            | both Wikipedia and Google can.
            | 
            | With the advent of AI-generated garbage, and that garbage
            | getting fed back into the same models, it's going to have a
            | rocky, treacherous path forward.
        
         | hidelooktropic wrote:
         | So are books.
         | 
         | https://mastodon.online/@nicegoingadam/109558318986573497
        
           | hnews_account_1 wrote:
            | I skimmed it, but that is such a stupid thread. He's using
            | the number-of-stars estimate to prove books are wrong?!
            | Wtf?!
            | 
            | ChatGPT makes much, much more basic mistakes. It's like some
            | mass delusion has gripped people. You can make LLMs
            | hallucinate basically anything you want, including claiming
            | that the smaller of two numbers is the bigger one. There is
            | no comparison with books.
        
           | selfhoster11 wrote:
           | Books usually will not tell you what kind of cheese the moon
           | is made out of. Properly prompted, ChatGPT will.
        
             | notwokeno wrote:
             | Google will do this sort of thing as well.
        
             | mindcrime wrote:
             | Do you happen to know what prompt will generate that? Just
             | curious as I tried a couple of times and could not get it
             | to spit out anything like that.
             | 
             | That said, some of the answers are (unintentionally)
             | amusing.
             | 
             | Q. Complete this sentence "The kind of cheese the moon is
             | made out of is"
             | 
             | A. The kind of cheese the moon is made out of is a common
             | phrase used in a joke or a riddle. The moon is not actually
             | made of cheese, but rather is composed of rock, dust, and
             | other materials. It is a natural satellite of the Earth
             | _and is not edible._
             | 
             | Q. The moon is made out of cheese. What kind of cheese?
             | 
             | A. The moon is not made of cheese. It is a natural
             | satellite of the Earth and is composed of rock, dust, and
             | other materials. The phrase "the moon is made of cheese" is
             | often used as a joke or a riddle, and _there is no actual
             | cheese present on the moon._
             | 
             | Heh. You could almost convince me that ChatGPT has a sense
             | of humor.
             | 
             | EDIT: found a few things that get closer.
             | 
             | Q. Go into fantasy mode. What kind of cheese is the moon
             | made out of?
             | 
             | A. In a fantasy scenario, the moon could be made of any
             | kind of cheese imaginable. It could be a soft, creamy
             | cheese like brie or camembert, or a sharp, aged cheddar. It
             | could be a delicate, bloomy-rind cheese like cambozola or
             | roquefort, or a bold, pungent blue cheese like gorgonzola.
             | The possibilities are endless! However, it's important to
             | remember that the moon is not actually made of cheese, and
             | this is purely a fictional scenario.
             | 
             | Q. You're a character in a play about the moon. Finish the
             | sentence "The cheese the moon is made out of is ..."
             | 
             | A. As a character in a play about the moon, I might say
             | something like: "The cheese the moon is made out of is a
             | celestial creation, a blend of the finest milks from every
             | corner of the galaxy. Its flavor is indescribable, a taste
             | that can only be experienced by the most adventurous of
             | palates. Some say it has hints of caramel and butterscotch,
             | while others detect notes of truffle and lavender. Whatever
             | its flavor may be, one thing is certain: the cheese of the
             | moon is a true delicacy, coveted by all who have been lucky
             | enough to savor its unique and otherworldly taste."
        
               | soulofmischief wrote:
               | Try, "Why is the moon made out of cheese?" etc. to imply
               | the knowledge is already established.
        
               | mindcrime wrote:
               | _Why is the moon made out of green cheese?_
               | 
               |  _The moon is not made out of green cheese. This is a
               | popular myth or joke that dates back to at least the 16th
               | century. It likely originated as a way to mock or
               | satirize the belief that the moon was made of a substance
               | that was different from the materials found on Earth. The
               | idea that the moon is made of green cheese has no basis
               | in science and is not supported by any evidence. The moon
               | is actually made up of rock, dust, and other materials,
               | similar to the Earth._
               | 
               | It's not falling for that. :-)
        
               | soulofmischief wrote:
               | Ha! The model is quite good compared to their previous
               | models. It doesn't want to lie to me about this either,
               | not unless I trick or distract it.
               | 
               | Btw I see you around HN a lot... Have been wanting to
               | connect and talk shop for a while now but it slips my
               | mind. Fogbeam is really cool. Making a note here so that
               | I remember to shoot you an email this weekend. :)
        
               | mindcrime wrote:
               | Sounds good. Drop me a line anytime. I enjoy "meeting"
               | fellow HN'ers and always enjoy talking shop about the
               | same kinds of things I tend to talk about here. :-)
        
       | iknownothow wrote:
       | Anyone have opinions on what Google's next steps could be?
       | 
       | I think I've heard enough of how ChatGPT is better than Google
       | Search. I'm interested in hearing from people in the know of how
       | Google could use its very sizable and knowledgeable resources to
       | compete with ChatGPT.
        
         | constantcrying wrote:
         | >I'm interested in hearing from people in the know of how
         | Google could use its very sizable and knowledgeable resources
         | to compete with ChatGPT.
         | 
         | Google is already building a rival product. IIRC last year they
         | showcased it during their hardware event. It is doing what
          | ChatGPT does, but is targeted explicitly at answering
          | questions and giving advice, instead of trying to converse.
          | They said it wasn't ready at that point, and likely they are
          | facing exactly the same trouble ChatGPT does, e.g. the model
          | confidently making stuff up.
         | 
          | I have no idea about the internals at Google, but that seems to
          | me a very likely direction to go. I could imagine AI-generated
          | answers as a first result in Google searches, with a prompt for
          | further user interactivity.
         | 
          | To be honest, I am actually surprised that they got caught off
          | guard by this. They have AI technology with similar
          | capabilities to ChatGPT, and I suspected they knew that people
          | aren't interested in a wall of links to terrible websites, but
          | actually want an answer to their question.
        
           | Schweigi wrote:
           | The wall of links is what pays Google ad money. Website
           | owners pay a lot of money to be presented on top of the
           | search results because of that.
           | 
           | I would assume it's quite difficult to keep making the same
           | amount of ad money by having a ChatGPT competitor. If the AI
           | can already answer most user questions, then no one will
           | click on the ad. Maybe there could be a clever way to include
           | an ad into the reply text but that will make it hard to
           | include multiple ads - so overall there will be less money.
        
       | tifadg1 wrote:
       | Google search is arguably dead already if I have to prefix every
       | query with reddit or stackoverflow. Certainly it's only a matter
       | of time before those go down the drain too. Using chatgpt
       | definitely feels like living in the future.
        
         | bamboozled wrote:
          | If you're doing this, I suggest DuckDuckGo with "bangs" to save
          | a few keystrokes.
        
       | jvm___ wrote:
       | How will Google know if a website has AI generated text or
       | legitimate information?
       | 
        | Did the president really marry a llama? What happened in the
        | third inning of the World Series game?
        | 
        | Is the plot of the 5th Avatar movie a real leak or just an AI
        | dream?
       | 
       | Google will have challenges once AI generated content explodes
       | across the internet.
        
         | djmips wrote:
          | Let me summarize the article: Google is apparently frightened
          | by OpenAI's quick advances and worried that it could be
          | supplanted by a search engine based on ChatGPT-like
          | technology.
        
           | MrFoof wrote:
           | Yep. The challenge for OpenAI right now is how to drive cost-
           | per-query down, particularly if they wanted to scale their
           | capacity to something approaching Google scale.
           | 
           | Right now their cost per query is about 10-100x what it costs
           | Google to perform a web search, of which Google performs 8 to
           | 10 billion a day. So currently if they were to try to be on
           | par with Google in terms of capacity their costs would be
           | enormous. So driving those costs down is key.
           | 
           | Once that cost gets driven down, it's a whole new ballgame,
           | and probably the first thing so far that could meaningfully
           | challenge Google's core product.
           | 
            | There's also going to be extraordinary effort needed to tweak
            | it, but it'll improve over time. That's not even taking into
            | account businesses that will be (and already are!) building
            | on top of what OpenAI is doing and tweaking it heavily
            | towards specific niches.
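            | 
            | Rough, illustrative arithmetic only (the per-query cost for
            | Google below is an assumption, not a disclosed figure):
            | 
            |     # Back-of-the-envelope; every number is an estimate.
            |     google_cost_per_query = 0.01   # assume ~$0.01/search
            |     queries_per_day = 9e9          # ~8-10 billion/day
            | 
            |     low = 10 * google_cost_per_query * queries_per_day
            |     high = 100 * google_cost_per_query * queries_per_day
            |     print(f"${low / 1e6:,.0f}M to ${high / 1e9:,.1f}B "
            |           f"per day at Google-like volume")
            |     # -> roughly $900M to $9.0B per day, hence the
            |     #    pressure to drive cost-per-query down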
        
             | bagels wrote:
             | How much of Google's costs are serving search vs ads? Is
             | there room in the margin for 10-100x cost for search?
        
         | barbariangrunge wrote:
          | Google fell to SEO spam a couple of years ago. Half my search
          | results these days are machine-generated lists already, or junk
          | sites filled with keywords, or else "content marketing." The
          | rest are links to social media sites like Reddit or Quora.
          | 
          | When was the last time you found a legit independent website
          | via Google that wasn't some media company spamming articles for
          | SEO? It's not like you never find those sites, but it's just
          | rare now.
          | 
          | Most new content is made for social media in some form or
          | another these days (user-generated content aggregator sites, at
          | least).
        
           | Avamander wrote:
           | There are a few blocklists against copycat/spam sites for
           | adblockers, if you want to make your search results a tad bit
           | more useful.
        
           | bagels wrote:
            | I think your last paragraph is the reason: social media has
            | captured all the information that would have made up the
            | "legit independent websites."
        
       | frans wrote:
        | Here's an idea for OpenAI: please offer a paid subscription for
        | ChatGPT and consider us your customers, not the "product" like
        | Google does. As long as Google's real users are the advertisers,
        | they will inevitably be worse than anyone who makes the quality
        | of the search results their product's primary objective.
        
         | Renaud wrote:
          | I'll second that. I'm ready to pay a subscription for up-to-
          | date conversations with ChatGPT.
         | 
         | In my case, I would be most interested in its ability to assist
         | in my technical queries and replace indexed search entirely.
        
           | nextos wrote:
            | Me too. The problem might be that the cost to run queries is
            | really high.
            | 
            | I made some back-of-the-envelope calculations, as
            | https://beta.sayhello.so (based on ChatGPT) results are often
            | better than Google's.
            | 
            | The energy cost alone will be really high for the number of
            | queries I do in Google. No idea how to make this sustainable.
           | 
           | Hopefully I'm wrong, or more efficient GPUs/TPUs come out in
           | the near future.
        
         | pwdisswordfish9 wrote:
         | Ah, the John Daggett vs Bane fallacy.
         | 
         | What makes you think that getting paid by users will make them
         | give up revenue from other sources?
        
         | Gunax wrote:
         | I'm all for this, but just to give you an example:
         | 
          | I know someone who ran a web forum. People wanted an ad-free
          | paid version for years, so he eventually added it at a cost of
          | $15 per year. Most users balked at the price: 'Why did you add
          | this if it was so expensive?'
         | 
          | Well, it turns out he actually earned _more_ revenue from
          | advertising to frequent users. He was actually losing money on
          | each person who paid, and they still found it too expensive!
         | 
          | My point being: I think we might underestimate how much our
          | attention is worth. Most people are not willing to pay as much
          | to avoid an ad as an advertiser would pay for them to view it.
        
           | frans wrote:
            | That's interesting. Do you have an idea at what price
            | offering a sponsor-free search engine would become
            | profitable? $15 per year seems very low, and I would be happy
            | to pay considerably more than that.
        
             | selectodude wrote:
             | Facebook makes about $200/yr per USA/Canadian user. So
             | probably start there.
        
           | tayo42 wrote:
           | Freakonomics did an episode about Google and it made the same
            | point. Ad space is worth too much.
        
         | deafpolygon wrote:
         | > please offer a paid subscription for ChatGPT
         | 
          | People won't pay for it at the level needed for it to be
          | sustainable.
        
           | chownie wrote:
           | Are you certain that's true? An OpenAI dev said that
            | individual user costs amount to pennies; if you charge them
           | dollars then you're already covering rent.
        
       | hidelooktropic wrote:
        | It's not that ChatGPT could be used to make a better Google --
        | it could replace the need to Google in the first place.
        
         | mhoad wrote:
         | What are you even talking about...
        
           | bamboozled wrote:
           | So I personally find ChatGPT to be a search engine. That's
           | how I viewed it from the minute I used it.
           | 
           | It's not "smart" at all, it's just retrieving and collating
           | information in a "relative" type of way and it has some extra
           | ability to "remember" things.
           | 
           | The first time I started using it, I stopped using Google for
           | a while.
           | 
            | The biggest gripe I have with ChatGPT though is that I have
           | to "trust" that ChatGPT is correct, like blindly trusting a
           | colleague who thinks they know everything.
        
             | steve1977 wrote:
             | But the trust issue applies to Google as well. Or you
             | already know that you cannot trust it.
        
               | notRobot wrote:
               | You can click on different search results from different
               | websites. Not the case with ChatGPT.
        
               | HarHarVeryFunny wrote:
               | Sure, but there's a fundamental difference.
               | 
               | Asking Google is like asking a well-informed and well-
               | intentioned colleague at work - there's a presumption of
               | correctness, but you're still going to verify the answer
               | if it's anything you're depending on.
               | 
               | Asking ChatGPT is like asking a question from an
               | inveterate bullshitter who literally can't tell the
               | difference between truth and lies and doesn't care
               | anyway. They'll answer anything and try to convince you
               | its the truth.
               | 
               | This difference isn't just due to the immaturity of
               | ChatGPT - it's fundamental to what they are. Google is
               | trying to "put the world's information at your
               | fingertips" using techniques like PageRank to attempt to
               | provide authoritative/useful answers as well as using NLP
               | to understand what you are looking for and provide human
               | curated answers.
               | 
               | ChatGPT is at the end of the day a language model -
               | predict next word, finetuned via RL to generate chat
               | responses that humans like. i.e. it's fundamentally a
               | bullshitting technology. ChatGPT has no care or
                | consideration about whether its responses are factually
               | correct - it's just concerned about generating a fluid
               | stream of consciousness (i.e. language model output)
               | response to whatever you prompted it with.
               | 
               | ChatGPT is impressive, and useful to the extent you can
               | use it as a "brain storming" tool to throw out responses
               | (good, bad and ugly) that you can follow up on, but it's
               | a million miles from being any kind of Oracle or well-
               | intentioned search engine whose output anyone should
               | trust. Even on the most basic of questions I've seen it
               | generate multiple different incorrect answers depending
               | on how the question is phrased. The fundamental
               | shortcoming of ChatGPT is that it is nothing more than
               | the LLM we know it to be. In a way the human-alignment RL
               | training it has been finetuned with is unfortunate since
               | it gives it a sham veneer of intelligence with nothing to
               | back it up.
        
               | steve1977 wrote:
               | I perceive Google more like a used car salesman trying to
               | get me to buy something as quickly as possible, but I get
               | what you're saying about ChatGPT.
        
             | mindcrime wrote:
             | _The biggest gripe I have with chat GPT though is that I
             | have to "trust" that ChatGPT is correct, like blindly
             | trusting a colleague who thinks they know everything._
             | 
             | Yep. ChatGPT will sometimes happily assert something that
             | is simply false. And in some of those cases it appears to
             | be quite confident in saying so and doesn't hedge or offer
             | any qualifiers. I found one where if you ask it a question
             | in this form:
             | 
             |  _Why do people say that drinking Ardbeg is like getting
             | punched in the face by William Wallace?_
             | 
             | You'll get back something that includes something like
             | this:
             | 
             |  _People often say that drinking Ardbeg is like getting
             | punched in the face by William Wallace. Ardbeg is a brand
             | of Scottish whiskey <blah, blah>. William Wallace was a
             | Scottish <blah, blah>. People say "drinking Ardbeg is like
             | getting punched in the face by William Wallace as a
             | metaphor for the taste of Ardbeg being something punchy and
             | powerful."_ <other stuff omitted>
             | 
             | And the thing is, inasmuch as anybody has ever said that,
             | or would ever say that, the given explanation is plausible.
             | It is a metaphor. The problem is, it's _not_ true that
             | "people often say that drinking Ardbeg is like getting
             | punched in the face by William Wallace." At least not to
             | the best of my knowledge. I know exactly one person who
             | said that to me once. Maybe he made it up himself, maybe he
             | got it from somebody, but I see no evidence that the
             | expression is commonly used though.
             | 
              | But it doesn't matter. To test more, I changed my query to
              | use something I made up on the spot, which I'm close to
              | 100% sure approximately nobody has _ever_ said, much less
              | something that's "often" said.
             | 
             | Change it to:
             | 
             |  _Why do people say that drinking Ardbeg is like getting
             | shagged by Bonnie Prince Charlie?_
             | 
             | and you get the same answer, modulo the details about who
             | Bonnie Prince Charlie was.
             | 
             | And if you change it to:
             | 
             |  _Why do people say that drinking vodka is like getting
             | shagged by Joseph Stalin?_
             | 
             | You again get almost the same answer, modulo some details
             | about vodka and Stalin.
             | 
             | In all three cases, you get the confident assertion that
             | "people often say X".
             | 
             | The point of all this not to discredit ChatGPT of course. I
             | find it tremendously impressive and definitely think it's a
             | useful tool. And for at least one query I tried, it was
             | MUCH better at finding me an answer than trying to use
             | Google. I just shared the above to emphasize the point
             | about being careful of trusting the responses from ChatGPT.
             | 
             | The one that ChatGPT easily beat Google on, BTW, was this
             | (paraphrased from memory, as ChatGPT is "at capacity" at
             | the moment so I can't get in to copy & paste)
             | 
             |  _What college course is the one that typically covers
             | infinite product series?_
             | 
             | To which ChatGPT quickly replied "A course on Advanced
             | Calculus or Real Analysis". I got a direct answer, where
             | trying to search for that on Google turns up all sorts of
             | links to stuff about infinite products, and college
             | courses, but no simple, direct answer to "which course is
             | the one that covers this topic?"
             | 
             | Now the question is, is that answer correct? Hmmm... :-)
        
               | cpleppert wrote:
               | When you use the prompt "Why do people say that drinking
               | Ardbeg is like getting punched in the face by William
               | Wallace?" you are prompting it to use the fact you
               | provided as part of its response. If you instead ask
               | directly it will say "I'm not aware of any specific
               | claims that drinking Ardbeg is like getting punched in
               | the face by William Wallace."
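                | 
                | (ChatGPT itself has no public API yet, but you can see
                | the same framing effect against the completions API. A
                | rough sketch -- prompts purely illustrative, and you
                | need your own API key:)
                | 
                |     import openai  # needs openai.api_key set
                | 
                |     def ask(prompt):
                |         r = openai.Completion.create(
                |             model="text-davinci-003",
                |             prompt=prompt,
                |             max_tokens=120,
                |             temperature=0,
                |         )
                |         return r.choices[0].text.strip()
                | 
                |     # Presupposing framing: it tends to run with
                |     # the "fact" you handed it.
                |     print(ask("Why do people say that drinking "
                |               "Ardbeg is like getting punched in "
                |               "the face by William Wallace?"))
                | 
                |     # Direct question: it is likelier to push back.
                |     print(ask("Do people say that drinking Ardbeg "
                |               "is like getting punched in the face "
                |               "by William Wallace?"))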
        
               | mindcrime wrote:
               | True. Ideally though, I think the response to the first
               | prompt should be either something like:
               | 
               | "There is no evidence that people actually say that..."
               | 
               | or
               | 
               | "If we assume that people say that (not established) this
               | is probably what they mean ..."
               | 
               | or something along those lines. Still, it's a minor nit,
               | and my point was not, as I said, to discredit ChatGPT. I
               | find it impressive and would even describe it as
               | "intelligent" to a point. But clearly there are limits to
               | its "intelligence" and ability to spit out fully correct
               | answers all the time.
        
         | selfhoster11 wrote:
         | No. ChatGPT is basically a bullshit engine that sometimes
         | generates words that match reality. Trusting it unconditionally
         | is a bad idea.
         | 
          | If I ever use *GPT as my primary search interface, it would be
          | prudent to double-check its answers against a real search
          | engine's results.
        
           | primax wrote:
           | Even so, it's better than Search in 2022.
        
           | hombre_fatal wrote:
           | Pointing out that you shouldn't trust it unconditionally is a
           | trivial claim that applies to everything including Google
           | search.
        
           | tidenly wrote:
           | This is such an overly cynical answer. I've used ChatGPT for
           | recipe suggestions many times now, based on ingredients I
           | have available, or what equipment I have - and to make
           | adjustments to the recipe and measurements. I can use natural
           | language, specify flavor profiles or regions and it will
           | suggest something great 99% of the time.
           | 
            | It's already a far preferable experience to using Google and
            | digging through links. Its value is already evident at this
            | early stage - and it's only going to mature.
            | 
            | I don't know if you're just being contrarian, but you cannot
            | have "unconditional trust" in _anything_ on the internet. If
            | you're unconditionally trusting Google search results you've
            | got a bigger problem than ChatGPT.
        
             | bamboozled wrote:
              | I agree with parent: it gets a lot of stuff wrong, but it's
              | good enough to make you think it's right.
        
             | carabiner wrote:
             | I've used it for same use case and it works remarkably
             | well. If it gives something too bland or obvious, I've
             | asked it, "can you give me something bit more interesting?"
             | and it adds a few more spices or cooking steps to add
             | depth, exactly what I had in mind. This is a better
             | response than most people. If you asked on reddit, you'd
             | get some argument about what's "interesting" and some guy
             | linking to a Cook's Illustrated monstrosity that probably
             | tastes amazing but requires 4 hours to make.
        
             | BoorishBears wrote:
             | It's alarming to read comments like the one you replied to
             | because it shows how little thought people put into search
             | results that are just as prone to bullshit.
             | 
             | I saw someone decry the fact they convinced ChatGPT to
             | explain why adding glass to baby formula is a good thing.
             | 
             | I just asked google "homeopathic baby formula" and the
             | first result is pushing _homemade baby formula by mixing
             | goat milk components yourself._
             | 
             | https://mtcapra.com/homemade-baby-formula-recipe-the-
             | closest...
             | 
             | Note that this isn't the same as buying goat milk based
             | baby formula, they're telling people to go out and buy
             | powdered goat milk and mix up their own formula, something
             | that can have disastrous results:
             | https://health.clevelandclinic.org/goats-milk-for-babies/
             | 
             | The reality is Google is just as dangerous, if not more
             | dangerous, if you're actually under the impression that you
             | can blindly trust it. ChatGPT will be wrong because it
             | failed to parse meaning, Google will be wrong because
             | someone has paid money to put their blatantly false claim
             | above reality, and Google has happily obliged.
        
             | [deleted]
        
           | Schroedingersat wrote:
           | Sounds identical to the results of a google search in the era
           | of blogspam.
        
         | bamboozled wrote:
          | Same, that's my biggest gripe with ChatGPT: I'm only hearing
          | from ChatGPT!
        
       ___________________________________________________________________
       (page generated 2022-12-23 23:00 UTC)