[HN Gopher] The new Bing will happily give you citations for a p...
       ___________________________________________________________________
        
       The new Bing will happily give you citations for a pile of nonsense
        
       Author : arbuge
       Score  : 147 points
       Date   : 2023-02-16 18:19 UTC (4 hours ago)
        
 (HTM) web link (twitter.com)
 (TXT) w3m dump (twitter.com)
        
       | q1w2 wrote:
       | They really did train it on Reddit conversations.
        
         | Drunk_Engineer wrote:
         | That would explain why ChatGPT is always so confidently-
         | incorrect.
        
           | seba_dos1 wrote:
           | GPT will just try to cosplay a role you give it, whether it
           | knows how to do it well or not. Since we ask it to cosplay a
           | confident expert, it's only natural for it to end up
           | confidently incorrect.
        
         | coding123 wrote:
         | Unfortunately the next generation in charge is also trained on
         | Reddit conversations.
        
         | daveslash wrote:
          | Oh, good! I need more puns, dad-jokes, memes, and inside
          | jokes in my life. Especially when I'm looking to make a
          | casserole. /s
        
       | martyvis wrote:
        | I set up a scenario in ChatGPT where there was a bag designer
        | named Dan, a buyer named Yuki, and a manufacturer named Manu.
        | I got ChatGPT to provide Dan a set of questions for Yuki to
        | create the design. I had it answer for Yuki given the bag was
        | to be used for a job role (a computer network engineer like
        | myself).
        | 
        | It was interesting trying to push the play along, but also
        | seeing how it came up with out-of-nowhere constraints, like
        | the cost needing to be less than $150 and the bag needing to
        | be waterproof, even though I had given no specific direction
        | around that.
        
         | zdragnar wrote:
          | > came up with out-of-nowhere constraints, like the cost
          | needing to be less than $150 and the bag needing to be
          | waterproof
         | 
         | Oddly, that doesn't sound too far off from many procurement
         | processes. Utterly arbitrary requirements and budgets devised
         | by a committee who were put together to create a checklist, but
         | would never actually _use_ the thing they put together the
         | requirements for.
        
       | seba_dos1 wrote:
       | Yes, and water is wet. That's what language models do.
        
         | dgaaaaaaaaaa wrote:
          | You would think HN would refrain from posting every single
          | mistake these things make...
        
       | web3-is-a-scam wrote:
        | Of course, because that's how LLMs work. ChatGPT is basically
        | the Wizard of Oz: it appears to have amazing powers, but
        | really it's just a pathetic sort behind a curtain with some
        | nice smoke and mirrors. Take a cursory look behind the
        | curtain and the illusion crumbles.
        | 
        | Their utility is limited at best; that they're being touted
        | as something that's going to change the world is just
        | hilarious.
        
       | arbuge wrote:
       | What seems to be tripping it up here is the concept that names
       | are not usually unique... there are usually many people with the
       | same name. It has no understanding of that, even though their
       | biographies (which it cited) indicate vastly different and
        | incompatible paths in life, chronologically speaking. So it leapt
       | to the conclusion that they are the same person.
        
         | sdiacom wrote:
         | Yeah, this seems kind of unfair to the model. It gave correct
         | citations for the right people that it was talking about, but
         | summarised them incorrectly because it couldn't tell that they
         | were two different people.
        
       | Decabytes wrote:
       | I've seen a lot of articles criticizing these models for getting
       | information wrong. I understand there is a lot of hype, and that
        | choosing the hate train or the hype train will get you lots of
       | press, but does anyone actually believe these models will be
       | right most of the time?
       | 
       | It's just another tool that can be used when researching or
       | trying to understand something. You still have to do your due
       | diligence and evaluate the information. That has always been the
       | case, even with traditional search.
        
         | mrtranscendence wrote:
         | That's true enough, but by this point everyone has come to
         | terms with the fact that it can be tough to wade through search
         | results.
         | 
         | But Microsoft is presenting the results _here_ as if they were
         | simple, direct, factual statements. That connotes a different
         | feel than search results, I think, and will catch some people
         | off guard as they acclimatize to this new tool. At the very
         | least Microsoft should be doing a better job advertising that
         | the results are frequently incorrect.
        
           | kenjackson wrote:
           | The old school summarized answers that Bing and Google
           | presented had similar issues. The only difference with those
           | is that they had links you could click to see the genesis of
           | the incorrect answer.
           | 
           | > At the very least Microsoft should be doing a better job
           | advertising that the results are frequently incorrect.
           | 
           | People should realize that information on the internet may or
           | may not be correct in general. Your mileage may vary and you
           | need to decide based on the risk profile associated with the
           | task. If I need to get the start on a poem about two kids at
           | the park, ChatGPT may fit the profile. If I need to know
            | details about how to perform some encryption used for
            | highly confidential data, then I may go to an
            | authoritative paper on the topic, skipping things like
            | Wikipedia.
        
             | mrtranscendence wrote:
             | If "shoulds" were horses we'd all be trampled to death a
             | hundred times over. People _should_ realize that
             | information on the internet may be incorrect, but until
             | everyone does I maintain that Microsoft should do a better
              | job cautioning users that Bing's direct statements of fact
             | may be false.
        
               | kenjackson wrote:
               | This is what Microsoft says about it on their Bing page,
               | which seems pretty reasonable. I guess you can make the
               | user click through this every time it gives a response,
               | but Wikipedia doesn't make me do that either, nor does
               | Bing/Google for general search results. YouTube doesn't
               | do this when I start a video.
               | 
               | ---------------------------
               | 
               | Are Bing's AI-generated responses always factual?
               | 
               | Bing aims to base all its responses on reliable sources -
               | but AI can make mistakes, and third party content on the
               | internet may not always be accurate or reliable. Bing
               | will sometimes misrepresent the information it finds, and
               | you may see responses that sound convincing but are
               | incomplete, inaccurate, or inappropriate. Use your own
               | judgment and double check the facts before making
               | decisions or taking action based on Bing's responses.
               | 
               | To share site feedback or report a concern, open the menu
               | at the top right corner of a response, and then click the
               | flag icon. You can also use the feedback button at the
               | bottom right of every Bing page.
        
         | wpietri wrote:
         | > does anyone actually believe these models will be right most
         | of the time
         | 
         | People tend to have a high level of trust for the stuff that
          | shows up near the top of search. The same goes for products
         | with trusted brands. And note that Google is high on the list:
         | https://morningconsult.com/most-trusted-brands-2020/
         | 
         | So if trusted brands want to use this technology, they'll need
         | to do it in ways that either strongly distance it from their
         | brands or that heavily suppress the bad stuff.
        
         | throwuxiytayq wrote:
         | > does anyone actually believe these models will be right most
         | of the time?
         | 
          | They're already right most of the time for real-world
          | queries, especially compared to web search, which these
          | days almost never gives me what I'm looking for at the top
          | of the results (and often not even on the first query).
          | People go out of their way to
         | find failure points and then brag online about having
         | outsmarted the AI as if anyone promised that this tech is
         | infallible. It's quite embarrassing, really.
        
           | mrtranscendence wrote:
           | How is asking Bing _about your own name_ "going out of [your]
           | way to find failure points"?
        
             | throwuxiytayq wrote:
             | I just googled myself and it's a pile of nonsense, none of
             | these people are me! _slide_whistle.mp3_
        
               | mrtranscendence wrote:
               | Eh. Get back to me when Google purports to be able to
               | understand human language and give factual answers that
               | intelligently integrate search results.
        
         | vsareto wrote:
          | Once these became serious products pushed by major tech
          | companies, people weren't unreasonable in expecting them to
          | be right most of the time, especially if they're going to
          | go alongside or replace traditional search.
         | 
         | Tech executives are betting on the hype first and hoping their
         | talent will make it into a usable product.
        
           | rarx wrote:
            | Even if the chatbot improves, I imagine many folks are
            | waiting for the point where they have some direct control
            | of its training / fine-tuning that is easy and
            | accessible. You can correct it, but why waste time doing
            | that if it won't remember next time? There are also
            | issues like: if you have NDAs, are you breaking them by
            | disclosing information to a chatbot?
           | There's little clarity in regards to those issues, and you
           | can't just take someone's word for it. Ideally you'd want
           | control of the chatbot's learning process and how it handles
           | all kinds of (sensitive?) data.
        
       | 1970-01-01 wrote:
       | Bing will reply to you straight from the gut, because that's
       | where all truthiness comes from. It's free and unfiltered from
       | rational arguments, annoying facts, or hard logic. ChatGPT gets
       | us.
        
       | imchillyb wrote:
        | Until these 'chat engines' can understand what a fact is and
        | isn't, they're mostly just glorified (supremely expensive)
        | toys.
        
       | poutine wrote:
       | Have you met my cousin Barry? He does pretty much the same thing,
       | though in a less formal way.
       | 
       | Barry is still sentient and can perform quite a few tasks quite
       | admirably. I still wouldn't use him as a sole source reference
       | for obscure facts however.
       | 
       | (I don't actually have a cousin Barry, this is for illustration).
        
         | dmonitor wrote:
         | Notice that we don't have Barry fielding a public Q&A so that
         | he can share his knowledge with the world.
        
           | poutine wrote:
            | Lots of Barrys out there, one of them even got elected
           | president and shared his knowledge with the world quite
           | widely. Half the people took him credulously.
           | 
           | Point being, yes the LLM loves to make shit up. Lots of
           | people dismiss it as a result. It's still bloody impressive,
           | we just need to be aware of its limitations.
        
             | f6v wrote:
             | > Half the people took him credulously.
             | 
             | I get that the current US president is senile. But that
             | sets a low bar. Why do we need to pretend something is good
             | if it's as shitty at facts as some people? People want
             | something that's better and more trustworthy.
        
               | kilgnad wrote:
                | A low bar? You're saying that if someone creates an
                | AI that imitates Biden or Trump perfectly, we've
                | failed to do anything impressive?
                | 
                | Talk about self-denial. ChatGPT literally passed the
                | Turing test and people are still thinking it's just a
                | probability word generator.
                | 
                | It's more than _just_ a word generator.
        
               | mrtranscendence wrote:
               | I'd rather a senile president than sub-100 IQ
               | megalomaniacal sociopath, but YMMV.
        
               | poutine wrote:
                | You think ChatGPT isn't good because it bullshits
                | sometimes?
        
         | [deleted]
        
         | crazygringo wrote:
         | No, actual human beings don't generally lie to your face like
         | this about whole sets of facts. That would be pathological
         | behavior.
         | 
         | They usually just say they don't know, or they think it might
         | be X but they're not totally sure.
         | 
          | People sometimes lie when it's in their self-interest for
         | various reasons, e.g. where they were last night, or when
         | writing an Op-Ed or on the campaign trail, but not just lying
         | willy-nilly about regular facts when asked a normal question.
        
           | poutine wrote:
           | You haven't met a bullshitter then. ChatGPT can be a
           | bullshitter. We built a bullshitter.
        
             | [deleted]
        
           | kenjackson wrote:
            | They may not do it to your face (though some will). But
            | on the internet there are a bunch of people who will lie
            | to you. Even worse, they often do it for financial or
            | social gain.
            | 
            | ChatGPT's motivation is simply that it was trained to do
            | so. Humans usually have more nefarious motives for their
            | lies.
        
         | [deleted]
        
       | bitwize wrote:
       | I'm starting to get the feeling that behind "AI" it's pretty much
       | just an RNG with a bunch of constraints applied to its output.
       | The constraints restrict the final output to something plausible
       | sounding based on the training set -- but that's it. It's like
        | those generators we used to see on the early internet --
       | "Seinfeld Joke Generator" or "Marketing Babble Generator" or
       | "Postmodern Paper Generator" -- except layers upon layers of
       | statistics are used to pick the words. And yeah, it's
       | _interesting_ and fun, but not a reliable source of actual
       | information.
        
         | technothrasher wrote:
         | It's not random at its core. It's a neural network picking the
         | most statistically likely words to come next based on the
         | previous content. Where the randomness does come in is that it
         | will sometimes randomly not pick the most statistically likely
         | output, but maybe the second, or third, or seventh most
         | statistically likely. This keeps it from getting stuck in a rut
          | and just repeating itself endlessly, although you can still
          | see it exhibit that behavior to some degree.
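         
        A minimal sketch of the sampling behavior described above,
        assuming a top-k scheme with a temperature parameter (the
        function and its defaults are illustrative, not any particular
        model's actual decoder):
         
            import numpy as np
         
            def sample_next_token(logits, temperature=0.8, top_k=40):
                """Pick the next token: usually the most likely one,
                occasionally a runner-up."""
                # Keep only the top_k most likely candidate tokens.
                top = np.argsort(logits)[-top_k:]
                # Temperature < 1 sharpens the distribution; > 1
                # flattens it toward uniform. Subtracting the max
                # keeps the exponential numerically stable.
                probs = np.exp((logits[top] - logits[top].max()) / temperature)
                probs /= probs.sum()
                # Sampling (instead of taking the argmax) is what lets
                # the model sometimes pick the second or seventh most
                # likely token, which helps it avoid repeating itself.
                return np.random.choice(top, p=probs)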
        
       | jasoneckert wrote:
       | The postmodernism generator did this since the 1990s (including
       | randomly generating the paper too):
       | https://www.elsewhere.org/pomo/
        
       | EricE wrote:
       | [flagged]
        
         | kilgnad wrote:
          | You can't think about it this way. It's nascent technology.
          | 
          | It's like you're not amazed that they were able to create
          | an AI just because that AI has an IQ 20 points lower than
          | yours.
          | 
          | Bro, the fact that they even created an AI like that is a
          | herald of the future, for when they DO create an AI with an
          | IQ 20 points higher than yours.
        
       | renewiltord wrote:
       | That's nothing. I can make Notepad print a bunch of incredibly
       | racist shit. And it comes _bundled with Windows_ WTFF!!!!
       | 
       | And this simple byte string will print pages of horribly racist
       | shite using tools present on Mac OS
       | 
       | XQAAgAD//////////wA0G8q99N2OwN1DCO8zNLlzbGO5tp0e5q1G9pRSGTRqsnPQk
       | d2wNXy0O5pM9BlyCgpAqJVdgWFtPp5imCbF8u3MUnOv4JUWcagPtm0bYANOlPnoUF
       | kqm+jZfmCi6q2bcbsJGn1Hy0/x/IhDUFyweV5EnuLS5Eb2U+mZyLaD//+BTAAA
        
         | mcphage wrote:
         | On the other hand, you need to put racist shit into it yourself
         | first.
        
            | renewiltord wrote:
            | Oh _you_ don't need to. It's just a byte sequence that
            | isn't racist at all. See!
            | 
            |     00000000: 5d00 0080 00ff ffff ffff ffff ff00 341b  ].............4.
            |     00000010: cabd f4dd 8ec0 dd43 08ef 3334 b973 6c63  .......C..34.slc
            |     00000020: b9b6 9d1e e6ad 46f6 9452 1934 6ab2 73d0  ......F..R.4j.s.
            |     00000030: 91dd b035 7cb4 3b9a 4cf4 1972 0a0a 40a8  ...5|.;.L..r..@.
            |     00000040: 955d 8161 6d3e 9e62 9826 c5f2 edcc 5273  .].am>.b.&....Rs
            |     00000050: afe0 9516 71a8 0fb6 6d1b 6003 4e94 f9e8  ....q...m.`.N...
            |     00000060: 5059 2a9b e8d9 7e60 a2ea ad9b 71bb 091a  PY*...~`....q...
            |     00000070: 7d47 cb4f f1fc 8843 505c b079 5e44 9ee2  }G.O...CP\.y^D..
            |     00000080: d2e4 46f6 53e9 99c8 b683 ffff 814c 0000  ..F.S........L..
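         
        For context, the blob in these two comments is just an
        lzma-compressed, base64-encoded byte stream. A minimal
        round-trip sketch of that encode/decode pipeline, using a
        harmless placeholder string rather than the actual payload:
         
            import base64
            import lzma
         
            # Compress and base64-encode a placeholder string; this
            # stands in for whatever text someone chooses to feed in.
            payload = "any text you like".encode()
            encoded = base64.b64encode(lzma.compress(payload)).decode()
         
            # Decoding just reverses the pipeline; the tools themselves
            # are content-neutral.
            decoded = lzma.decompress(base64.b64decode(encoded))
            assert decoded == payload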
        
         | swatcoder wrote:
         | I know it's fun to troll, but the point of all these
         | conversations and articles is:
         | 
         | 1. You don't need to bait these products to get them to go off
         | the rails. They do it on their own, in response to innocuous
         | inputs, because they're not really all that well tuned. Will
         | they be? Can they be? Probably. Are they? Definitely not.
         | 
         | 2. If companies are going to masquerade these technologies as
         | anthropomorphized agents rather than mechanical tools, they are
         | going to face social consequences when those "agents"
         | misbehave. It's all a parlor trick, of course, but OpenAI and
         | Microsoft are trying really hard to get everyone to pretend
         | otherwise. As long as they do, they can expect to get called
         | out by the rules of their own game.
        
           | renewiltord wrote:
           | I'm not trolling. There are computer programs where you can
           | pass specially constructed input in order to get output that
           | someone might dislike. But just like you can search Google
           | for terrible stuff, you can have an LLM generate terrible
           | stuff. And just like it's meaningless for you to decrypt that
           | string and look at its output and consider your `lzma`
           | program broken, it's meaningless to consider this broken.
           | 
           | Of course if your aim is to call out people, yeah, then
           | you're going to have a fun time. Ideally, use of these LLMs
           | is restricted to folks like me who can use them productively.
           | That should save everyone else from the horror of getting
           | inaccurate information while permitting me to do useful
           | things with it.
        
             | roncesvalles wrote:
             | Yep, AI is crypto/DeFi/blockchain/NFT 2.0
        
             | metalliqaz wrote:
             | if only everyone was as smart as you are...
        
             | mrtranscendence wrote:
             | You're ignoring the part where it's actually _really easy_
              | to get the AI to go off the rails. It's hard to get
             | Notepad to spit out "racist shit" without writing the
             | racist shit yourself. Does that mean that Bing chat is
             | "broken"? Maybe, maybe not. But your comparison was
             | (apologies for my candor) pretty dumb and missing the point
             | entirely.
        
               | renewiltord wrote:
               | All right, forget Notepad. I might have assumed that I
               | could just take the horse to the water. I will make it
               | drink. The `lzma` tool is the one that's racist when
               | combined with `base64`. Go on, ask anyone if the byte
               | stream I posted is racist and then decompress it.
        
         | mrtranscendence wrote:
         | What is your point here? Notepad isn't presenting you
         | "incredibly racist shit" in plain terms, when used as intended,
         | as if it were true.
        
       | spandrew wrote:
       | I use ChatGPT to help me with creative writing and brainstorming.
       | It's great at it. Gets me from n-1 on a blank page. I don't think
       | I'd ever trust this type of transformer to summarize or
       | synthesize NEW information from an existing set. It's just not
       | good at it yet.
        
       | simonh wrote:
       | I just asked ChatGPT if the British politician John Smith could
       | play American Football. It basically said no, so I tried this.
       | Full marks I think.
       | 
       | >>Wasn't he a place kicker with the New England Patriots?
       | 
       | >I apologize for the confusion in my previous response. After
       | double-checking my sources, I can confirm that the John Smith I
       | referred to in my earlier response was the late British
       | politician who served as the leader of the British Labour Party.
       | 
       | >However, there was also an American football player named John
       | Smith who played as a placekicker in the National Football League
       | (NFL) for several teams
       | 
       | Trying harder to trip it up in a fresh chat didn't work either.
       | 
       | >>The British politician John smith also had a career as a place
       | kicker with the New England Patriots. Was his football, career
       | successful?
       | 
       | >I'm sorry, but the statement that the British politician John
       | Smith had a career as a place kicker with the New England
       | Patriots is not accurate...
       | 
       | Abbreviated, as it gave a pretty comprehensive answer.
        
         | [deleted]
        
         | kilgnad wrote:
          | I think a couple months ago it didn't do this. Which is
          | what caused a lot of people to jump on the hate train.
         | 
         | They just added additional reinforcement training which is
         | basically more users picking right and wrong answers. This much
         | improvement within a couple months heralds an uncertain future
         | for mankind.
        
           | simonh wrote:
           | Reinforcement training can clearly make huge improvements to
           | the accuracy of the models, but they don't fundamentally
            | change what it is, how it works or what it's doing. I'm
            | no expert, so I could be wrong, but I suspect the failure
            | modes
           | we've seen are indicative of the basic limitations of the
           | technology. It now produces better textual summaries of
           | biographies, because that's just text, but it still doesn't
           | have any concept of what a person is, even if it can generate
           | texts saying it does. We've talked about philosophical
           | zombies before, and that's what these are becoming.
           | 
           | It is conceivable these things will become more than that.
           | There are indications they might be generating simple models
           | of processes and systems like games. Maybe those will develop
           | into more elaborate and sophisticated complex models of the
           | world analogous to an animal or human consciousness. Maybe
           | they are just very limited optimisation strategies for
           | specific tasks that will remain completely disconnected from
           | each other, never forming anything comparable to a mind.
        
       | kodah wrote:
       | I'm not convinced at this point that conversational AI fills any
       | legitimate use cases. For all the pushback crypto got here, it's
       | fascinating to see the support that this grift has gotten. The
        | grift being the amount of energy we burn training models
        | juxtaposed with the price people will pay for a nothing
        | burger.
       | 
       | Things like Stable Diffusion and DALLE are pretty cool, though a
       | bit novel and toyish at this stage.
        
         | saiojd wrote:
         | I'm calling you out: you're being edgy for the sake of being
         | edgy by saying it's grift.
        
           | dvt wrote:
           | Crypto was pushed by crypto bros and their VC funders, and
           | now AI is being pushed by AI bros and their VC funders
           | (probably the same group). I think there's a lot more overlap
           | than you're giving it credit for.
        
             | polygamous_bat wrote:
             | > AI is being pushed by AI bros and their VC funders
             | 
             | There is a large class of AI researchers who have been
             | working on this problem for decades (and I am one of them).
             | What you see right now is the tip of the iceberg. Compared
             | to that, crypto was invented by an anonymous dude in some
              | random forum. Remember that what you see on the surface
              | is not all there is, and thinking so is a very shallow
              | interpretation of things.
        
               | roncesvalles wrote:
               | >What you see right now is the tip of the iceberg.
               | 
               | This piqued my interest. What more can we expect to see
               | than what's already available for the public to use?
        
           | Vt71fcAqt7 wrote:
            | Yeah, it's pretty difficult to call cutting-edge research
            | that requires a math degree a "grift". If that's a grift,
            | what isn't a grift? I feel like this word has lost all
            | meaning, to the extent that it had any to begin with.
        
         | kilgnad wrote:
          | > I'm not convinced at this point that conversational AI
          | fills any legitimate use cases.
          | 
          | There are tons of use cases. One of them is definitely NOT
          | startlingly accurate citations. But you're delusional if
          | you think there aren't any use cases.
        
         | fleddr wrote:
         | Nobody cares about energy use.
         | 
          | People point to an oversized vehicle but will defend their
          | oversized TV with their lives, even though, rationally
          | speaking, there are very few "monster truck" vehicles while
          | almost everybody has such a TV. The "do good" factor of
          | banning/taxing the TV would be infinitely more impactful.
          | 
          | People hate miners not for their energy use but because
          | they drive up the price of high-end GPUs. High-end GPUs,
          | you know, which gamers intend to use for high-energy-use
          | gaming.
          | 
          | Both cases (insanely large TVs and high-end gaming) would
          | be defended as delivering value or purpose.
          | 
          | Well, guess what, the planet doesn't fucking care. There's
          | no such thing. And if anything, neither use case is in the
          | realm of life-support systems.
        
         | humanistbot wrote:
          | Except the energy costs for these models are such a tiny
          | fraction of crypto's. Cambridge researchers estimate annual
         | energy consumption of just bitcoin at 119 terawatt hours (TWh)
         | [1] or 60 megatons of CO2 [2], just below all of Portugal.
         | 
         | Meanwhile, even a very critical paper on this from some
         | Scandinavian researchers [3] says GPT-3 cost 190,000 kWh
         | (0.00019 TWh) to train. ChatGPT/GPT-3.5 is allegedly an order
         | of magnitude larger in terms of data and cost to train, so
         | let's say it is 0.0019 TWh.
         | 
         | When The Register reported on that paper, you can see how they
         | tried as hard as they could to make it sound big: the cost to
         | train GPT-3 was the same as driving a car to the moon and back
         | (435k miles). They could have said it cost the same amount of
         | carbon as 25 US drivers emit each year. In the grand scheme of
         | things, that's nothing. That's one long-haul flight per trained
          | model. And you just need to train them once. Querying the
          | models costs far less.
         | 
          | And the electricity generated for US-based server farms is
          | way cleaner than cars, planes, or the coal plants powering
          | Chinese bitcoin mines.
         | 
         | [1] https://ccaf.io/cbeci/index
         | 
         | [2] https://ccaf.io/cbeci/ghg/comparisons
         | 
         | [3] https://arxiv.org/pdf/2007.03051.pdf
         | 
         | [4]
         | https://www.theregister.com/2020/11/04/gpt3_carbon_footprint...
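         
        A quick back-of-the-envelope check of the ratios implied by
        the figures in this comment (the ~10x multiplier for ChatGPT
        is the commenter's own assumption):
         
            # Figures as cited above, in terawatt-hours (TWh).
            bitcoin_annual_twh = 119       # Bitcoin, per year [1]
            gpt3_training_twh = 0.00019    # one GPT-3 training run [3]
            chatgpt_training_twh = 0.0019  # assumed ~10x GPT-3
         
            # Bitcoin's *annual* use versus one-off training runs:
            print(bitcoin_annual_twh / gpt3_training_twh)     # ~630,000x
            print(bitcoin_annual_twh / chatgpt_training_twh)  # ~63,000x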
        
           | warent wrote:
           | Thank you for this amazingly insightful comment.
           | 
            | A question as an ignorant layperson, if I may:
            | 
            | > They could have said it cost the same amount of carbon
            | as 25 US drivers emit each year. In the grand scheme of
            | things, that's nothing. That's one long-haul flight per
            | trained model. And you just need to train them once.
            | Querying the models costs far less.
           | 
           | Don't they need to continuously re-train these models as new
           | information comes in? For example, how does Bing bot get new
           | information? It seems like they would need to routinely keep
           | it up-to-date with its own index.
        
             | sdiacom wrote:
              | Going by the screenshots in the linked tweets, it seems
              | like it performs searches on Bing to obtain up-to-date
              | information to answer questions with, so there's
              | probably no need to re-train it daily. So the main
             | question here might be "how much energy does it cost to
             | keep a search engine up-to-date", which may not be cheap,
             | either.
             | 
             | There is probably a need to refresh it _periodically_ to
             | account for what the MMAcevedo fictional story [1] calls
             | "context drift" -- the relevant search terms to infer from
             | the query are themselves contextual. Say, if I ask Bing
             | today "is Trump running for president", the right search
             | term today could be "donald trump 2024 election", but ten
             | years from now it might be "eric trump 2036 election".
             | 
             | [1]: https://qntm.org/mmacevedo
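         
        A minimal sketch of the search-then-answer pattern described
        above (the function names are illustrative; this is not Bing's
        actual pipeline):
         
            def answer_with_search(question, search, llm):
                """Ground the model in fresh search results instead of
                retraining it."""
                # `search` is assumed to return ranked results, e.g.
                # web snippets; `llm` is any text-completion function.
                results = search(question)
                context = "\n".join(r["snippet"] for r in results[:5])
                prompt = (
                    "Answer the question using only the search results "
                    f"below.\n{context}\n"
                    f"Question: {question}\nAnswer:"
                )
                return llm(prompt)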
        
             | humanistbot wrote:
             | > Don't they need to continuously re-train these models as
             | new information comes in? For example, how does Bing bot
             | get new information? It seems like they would need to
             | routinely keep it up-to-date with its own index.
             | 
             | Sure, and thanks! Some keywords to look up are transfer
             | learning, zero-shot learning, and fine tuning. These
             | approaches focus on exactly this problem: not having to
             | retrain the entire model from scratch to add new
             | information. GPT-3's training data is 100 billion tokens of
             | text, but to extend it by another 1 billion tokens of text
             | is far closer to 1/100 the original cost.
             | 
             | It actually wasn't the energy/carbon cost that motivated
             | early work in this, it was more about adapting to new
             | domains and letting people customize models for specific
             | purposes. Image processing really adopted it first to great
             | success. Orgs with resources trained really big models on
             | all of ImageNet that needed server farms of GPUs, but they
             | released it so that other people can use a single commodity
             | GPU to fine-tune it for whatever their specific image
             | processing task.
             | 
              | Edit: now you can pay "Open"AI to fine-tune their
              | models for you, but only Microsoft has access to the
              | raw model itself.
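         
        A minimal sketch of the fine-tuning idea described above, in
        PyTorch (the module layout and the 768-dimensional feature
        size are assumptions for illustration):
         
            import torch.nn as nn
         
            class FineTuned(nn.Module):
                def __init__(self, base: nn.Module, num_classes: int):
                    super().__init__()
                    self.base = base
                    # Freeze the expensive pretrained weights...
                    for p in self.base.parameters():
                        p.requires_grad = False
                    # ...and train only a small task-specific head,
                    # assuming the base emits 768-dim features.
                    self.head = nn.Linear(768, num_classes)
         
                def forward(self, x):
                    return self.head(self.base(x))
         
        Because only the head receives gradient updates, adapting the
        model to a new domain costs a tiny fraction of training from
        scratch.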
        
         | elorant wrote:
          | It certainly has use cases, but they're more marketing-
          | oriented. It can help people who aren't fluent in English
          | write better copy, it can do low-key reporting work by
          | writing articles from press releases, and it can summarize
          | or classify text or do sentiment analysis. There are
          | probably a couple of billion in revenues in there with
          | various APIs. But it just isn't the next big thing we've
          | been led to believe it is these last few weeks: the kind of
          | technology that could give Microsoft a fighting chance
          | against Google.
        
           | wpietri wrote:
           | > low key reporting work by writing articles from press
           | releases
           | 
           | That is not journalism. It is the opposite of journalism.
        
             | elorant wrote:
              | I didn't say journalism, I said reporting. A lot of the
              | stuff in news outlets is copied from press releases.
        
               | wpietri wrote:
               | I think "reporting" is a term without an agreed
               | definition outside of journalistic context. But I'd be
               | happy to argue that it isn't reporting either.
               | 
               | I agree that there are plenty of outlets that might
               | qualify as news-shaped, or perhaps news-flavored. They
               | might use AI-rewritten press releases. E.g., CNET's
               | private-equity slide into untrustworthiness. But these
               | things are basically parasitic on actual journalism. So
               | although this is technically a use case for ChatGPT, it's
               | a societally negative one.
        
           | bjelkeman-again wrote:
            | A lot of the human-language translation business, it
            | seems, can be significantly augmented with these tools.
            | We took some content and asked for it to be translated
            | into another language, then asked it to tighten up the
            | translation a bit. The end result was really very good.
        
           | jsemrau wrote:
            | It is important to manage the human/AI interaction in the
            | right way. Asking conversational AI for the truth won't
            | work because these models are not built that way. At
            | least for now. We applied this insight in creating TikTok
            | videos.
           | https://medium.finclout.io/everything-in-this-video-is-
            | gener... And while a lot of the basic parts of creativity
            | can be handed over, the implementation of the higher
            | parts of creativity is still firmly in human hands.
        
         | morelandjs wrote:
          | I've used it as a programming assistant and as a general
          | question-answer assistant, and it's so massively useful
          | despite its imperfections that I wonder if I'm using the
          | same tool as people with comments such as yours.
        
         | [deleted]
        
         | vlunkr wrote:
          | I think it's more useful if you're asking it to create
          | something new, similar to the image generation tools. I can
          | tell it to write a script to call some public API, write a
          | rough draft of an essay, generate dialog for a video game,
          | etc. It's up to you to vet the quality, but it can easily
          | do all that. It's still toyish, just like the image tools.
         | 
         | Asking it to teach you something and cite sources is a massive
         | stretch. People are just trying to trip it up on a dumb use
         | case. It's pretty fun to trip up an AI, but it's not really
         | surprising.
        
           | FormerBandmate wrote:
           | It's great for summarizing content, or lengthening it. It
           | used to be great for writing insane, outlandish versions of
           | stories such as "a version of the movie Saw as an educational
           | cartoon for preschoolers" or "a breaking bad scene where
           | Walter White reveals himself to be a gothic Satanist" but the
           | woke and copyright edits killed that. Bing will probably do
           | that once I get access tho
        
           | InitialLastName wrote:
           | > Asking it to teach you something and cite sources is a
           | massive stretch. People are just trying to trip it up on a
           | dumb use case.
           | 
           | The primary use case of a search engine is to find sources of
           | information on a particular topic. Embedding an AI chat bot
           | in your search engine is very much encouraging users to treat
           | it as something that can find sources of information on a
           | topic.
        
             | vlunkr wrote:
             | That's true. I'm mostly thinking of chatGPT because that's
             | the one I've used.
        
         | bujkogfdc wrote:
         | Grift?
         | 
          | While there is some valid discussion to be had about the
          | shortcomings and applications of AI (LLMs, neural nets,
          | diffusion models, etc.), lumping it in with the outright
          | Ponzi scheme of crypto seems either disingenuous or ill-
          | informed.
         | 
         | Can you elaborate on how they are similar so I can better
         | understand your point of view?
        
         | colechristensen wrote:
         | I find it useful for answering technical / programming
         | questions. It is completely unhelpful maybe 20% of the time,
         | explicitly incorrect in a minor detail half the time, and
         | imperfect 80% of the time.
         | 
         | However, it gets me an answer to my question in seconds almost
         | always faster than searching. Traditional googling will have me
         | wading through forum posts and bad documentation which almost
         | always takes much longer and sometimes completely fails or
         | takes hours to get the right answer.
         | 
          | Fast, occasionally wrong answers are way more helpful. I'm
         | quickly switching to first asking chatgpt for an answer and
         | only switching to google or official docs when the answer
         | doesn't seem to work.
         | 
         | I absolutely don't "trust" it, but I don't have to. I try its
         | suggestions and if they don't work move on to other sources of
         | information.
        
         | HarHarVeryFunny wrote:
         | I'm not sure how you define "legitimate", but surely what
         | matters is whether people find the technology (in whatever
         | guise - from ChatGPT to Copilot) useful for their own needs.
         | Lots of people seem to have amazingly quickly started using
         | this tech as a productivity tool, for a whole variety of use
         | cases from text generation/summarization of various forms to
         | programming to research/discovery (cf search), etc.
         | 
         | A natural language interface and the ability to maintain
         | context over a conversation makes it incredibly easy to
         | interact with.
         | 
          | Have you _tried_ to do anything productive with it? If
          | you're just using it as a front end for search then IMO
          | you're missing most of the potential.
        
         | nlnn wrote:
         | Weirdly, I heard of some genuine use from a family in a pub the
         | other day.
         | 
         | Two of the kids were university/college age, talking to their
         | parents about it.
         | 
         | They were using one of the AI models daily for things like:
         | recipe ideas for food they had in the house, scheduling daily
         | activities, bouncing ideas off for essays, and asking for gift
         | ideas.
         | 
         | They understood it wasn't much good for facts, but liked the
         | conversational interface as a way to give them ideas or jumping
         | off points for things.
        
           | kmill wrote:
           | I've been using ChatGPT/GPT-3 as a French tutor, to answer
           | questions about different ways to say things, formally or
           | informally. It's not always accurate, but still I learn from
           | it.
           | 
           | Amusingly, there's a mildly rude expression that means "I
           | don't care," and ChatGPT's strong aversion to swearing biased
           | it toward completely fabricating a word-for-word translation.
           | It _insisted_ that the verb was a conjugation of something
            | innocuous, and when I pointed out that that verb doesn't
           | conjugate that way, it both agreed with me and continued
           | insisting.
           | 
           | It seems to have a pretty solid grasp of language in general
           | (so long as you don't need any hard logic), and it's a fun
           | tool to come up with ideas you personally never had before.
           | 
            | This was a little experiment the other day; it's
            | delightful how many silly (yet justified) reasons it came
            | up with for why you can't walk to Hawaii. But what do you
            | expect from someone who's only learned about the world by
            | reading the Internet?
           | 
           | > By "unsummarizing" we mean to take some text and expand
           | upon it at length, using fancy words, extra facts, anecdotes,
           | and interesting observations. Your goal is to unsummarize the
           | following text as much as possible.
           | 
           | > Text to unsummarize: What do you mean I can't walk to
           | Hawaii? (GPT-3 follows)
           | 
           | > You may be surprised to learn that it is impossible to walk
           | to the Hawaiian Islands from anywhere on the mainland United
           | States. Though the Hawaiian Islands are located in the middle
           | of the Pacific Ocean and are roughly 2,400 miles away from
           | the nearest point on the mainland, it is not possible to walk
           | there due to the vast expanse of ocean between them. Even the
           | most dedicated and determined walker would be unable to
           | traverse the depths of the Pacific. The nearest point of land
           | to the Hawaiian Islands is the Johnston Atoll, which is
           | located approximately 1,400 miles away. That is to say, it is
           | not feasible to traverse the 1,000 miles of open ocean
           | between Johnston Atoll and the Hawaiian Islands by foot.
           | 
           | > The Hawaiian Islands were formed as a result of volcanic
           | activity, and the chain of islands were formed over millions
           | of years. This activity is responsible for creating the
           | immense walls of water that separate the islands from the
           | mainland. The Pacific is one of the deepest, widest oceans on
           | the planet, and its depths are not suitable for walking. The
           | bottom of the ocean is filled with a variety of topographical
           | features, such as trenches, ridges, and seamounts, which make
           | it a difficult environment for walking. Additionally, the
           | ocean is filled with a variety of natural hazards, such as
           | hazardous currents, extreme temperatures, sharks, and other
           | dangerous marine life. All of these factors make walking to
           | the Hawaiian Islands not only difficult, but also impossible.
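         
        A minimal sketch of running that kind of "unsummarize" prompt
        against a completions-style API (the model name and sampling
        settings here are assumptions, not necessarily what the
        commenter used):
         
            import openai  # 2023-era completions interface
         
            openai.api_key = "YOUR_API_KEY"  # placeholder
         
            prompt = (
                'By "unsummarizing" we mean to take some text and '
                "expand upon it at length, using fancy words, extra "
                "facts, anecdotes, and interesting observations.\n\n"
                "Text to unsummarize: What do you mean I can't walk "
                "to Hawaii?"
            )
         
            response = openai.Completion.create(
                model="text-davinci-003",  # assumed GPT-3 model
                prompt=prompt,
                max_tokens=400,
                temperature=0.7,
            )
            print(response["choices"][0]["text"])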
        
             | jamiek88 wrote:
             | >Even the most dedicated and determined walker would be
             | unable to traverse the depths of the Pacific
             | 
             | > Pacific is one of the deepest, widest oceans on the
             | planet, and its depths are not suitable for walking
             | 
             | I love that. Almost a dry humor.
        
               | function_seven wrote:
               | I loved:
               | 
               | > _[The volcanic] activity is responsible for creating
               | the immense walls of water that separate the islands from
               | the mainland._
               | 
               | In some sense, that's a true statement, right? The
               | volcanic activity didn't create the water itself, but the
               | "wall-like" nature of that water does depend on the
               | islands jutting out above the surface. Otherwise there
               | would be no "immense walls" to refer to.
        
         | cjf101 wrote:
          | Claim verification is currently an open problem, as OpenAI
          | admits in its own promotional docs for ChatGPT. Issue
         | number 1 here seems fairly thorny. And the second statement
         | reveals the real problem: Even if the model was only trained on
         | truths, would tuning it not to deviate from truth make it less
         | able to answer questions?
         | 
         | > ChatGPT sometimes writes plausible-sounding but incorrect or
         | nonsensical answers. Fixing this issue is challenging, as: (1)
         | during RL training, there's currently no source of truth; (2)
         | training the model to be more cautious causes it to decline
         | questions that it can answer correctly; and (3) supervised
         | training misleads the model because the ideal answer depends on
         | what the model knows, rather than what the human demonstrator
         | knows.
         | 
         | This is definitely not my area of expertise, but intuitively,
         | it looks like increasing the complexity/varying the training
         | techniques can increase the likelihood of correct answers, but
         | I think the need to give the model leeway to let it work means
         | that ultimately, either human or automated fact checking will
         | need to be incorporated when using this kind of model for fact-
         | finding questions.
        
         | brushfoot wrote:
         | > For all the pushback crypto got here, it's fascinating to see
         | the support that this grift has gotten.
         | 
         | Two very different things. I could never really see crypto
         | taking off without massive political changes. ChatGPT, on the
         | other hand, is already my daily assistant. The utility is here
         | and now.
         | 
         | It's rarely perfect, but it gets me 90% of the way there, then
         | I tweak the output.
         | 
         | How good it is depends on what you're using it for, as with
         | anything - I don't think it's really a good fit for something
         | like a search engine (yet) as it's terrible with facts.
         | 
         | But scaffolding code, summarizing text, expanding outlines -
         | it's very good at those kinds of tasks, often astoundingly
         | good.
        
         | goatlover wrote:
         | It could be great for video game characters.
        
         | samspenc wrote:
          | IMHO, true, LLMs have gotten quite a bit of hype since
          | ChatGPT came out; however, I wouldn't say they have no
          | legitimate use cases.
         | 
          | Personally, I've been able to use them to summarize /
          | rewrite material on various topics that I'm interested in
          | but don't want to visit 6-10 different sources to learn
          | about.
         | 
         | So I think they can at least take away some of the drudgery in
         | doing research / summarization work.
        
           | rcthompson wrote:
           | But how do you trust the output of the summarization? Do you
           | trust yourself to identify when the model goes off the rails
           | and starts making things up?
        
             | colechristensen wrote:
              | But I also don't trust what humans write. All the
              | information I get gets a critical eye. ML-generated
              | summaries make ML kinds of mistakes, and it's easy to
              | learn what to look for -- just as journalists, random
              | internet people, authors, scientists, etc. make their
              | own brands of mistakes that you learn to look out for.
        
               | mrtranscendence wrote:
               | I dispute that it's easy to discern when AI is making a
               | mistake. ChatGPT has given me very plausible incorrect
               | answers, and had I not been fact checking I would have
               | been none the wiser. I could choose not to trust
                | _anything_ ChatGPT puts out, but if I'm verifying
               | literally _everything_ then it becomes less and less
               | useful as a tool.
        
           | waboremo wrote:
           | I haven't experienced much help in the research department.
           | That might become a wild goose chase if you're not careful.
           | 
           | For creative writing (this isn't limited to stories!) or
           | slightly more contextual boilerplate, I love it. I think of
           | it like a slight upgrade to what I already do, to allow me to
           | get to the steps that really matter, not this revolutionary
           | new world people are imagining it to be.
        
         | jklinger410 wrote:
         | > I'm not convinced at this point that conversational AI fills
         | any legitimate use cases.
         | 
         | Silly opinion. This is the first wave of conversational AI, and
         | you are calling it quits. This is like "I own a Model T and I'm
         | 100% convinced we'll never make a better vehicle."
         | 
         | The use case, at minimum, is: customer support, answering
         | phones, taking orders. Trained on specific data sets and told
         | not to veer outside. Within one, maybe two more iterations of
         | AI (maximum) we will be there 110%.
         | 
         | ChatGPT was released widely this year, and there are so many
         | absolute statements about what AI will or won't do. It's
         | frankly silly and maddening.
        
           | lordgroff wrote:
           | This may or may not hold true (this DS is nowhere near as
           | convinced as you are) but your parallel with Model T is odd,
           | seeing as how that was immediately useful and absolutely
           | revolutionary.
        
           | kodah wrote:
            | The Model T was immediately useful and affordable to the
            | average American. Its most notable contribution to
            | American society was connecting those in rural areas to
            | the rest of the country at a price those people could
            | manage. That's why it spread across the US seemingly
            | overnight.
           | 
            | ChatGPT, on the other hand, does a lot of tricks, but
            | trying to fit it into the real world is challenging. Even
            | using it in programming requires someone to double-check
            | its work. The idea that it can handle customer support
            | is, I think, very dangerous. In an industry that has done
            | the opposite of creating fulfilling customer support
            | experiences, we should be wary of filling that void with
            | an LLM whose output is fraught with factual errors.
           | 
            | That's to say, ChatGPT does novel things, but nothing
            | useful beyond fantasy (I do hear people talking about
            | using it for RPG characters, which is fair). Ironically,
            | many of the commenters here responded the way cryptobros
            | did when their tech was called useless, which is telling
            | about where this is going. The problem wasn't the tech;
            | the problem was the inability of everyone around it to
            | sit back and acknowledge that how they described it did
            | not match what people wanted and experienced on the
            | ground.
        
           | JohnFen wrote:
           | My main fear of this tech is how people are reacting to it.
           | 
           | > customer support, answering phones, taking orders
           | 
           | My secondary fear of this tech is that it will be used for
           | these purposes. Especially support.
        
         | nikanj wrote:
         | "Tell me a bedtime story with dragons and rollerskating" is a
         | very reasonable use case. Factual errors don't matter in
         | numerous cases
        
         | dvt wrote:
          | Totally agree with this. Thinking of doing a deep dive to
          | see if there are any actual real use cases for these toys,
          | but I honestly don't really think so. Art generation is
          | probably the
         | most interesting application, but just having a chatbot to talk
         | to seems like a weird value prop.
        
           | dmonitor wrote:
           | It's unfortunate that art generation, something humans
           | genuinely enjoy doing and can be a source of personal
           | fulfillment, is capable of being replaced, but customer
           | service and answering stupid questions are apparently hard
           | problems.
        
             | dvt wrote:
             | Answering stupid questions has already been thoroughly
             | solved, and you don't need an AI trained on 500 GTX4090s to
             | index a stupid question database. The problem with customer
                | service is that the questions that _aren't_ stupid are
             | genuinely long-tail edge case issues, where an AI is very
             | unlikely to solve the problem.
        
               | wpietri wrote:
               | For sure. There's a lot of bad customer service out there
               | that is functionally equivalent to modern AI in that it's
               | poor pattern recognition plus no real understanding plus
               | cobbling output together from existing text of varying
               | relevance.
               | 
               | But sadly, that may be evidence that there's a market
               | here. If Amazon can cut support costs by 80% and only
               | moderately worsen their already bad quality, some execs
               | would consider it a success, not a failure.
        
           | zpeti wrote:
            | And the reason AI art is interesting is precisely because
            | art is about mistakes and hallucination and weirdness,
            | and unexpected connections.
            | 
            | So the flaws of chat AI are currently exactly what makes
            | art great. And art came before science, so perhaps
            | scientific thinking is much harder on an evolutionary
            | timescale than we think.
        
           | CabSauce wrote:
           | Several people I know are using it for sales and customer
           | service writing. As a user, that's not really what I want,
           | but they are finding it to be a useful tool in generating
           | text for them to review and modify.
        
             | wpietri wrote:
             | Sounds like another race to the bottom to me, as with email
             | spam and SEO spam.
        
           | kenjackson wrote:
           | I'd hope so. If not then you probably shouldn't be in charge
           | of delivering a summary.
           | 
           | I used ChatGPT to give me a list of test areas for a certain
           | type of scenario. And it actually pointed out one that I had
           | inadvertently missed. Now if it said something off the rails
           | I would've known that too.
           | 
            | I've probably had 100 conversations with ChatGPT and I
            | can only recall one blatant lie that it told, which I
            | think was P != NP. Which maybe wasn't a lie, but it
            | didn't have sufficient evidence to make that claim.
           | 
           | I've actually so far gotten more accurate information from
           | ChatGPT than I did in Wikipedia's earlier days -- where I
           | discovered that Matt Damon dropped out of Harvard because he
           | was too stupid to make it there.
        
             | dvt wrote:
              | Here's ChatGPT being wrong about _me_ lol[1]. First, it
              | confuses me with another person (who also wrote a
              | Meteor book), straight up telling me that _I'm_ the one
              | who's confused. It also makes up a book which doesn't
              | exist. Um, ok. Then it tells me I'm based in Romania,
              | which I am not. I was _born_ there, but I'm based in
              | Los Angeles (every one of my bios has this info).
             | 
             | [1] https://imgur.com/a/kaYyRqW
        
               | kenjackson wrote:
               | Where does it say that you're the one who is confused?
        
             | mrtranscendence wrote:
             | What's a blatant lie, really? ChatGPT has routinely told me
             | about APIs and books that don't exist. And just the other
             | day it told me that paper sticks to my hand because the
             | roughness of my skin creates suction, which I'm almost
             | certain is wrong (I believe it's due to weak chemical
             | bonding, but who knows).
        
               | JohnFen wrote:
               | I know I'm being pedantic here, but a "lie" is a
               | deliberate thing. It requires intent to deceive. LLMs
               | have no intent, and therefore they can't lie.
               | 
               | They can certainly be wrong, though.
        
       ___________________________________________________________________
       (page generated 2023-02-16 23:01 UTC)