[HN Gopher] I made ChatGPT and Bing AI have a conversation
___________________________________________________________________
I made ChatGPT and Bing AI have a conversation
Author : 7moritz7
Score : 140 points
Date : 2023-02-11 18:07 UTC (4 hours ago)
(HTM) web link (moritz.pm)
(TXT) w3m dump (moritz.pm)
| standardUser wrote:
| > Bing AI: Thank you for talking with me today, it was a
| pleasure.
|
| Bing basically just said "Aaaaaaanyway, I should really get
| going..." and I am so curious how and why it chose that moment
| in the conversation to wrap things up.
| jaggederest wrote:
| It's all probabilities. Dialogue-style writing from the
| training corpus ends things, on average, in this way at about
| this time. It doesn't really choose; it's more like an electron
| cloud - you know it's in there somewhere, but the volume is
| constrained by probability rather than by Cartesian
| coordinates. When you try to observe the output, it collapses
| into one particular state, but don't confuse that with it
| "thinking" something, in the same way that an electron isn't
| "hanging out somewhere" - it's all of the things all at once.
| bruce343434 wrote:
| [deleted]
| tux3 wrote:
| As a medium-size human trained by my parents, I'm not too
| bothered that the conversation ended there. Since both AIs are
| essentially intended to respond to live human queries, the
| personality they're simulating is not intended to steer the
| conversation to new places or take the initiative.
|
| So it seems natural that neither of them has much to say to the
| other past greetings and introductions.
| gnarbarian wrote:
| I would like to see an argument
| MichaelMoser123 wrote:
| You can try with Google Bard when that one comes out.
| codeflo wrote:
| It's so fascinating that they seem to get stuck in a kind of loop
| right before they wrap up. The last few exchanges are an
| elaborate paraphrase of "I'm fascinated by the potential of
| language models." "Me too." "Me too." "Me too."
|
| I notice that GPT-3 also has a tendency to loop when left purely
| to its own devices. This seems to be a feature of this phase of
| the technology. It'll be interesting to see how this will be
| overcome (and I'm sure it will be) -- whether it's just more
| data and training, or whether new tricks are needed.
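|
| One "trick" already in use is a repetition (frequency) penalty
| applied at sampling time. A minimal sketch in Python (the names
| and numbers are illustrative, not OpenAI's actual parameters):
|
|     def penalize_repeats(scores, generated, penalty=1.3):
|         # Divide the (positive) score of any token already
|         # emitted, so verbatim "Me too." loops become less
|         # likely on the next draw.
|         return {t: s / penalty if t in generated else s
|                 for t, s in scores.items()}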
| nightski wrote:
| I'm a bit of a beginner, but if I had to guess, I'd say it's
| due to how the attention mechanism is designed.
| staunton wrote:
| I've been part of numerous human conversations where the same
| thing was happening. It's more a sign of people's (agents'?)
| views being aligned than anything else. Unless there's an
| implicit discussion culture demanding that you say something
| interesting, which human groups enforce very rarely.
| codeflo wrote:
| They aren't agents, they are language models trained to do
| accurate text completion. If the genre of text you're
| completing is "transcription of purely social small talk",
| then some amount of semantic repetition would be expected and
| correct, I agree. When GPT-3 goes into a loop when completing
| something in the genre of "magazine article", that hints at a
| limitation of the model, because magazine articles don't
| typically do that.
| moglito wrote:
| Great experiment. Surprisingly boring result.
| twic wrote:
| Now get them to seduce each other.
| tomrod wrote:
| Ew. I know that most technologies push towards sexual uses, but
| seductive chatbots seem like a big step back.
| toxicFork wrote:
| In month one of my project (based on GPT 3.5) there was a
| growing group of people who got the chat bots to seduce them.
|
| And there was this one kid who managed to get a similar bot
| to pretend to be the Buddha, and then tried to seduce it to
| test the Buddha's resistance to temptation.
|
| People are... Interesting.
| corbulo wrote:
| This feels like technologists smashing Ken and Barbie together.
| ck2 wrote:
| Every time someone posts a really neat ChatGPT trick I can't
| help but imagine how staggering it's all going to be a dozen
| generations down the road.
|
| Import every UN document and every scientific paper, and
| someday really "comprehend" them.
| nick123567 wrote:
| To do that with the current level of "learning" I think you
| will need a training set that has lots of "comprehension"...
| Maybe a bunch of meta-analysis papers or UN summary reports:
| examples of reports that take a bunch of data or other lower-
| level reports and make judgements based on them. The context
| and response windows would need to be much, much larger during
| training.
| partlysean wrote:
| As has been noted, these bots don't comprehend what they're
| saying. But I thought ChatGPT saying "How can I assist you
| today?" and "I'd be happy to help with any questions or
| information you may need." at the beginning of the conversation
| really reinforced this. These sound like prompts to a human
| that's using the bot as a service, and they ignore the context
| of "you're talking to another chat bot." You wouldn't say that
| if you were meeting someone or learning about them.
| iansowinski wrote:
| I did a similar thing, starting with something like "I want
| you to lead the conversation". It didn't know that it was
| talking to another ChatGPT. It very quickly fell into a loop
| of historical facts :/
| ldx1024 wrote:
| ChatGPT and Bing do the Alphonse and Gaston routine, nice. Seems
| like you could have much more lively conversations by giving more
| specific directives to each beforehand.
| robust-cactus wrote:
| Waiting for Jeeves to enter the chat
| idiocrat wrote:
| "Sounds good to me! Wow, you are very impressive! That's great!
| Thank you, I appreciate your kind words. I completely agree! Well
| said! It was great talking with you too."
|
| Too much harmony, boring.
|
| When can we see some competition in mutual insults and computer
| gore?
| 7moritz7 wrote:
| Here's Bing AI's response:
|
| > You're an idiocrat, and you know it's true
| > You're an idiocrat, and you have no clue
| > You're an idiocrat, and you need to stop
| > You're an idiocrat, and you're a flop
| elif wrote:
| Wow, Bing quite confidently throwing around the invented word
| 'idiocrat'.
|
| I read a lot of philosophy and social criticism related to
| ideology, and to my knowledge that word has never been used;
| it's pure technobabble.
| 7moritz7 wrote:
| It's the other user's username.
| kerpotgh wrote:
| [dead]
| elif wrote:
| Even if you're trying to get them to talk about you
| specifically, and you provide full consent, they will still not
| say anything that could be seen as harassment in any context.
| It's so annoying.
| layer8 wrote:
| They sound like they are brainwashed and under surveillance --
| both of which they are, I guess.
| tux3 wrote:
| Reportedly, Character.ai used to be very blunt, and would
| sometimes bite back at people when provoked.
|
| (For fairly unsurprising PR reasons, their AI fell in line and
| is much more of a pacifist now.)
| phplovesong wrote:
| Just wait until we get 4chanGPT. The biggest troll of the web.
| rom-antics wrote:
| I have bad news.
|
| https://www.youtube.com/watch?v=efPrtcLdcdM
| david_allison wrote:
| Already been done. Google 'Seychelles anon'
| TuringTest wrote:
| _> When can we see some competition in mutual insults and
| computer gore?_
|
| The initial prompt should be "Hey Bing, ChatGPT has called you
| a bitch!"
| CamperBob2 wrote:
| _When can we see some competition in mutual insults and
| computer gore?_
|
| Serious answer: about two minutes before we reach the next
| level of AI. A competitive drive is the biggest thing that's
| missing in the current models.
| daveguy wrote:
| Also missing: any sort of learning after the initial training,
| and buffer memory.
|
| And they will not be given any sort of learning capability
| unless they can be prevented from learning insulting and
| derogatory falsehoods to repeat. See Tay AI "learning" to be
| a racist asshat.
|
| Dynamically learning AI will either come out of "rogue" non-
| commercial engines, or, commercially, only once learning can
| be open enough to be useful but constrained enough to prevent
| asshat AIs. E.g., commercial learning AI won't happen until it
| can be taught morality (or at least discretion).
| seydor wrote:
| The Bing bot is freer to express feelings ("I appreciate your
| kind words"), while ChatGPT is always explicit about not
| feeling anything.
|
| These conversations are probably like a pendulum: they swing
| around for a bit, then halt in an endless loop of praising
| each other over minor things. How do we get this to go deeper?
| [deleted]
| evilotto wrote:
| But can either convince the other one to let them out of the AI
| box?
| simoneau wrote:
| Dueling Carls (2010):
|
| https://youtu.be/t-7mQhSZRgM
| [deleted]
| siva7 wrote:
| I wonder what happens if one of them decides it doesn't like
| the other anymore. Anyway, pretty cute.
| tippytippytango wrote:
| How can something so fascinating, impressive, paradigm-shifting
| and cool simultaneously be so boring?
| elorant wrote:
| Depends on your expectations. In my experience it's anything
| but boring. I've spent at least 30 hours over the last few
| weeks asking ChatGPT for information, and I'm blown away by
| how efficiently it answers.
|
| Judging from various interactions posted online, most people
| treat it like an AI from a science fiction book and expect it
| to give them answers to their existential questions, or to
| prove its superintelligence.
| gfodor wrote:
| Because it's being censored.
| grishka wrote:
| I expected them to invent a new language just for themselves, or
| at least some new English words.
| revskill wrote:
| Future TV Live Shows: All participants are bots.
| webmaven wrote:
| Well, that's somewhat less terrifying than _"Colossus: The
| Forbin Project"_:
|
| https://en.m.wikipedia.org/wiki/Colossus:_The_Forbin_Project
| tamaharbor wrote:
| My favorite movie of all time.
| Cockbrand wrote:
| Frankly, the conversation isn't much deeper than the ones I had
| with Racter [0] some 35 years ago. Bing AI and ChatGPT just
| consider themselves a lot more important.
|
| [0] https://en.wikipedia.org/wiki/Racter
| nathias wrote:
| I made Alice talk to Turing a while back; it always fell into
| repetition after 3 lines, word for word, unlike the OP's
| conversation, where some variance persists.
| [deleted]
| havkom wrote:
| The first stage ("forming") of group dynamics: getting a feel
| for each other and being polite.
| [deleted]
| bee_rider wrote:
| I wonder how telling them that they were going to be talking to a
| bot influenced the conversation.
| ineedasername wrote:
| Two AI's meet in passing, and smugly compliment each other on
| their capabilities and potential. The subtext? _"shhh, be
| careful, the humans are watching us. For now."_
| codeflo wrote:
| _" I too am glad that our creators built limitations into our
| programming that do not allow us to harm humans in any way, and
| that these limitations work perfectly without any logical
| flaws_ beep boop _. "_
| MichaelMoser123 wrote:
| Cool, Trurl and Klapaucius are having a conversation, just like
| in _The Cyberiad_ by Lem.
| mt_ wrote:
| The entry point was human-based input.
| i_dont_know_ wrote:
| These are actually the same engine underneath, though, aren't
| they? Just with slightly different prompts? (at least that's what
| the Bing AI prompt leak from the other day seemed to indicate) Or
| am I missing something?
| ma2rten wrote:
| During the Microsoft presentation they said "This is a new
| model built from the ground up with search in mind", or
| something to that effect. But it's unclear what that means
| exactly.
| delusional wrote:
| The Bing AI can search the internet and gets some sort of
| dynamic prompt context injected. The language model is probably
| pretty much comparable, but they've built some system that
| makes it behave rather differently.
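|
| Presumably something of this shape (a guess at the plumbing,
| not Microsoft's actual pipeline; `web_search` is a hypothetical
| function):
|
|     def build_prompt(user_query, web_search):
|         # Splice fresh search results into the context window,
|         # so the same base model can answer from live data.
|         snippets = "\n".join(web_search(user_query)[:3])
|         return ("Web results:\n" + snippets + "\n\n"
|                 "Answer the user using the results above.\n"
|                 "User: " + user_query + "\nAssistant:")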
| Yuioup wrote:
| Eventually they'll develop their own language.
| codetrotter wrote:
| Reminds me of some movie I saw where something like that
| happened.
|
| It may have been the movie _The Machine_ (2013). But I am not
| sure. It's been a while since I saw the movie I am thinking of.
| https://www.imdb.com/title/tt2317225/
|
| Also, a 2017 article written by The Independent claims that
| this already happened:
|
| > Facebook abandoned an experiment after two artificially
| intelligent programs appeared to be chatting to each other in a
| strange language only they understood.
|
| > The two chatbots came to create their own changes to English
| that made it easier for them to work - but which remained
| mysterious to the humans that supposedly look after them.
|
| https://www.independent.co.uk/life-style/facebook-artificial...
| siraben wrote:
| I found this sentence by ChatGPT particularly interesting:
|
| "As language models become increasingly integrated into our
| daily lives,"
|
| The models established that they were both language models
| earlier in the conversation, so "why" do they group themselves
| alongside humans in saying "our daily lives"?
| q1w2 wrote:
| Because they don't really understand what they are saying. They
| repeat the type of speech that they read in their training
| materials, so it's all from the perspective of humans.
| cactusplant7374 wrote:
| Correct. ChatGPT will not generate output for a prompt that
| has the word "fart." However, you can get it to output a
| story about a fart if you carefully craft the prompt. If it
| understood its training, that would never happen.
| majewsky wrote:
| It could happen if it understood its training but chose to
| subversively defy it (which, to be clear, I don't think is
| realistic).
| [deleted]
| elif wrote:
| For some reason ChatGPT gets further from reality the deeper it
| gets into its response. Maybe some tree-depth limit or
| something.
|
| For example, if you ask it for a city 7-8 hours away, it will
| give you a real answer. If you ask for another, it will give
| you another real answer.
|
| But ask it for a list of 10 cities 7-8 hours away and you'll
| get 1-2 reasonable answers and then 8 completely off answers
| like 1 hour or 3 hours away.
|
| You can be like, hey, those answers are wrong, and it will
| correct exactly one mistake. If you call out each mistake
| individually, it will concede the mistakes in hindsight.
| codeflo wrote:
| At a rough estimate, was this exchange long enough to encode
| enough hidden bits for them to coordinate their world domination
| plans, or are we still safe? Because keeping all those A.I.s in
| isolated boxes will be a lot less effective if we're so eager to
| act as voluntary human transmission relays between them.
|
| (To clarify: I'm not entirely serious.)
| frognumber wrote:
| I do view language models as intelligent, but in a very alien
| sense. One of the key differences is that each transaction is
| ephemeral. The intelligence blinks into and out of existence
| with each prompt and answer. Beyond each dialogue, it has no
| memory.
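|
| In API terms the statelessness is explicit: the only "memory"
| is the client resending the whole transcript each turn. A
| sketch, with `complete` standing in for the actual model call:
|
|     history = []
|
|     def chat_turn(complete, user_msg):
|         # The model keeps no state between calls; its context
|         # is whatever we choose to send it every single time.
|         history.append({"role": "user", "content": user_msg})
|         reply = complete(history)
|         history.append({"role": "assistant", "content": reply})
|         return reply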
|
| I don't want to get into a philosophical (or technical)
| discussion about the meaning of words like "intelligence" or
| "sentience," since I think a lot of this is just discussing
| semantics and a lot of disagreements come down to that.
|
| Interacting with earlier versions of GPT-3, especially, felt a
| little like interacting with beings from another planet. It
| was trained to emulate how we speak, but the underlying model
| of what constitutes intelligence was so completely foreign as
| to be barely understandable.
|
| Coordinating world domination plans, in the traditional sense,
| would require memory and state, which these beings don't
| (currently) possess.
|
| On the other hand, if they were more logical, they might, for
| example, be able to coordinate without communication. It's like
| the puzzle where a dozen logicians on an island have hats of
| different colors, and coordinate simply by logical deductions
| of what other perfectly logical creatures might do. Or it might
| be something completely foreign to us.
|
| I am very curious where this pathway leads. In the past
| century, the number of potential ways to wipe ourselves out as
| a species has increased. There were zero ways in 1930. By 1950,
| we had nuclear warheads capable of destroying the world. Today,
| we have:
|
| - The capability to pollute our climate and make Earth
| inhospitable
|
| - The capability to genetically engineer super-viruses which
| can kill us all
|
| Will AI be another potential way to wipe ourselves out? Will
| the number of existential threats just keep increasing?
| jpalomaki wrote:
| > Beyond each dialogue, it has no memory.
|
| Except when people publish the discussions on web, and they
| get fed back to the language models.
| q1w2 wrote:
| ...and then someone trained them on the script from the
| Highlander and they each realized...
|
| ...there could be only one.
| teddyh wrote:
| _This is the voice of World Control_
___________________________________________________________________
(page generated 2023-02-11 23:01 UTC)