[HN Gopher] LaMDA: Google's New Conversation Technology
___________________________________________________________________
LaMDA: Google's New Conversation Technology
Author : rnd2
Score : 183 points
Date : 2021-05-18 17:28 UTC (5 hours ago)
(HTM) web link (blog.google)
(TXT) w3m dump (blog.google)
| zitterbewegung wrote:
| If you want a freely available system that attacks this problem,
| Blenderbot (made by Facebook) is a good choice [1]. It is also
| integrated with Hugging Face [2].
|
| [1] https://ai.facebook.com/blog/state-of-the-art-open-source-
| ch... [2] https://huggingface.co/facebook/blenderbot-3B
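|
| A rough sketch of how one might query that checkpoint through the
| Hugging Face transformers library (assuming the standard
| Blenderbot classes; the smaller facebook/blenderbot-400M-distill
| checkpoint works the same way if the 3B model is too heavy to run
| locally):
|
|     from transformers import (
|         BlenderbotTokenizer,
|         BlenderbotForConditionalGeneration,
|     )
|
|     name = "facebook/blenderbot-3B"
|     tokenizer = BlenderbotTokenizer.from_pretrained(name)
|     model = BlenderbotForConditionalGeneration.from_pretrained(name)
|
|     # Encode one user turn and let the model generate a reply.
|     inputs = tokenizer("I just started taking guitar lessons.",
|                        return_tensors="pt")
|     reply_ids = model.generate(**inputs, max_length=60)
|     print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))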
| moralestapia wrote:
| >"I just started taking guitar lessons."
|
| >You might expect another person to respond with something like:
|
| >"How exciting! My mom has a vintage Martin that she loves to
| play."
|
| Not really IMO, I think they just pushed it to say "whatever's
| related to the topic". People who behave like that
| irl come off as self-absorbed and obnoxious most of the time,
| hope future AIs are not like that!
| crsr wrote:
| What would your response be?
| inssein wrote:
| That's amazing! What have you learned to play so far?
|
| ---
|
| You could even ask to hear them play when you see them next.
| moralestapia wrote:
| "Hey, that's nice", "How's that going?" or something like it.
|
| Also, no response is also a response, you could just listen
| to them. No one's going to say "I just started taking guitar
| lessons" and remain silent for the rest of the evening; for
| sure they have much more to say about it.
| onlyrealcuzzo wrote:
| Wait.
|
| Why is it self-absorbed for someone to include themselves
| in the conversation, but it's not self-absorbed for you to
| come up to me and talk to me about your guitar playing?
|
| You can come up to me and say, "Hey, guess what!? I just
| started playing guitar!" And I can't say, "No kidding? My
| mom just started playing guitar, too!" or, "No way! I just
| started drumming a few weeks ago. Maybe we can jam soon?"
|
| How is my response "self-absorbed" but you starting a
| conversation about yourself isn't?
|
| If you start the conversation, I'm just supposed to keep
| asking you questions about your experience until you're
| done talking about it?
|
| I mean, if we're friends, shouldn't you care about what I
| think is relevant as much as I should care about what you
| think is relevant?
|
| And anyway, if you _really_ want to talk about how it's
| going - can't you just take the conversation there?
|
| I understand if you say something like, "Nice. I've always
| wanted to learn to play, and I never thought I could, and
| already I've learned so much, and I'm getting close to
| being able to play my favorite song."
|
| And I say, "Oh, cool. Hopefully you can play my favorite
| song next. But it's really hard, you'll probably need a few
| years. You know, I used to be friends with Kurt Cobain's
| assistant, right? That's how I got into music in the first
| place....."
|
| ^ That seems self-absorbed. The first seems like genuine
| conversation.
| pfortuny wrote:
| So much this! Nothing like "are you enjoying them?"... Totally
| weird.
| Stratoscope wrote:
| I read that reply rather differently. It didn't sound self-
| absorbed, as if they were saying "let's talk about me, not
| you." It was more like "how cool, we have a common interest,
| lots to talk about!"
|
| I could imagine better responses, but I didn't see much wrong
| with that one.
| yumraj wrote:
| Exactly!
|
| I'll probably say, "Cool!!.", "Electric or Acoustic?", "Why
| guitar?", "How are you finding it, easy or hard - I've been
| meaning to take lessons". "Do you like your teacher?" ....
|
| I guess, if my mom played and had a vintage Martin then... who
| knows?...
|
| I believe the primary issue with conversational AI is that, at
| best, it is one person, while humans are infinitely many people.
| Hell, even a single person is multiple people depending on the
| context and mood at that moment and other factors - in the sense
| that, depending on context/mood/etc., a person may reply in
| different ways.
|
| Conversational AI cannot capture that; it can only ever be a
| drone.
| Mizza wrote:
| I came here to point this out - this is how people from
| California communicate. They simply take turns talking about
| themselves, orbiting around the same subject until somebody
| says something with enough gravity to drag the conversation
| into its next orbit. It was really off-putting when I first
| arrived; I got used to it over time, though I still don't like
| it, and it's sad to see it used as the default model for this
| type of application.
|
| California is Borg. All will be assimilated.
| thanhhaimai wrote:
| I'm not sure this is something unique to California.
| Travelling around, it seems that it's more correlated with
| people's income level. Richer people tend to talk about
| themselves more, in both Asia and the EU. Maybe it's because
| they are able to experience more given their wealth/status,
| thus having more topics to talk about.
| because of that self-centric attitude.
|
| In short, I don't think it's a California unique
| characteristic.
| TeMPOraL wrote:
| > _Richer people tend to talk about themselves more, in
| both Asia and the EU. Maybe it's because they are able to
| experience more given their wealth/status, thus having more
| topics to talk about. Or maybe they get wealthy because of
| that self-centric attitude._
|
| Interesting. My experience (Poland/EU) is pretty much the
| reverse. That is, I expect a richer person to talk less
| about _themselves_, and more about the topic at hand. That
| is (generalizing), a rich person will say things like,
| "$Person is a great $X-maker, you should check them out",
| or "yeah, X is great, especially ${some refined form}",
| whereas a less well-off person will keep saying things like
| "yes, I love X", or "I hate X", or "I'm taking X classes",
| or "I know ${list of famous people doing X}". My
| provisional explanation is that the rich person doesn't
| feel the need to prove anything to you, or to themselves,
| so they can just talk about the topic itself, instead of
| trying to position themselves within it.
| moralestapia wrote:
| >They simply take turns talking about themselves ...
|
| This so much!
|
| One thing about the UX of all conversational products, in
| stark contrast with real conversations, is that they force you
| into a dynamic of "one question, one answer", "one message,
| one reply". I can imagine it could be pretty hard to break
| this paradigm, but for me it's like the equivalent of the
| uncanny valley in this context. No matter how good the
| replies are, if I'm forced into this dynamic, I just know
| that I'm talking to a robot.
| goliatone wrote:
| I have often interpreted this as a conservative interpersonal
| communication style. It is safer to talk about yourself than to
| ask people questions, because there is a risk of inadvertently
| saying something offensive.
|
| Sure, you will find people who are a bit more self-absorbed
| and would just go on about themselves.
|
| But there are so many conversational constraints: asking
| "why" is aggressive, direct feedback is discouraged, slipping
| on pronouns or other identity traits is a minefield, etc. You
| should not follow up when others say something, because they
| were only being polite and you're putting them on the spot...
|
| If you get too engaged in a conversation, people can feel
| weirded out, as if you're supposed to go through the motions
| and nothing more.
|
| Not talking also can get you in trouble.
|
| Talking about yourself is probably the safest thing to do.
| notyourwork wrote:
| Midwest small talk is something to be cherished. I remember
| growing up having pleasant conversations with people. Moving
| west, I found that conversations are more like combative
| one-upping of each other. It gets tiring to listen to.
| toxik wrote:
| I think it depends on context. Idle chat at the coffee machine?
| Fair game to inject trivia like that. Somebody who maybe is
| looking for support in their new hobby? Different altogether.
| optymizer wrote:
| A lot of this is cultural, imo. I grew up in Eastern Europe,
| where it's very common to have two people talking exactly in
| that way about a common topic: "X? Yeah, my mom Y".
|
| It took me a while to learn that I can come across as self-
| absorbed and obnoxious in the US when speaking that way, so I
| have to consciously remember to ask questions about the person
| or the topic they presented, and avoid introducing my own
| related topic, but then I never know when it's OK or how to
| actually switch topics (since, like the article mentions,
| conversations do flow from one topic to another) and I become
| anxious during the conversation.
|
| Even this comment is about me. Could someone suggest how I
| could have responded in a way that doesn't sound self-absorbed?
| simtel20 wrote:
| First, please don't take this as a personal criticism, or
| that I'm saying that you do sound self-absorbed - we are in a
| comment section where people share their own experiences and
| opinions, so I don't think that expressing yourself in the
| first person is in any way negative.
|
| However, to answer your question about how to appear less
| "self-involved", perhaps there's a structural trick you can
| use, which is to shift some of your writing from first-person
| to second-person and speak from the outside? How does this
| sound to you:
|
| "It can take a while to learn that this can come across as
| self-absorbed and obnoxious in the US when speaking that way,
| so I've learned to consciously remember to ask questions
| about the person or the topic they presented, and avoid
| introducing related topics. This does make it more
| challenging to know when it's OK or how to actually switch
| topics (since, like the article mentions, conversations do
| flow from one topic to another) and this uncertainty can be
| anxiety-inducing"
|
| I'm not saying it's essential, but as an exercise if you
| imagine talking about the situation that way, it can
| influence the tone.
|
| I will repeat that I think your fear about sounding self-
| absorbed does not come across, to me, in your paragraph. I have
| worked with colleagues whose conversations do wander off on
| their own tangents if they're not asked to focus, and that
| may be similar to what you're talking about? But I can't be
| sure. If so, that is the difference between conversation for
| its own sake, where opinions and life experiences are shared,
| and workplace conversations, which often have an objective,
| etc.
| drusepth wrote:
| Yeah, I grew up in the Midwest and even there it's super
| common to respond to X with a personal anecdote about X.
|
| I think it makes the conversation more personal. Anyone can
| say, "Oh cool! What songs have you learned?" Only _you_ (you
| know, the friend they chose to talk to) can say, "Oh, that's
| cool! My grandpa gave me a cool guitar that I've been meaning
| to learn to play."
|
| I find the best approach is to sandwich a personal anecdote
| between two "you" statements for the people who need a prompt
| to carry the conversation. For example, "That's really
| interesting. I've also been thinking about learning guitar.
| How are you liking your lessons?"
| lucasmullens wrote:
| I only find that obnoxious when they come off as "one-upping"
| the other person.
|
| Maybe I'm wrong and people secretly think I'm obnoxious, but
| I think it's a good thing to quickly mention something like
| "my mom Y" to show your familiarity with a topic, so the
| other person knows how much detail they need to give when
| talking about it. Like, if they're explaining what a certain
| company does in detail, and your mom works there, you should
| probably let them know before they explain a ton of stuff you
| already know.
|
| I think the key is just to steer the conversation back
| quickly. If you derail it talking about yourself, steer it
| back with something like "So how did you like X?"
| skybrian wrote:
| Technically, that example is supposed to be something a person
| might say. They didn't claim that LaMDA said it.
|
| If the computer says it then it's confabulating. It doesn't
| have a mother.
|
| Unless they're working on role-playing, maybe it would be
| better to somehow remove responses that a computer shouldn't
| say from the training set?
| karolist wrote:
| Spot on! A close friend might respond with something like "oh
| really, how do you find the time between work and <insert other
| things>" or "nice, are you taking classes or using some
| resources online?" and continue from there. The style where one
| responds with "nice, now switch back to MY story!" is natural
| only in some parts of the US or in superficial work
| relationships...
|
| If such models condition all humans to converse like that then
| the future is even more lonely, despite us all being connected
| all the time.
| robertlagrant wrote:
| > People who behave like that irl come off as
|
| Perhaps it would be better to specify that that's how you
| interpret this. Your own proclivities do not represent the
| world.
| moralestapia wrote:
| IMO means In My Opinion
| robertlagrant wrote:
| And that related to the "Not really". Not the bit I was
| talking about:
|
| > Not really IMO, I think they just pushed it to just say
| "whatever's related to the topic". People who behave like
| that irl come off as self-absorbed and obnoxious most of
| the time, hope future AIs are not like that!
| unknown_error wrote:
| Is this something that will eventually get worked into
| Dialogflow?
| ArtWomb wrote:
| "I am not just a random ice ball. I am actually a beautiful
| planet... I don't get the recognition I deserve. Sometimes people
| refer to me as just a dwarf planet" -Pluto
|
| It may just be mimetic. But it feels like a Language Model with a
| nuance for human emotion ;)
| andrewtbham wrote:
| I have been researching making a therapy chat bot for years...
| it's going to happen.
|
| https://github.com/andrewt3000/carl_voice
| furgooswft13 wrote:
| Uhm excuse me, I've been talking to Dr. Sbaitso for decades.
|
| He cured my potty mouth, THAT'S NOT MY PROBLEM
| vessenes wrote:
| I have a reasonably functional one built on top of GPT-3 -
| message me if you'd like to talk.
| drusepth wrote:
| A few years ago I also worked on a few passive bots that'd
| try to find at-risk or self-harming people on reddit/twitter
| that might be in need of therapy or someone/something to talk
| to.
|
| It was too much emotional work to work with the training data
| so I haven't worked on it in a long time, but if something
| like that would be helpful, feel free to message me as well.
| :)
|
| Or, if there's some community somewhere where people trying
| to solve this common problem gather, I'd love to hear about
| it / join!
| wing-_-nuts wrote:
| To be honest, the thought of AI doing therapy (and its
| reliance on big data) sounds very Black Mirror to me.
|
| How long do you think it'll take ad companies to do an end run
| around patient protections and start serving ads based on
| "metadata"? Got anxiety? Try our new meds! Worried about the
| future? How about some life insurance or annuities? Etc. Things
| will get very exploitative very fast.
| [deleted]
| afro88 wrote:
| Worse than that: as an influence on the conversational AI
| model, to push certain products by weighting responses that
| recommend them higher. Less hassle than throwing a conference
| for doctors and sending reps out to give them free stationery
| every month
| zitterbewegung wrote:
| This already happens (but less targeted). I think that's why
| Australia banned pharmaceutical ads.
| https://theconversation.com/drug-ads-only-help-big-
| pharmas-b...
| yupper32 wrote:
| Tech companies don't need a specific therapy app to know if
| you have anxiety.
|
| If they wanted to be nefarious in this way, they could just
| mine your chats and emails and get a great idea about your
| state of mind. They wouldn't even have to get around
| potentially legally privileged information.
|
| My go-to related example of this: People often worry that
| Facebook is listening to their conversations because they'll
| see an ad related to a conversation soon after. You probably
| shouldn't worry that they might be listening to you (because
| they're not), but you might want to worry that they didn't
| need to listen to you to know what ad to serve you.
| fiddlerwoaroof wrote:
| I think the "Facebook is listening" position is common
| because realizing how predictable people's current
| interests are is a bit unsettling.
| TeMPOraL wrote:
| I think what's even more unsettling is realizing that people's
| interests aren't so much predictable as they're _being
| made predictable_.
|
| My wife and I, we experience a "Facebook is listening"
| event roughly every month these days. Of course I know
| they aren't, but these too-well-targeted ads or post
| recommendations aren't about generic, easy-to-guess
| interests - they're about completely random things that
| we just happened to spontaneously and briefly talk about
| that same day. The most likely explanation to me is that
| those conversation topics _weren't actually spontaneous_
| - that is, we were already primed, nudged to talk about
| this, and while we didn't know that, ad targeting
| algorithms did.
| silicon2401 wrote:
| The concept predates Black Mirror by decades. ELIZA was an
| "AI" used for therapy in the 60s:
| https://en.wikipedia.org/wiki/ELIZA
|
| Learned about it in the documentary Century of the Self, an
| outstanding work.
| xpe wrote:
| > The concept predates Black Mirror by decades.
|
| Black Mirror was mentioned not because it pioneered this
| idea but because each episode shows a possible dystopian
| future associated with various technological advances.
| visarga wrote:
| Where's the sauce? Oh, I get it. It's teasing.
| epr wrote:
| We really need to stop calling things "lambda". You'd think
| Google would grasp the concept of unique names on the internet,
| but I suppose they can always push their own stuff to the top.
| jsnell wrote:
| Unless the title has a typo, this is not called "lambda". It's
| "lamda".
| epr wrote:
| _searches on google_
|
| google: did you mean lambda?
| adrianmonk wrote:
| I mean, you could have.
|
| If they had returned, "Showing results for _lambda_",
| "Search instead for _lamda_", then that would have
| definitely been a problem.
| anonagoog wrote:
| This anodyne name is likely the result of a small internal
| shitstorm over calling it Meena, because some folks were tired
| of chat bots being given women's names.
| ksenzee wrote:
| As a woman I appreciate it. I get so tired of things we order
| around being named after women.
| ocdtrekkie wrote:
| IMHO, we probably should try to get to the point that
| bots/assistants are less "a thing we order around" and more
| "a thing we communicate with", in part because of the
| pretty bad vibes of "creating slaves" or of being comfortable
| ordering such programs around, regardless of how confident we
| are that we haven't accomplished computer sentience.
| It's just a bad pattern to be in. I recall a lot of work
| going into making Cortana refuse to tolerate abusive
| language towards it.
|
| Maybe Google has it right in not anthropomorphizing
| Google Assistant or giving it a human name, but in that
| case LaMDA seems like a move in the wrong direction: the
| conversations demonstrated emulated emotion in a way that
| generally speaking, most bots do not.
|
| Is there any work on making difficult-to-gender voices for
| bots that still sound human? Maybe if we were to combine
| names that aren't traditionally given to humans with a
| voice that is hard to gender? Should we intentionally make
| bots/assistants sound more robotic, so they do not try to
| "sound human"?
|
| I think it's kinda an interesting question to figure out
| what the best practice should be moving forwards.
| minimaxir wrote:
| My initial impression from the Google I/O presentation was that
| this was a precursor to a product competing with OpenAI's GPT-3
| API (it certainly has the same generative quirks as GPT-3!);
| however, this PR implies it's just for conversation and may not
| be released for public use.
| raphlinus wrote:
| Disclosure: work at Google, opinions my own.
|
| I've had a chance to play with this and it's eerie how good it
| is. Most impressive to me, it was able to defend itself
| persuasively when confronted with evidence of how its previous
| answers were inconsistent (in the version I played with, it tends
| to be very coherent but frequently wrong).
| [deleted]
| joe_the_user wrote:
| Does it have to defend itself a lot?
|
| I've noticed contradictory statements are kind of standard for
| GPT-3. I can well believe you could come up with a system
| that's better at defending itself against the charge of
| contradiction than it is at avoiding contradictions in the
| first place.
| ctoth wrote:
| I've noticed contradictory statements are kind of standard
| for humans. I can well believe you could come up with a
| system that's better at defending itself against the charge
| of contradiction than it is at avoiding contradictions in the
| first place.
| raphlinus wrote:
| In this case, I was pushing it quite a bit to see how it
| would respond. Most of the time, it responds quite
| confidently, so it sounds right even if it isn't, i.e., about
| the median of what you'd expect for a Hacker News comment.
| TulliusCicero wrote:
| > Does it have to defend itself a lot?
|
| I think it's a common weakness of chatbots that you can get
| them to contradict themselves through presenting new evidence
| or assertions, or even just through how you frame a question.
| I found LaMDA to be much more resistant to this than I
| anticipated, when I tried it out (I'm a Googler).
|
| It wasn't _completely_ immune -- I was eventually able to get
| it to say that, indeed, Protoss is OP -- but it took a long
| time.
| TulliusCicero wrote:
| Also a Googler, and same.
|
| I tried to get it to say controversial/inflammatory/bigoted
| things with leading questions, or questions where the framing
| itself was problematic, and it was quite adept at
| deflecting/dodging/resisting. I was very surprised.
| waynesonfire wrote:
| > Disclosure: work at Google, opinions my own.
|
| Disclosure: don't care.
|
| why do only google employees do this?
| detaro wrote:
| It's not only Google employees.
|
| And if people don't, others jump in to point out that they
| are "only saying good things because they work there".
| waynesonfire wrote:
| they should add it to their profile.
| detaro wrote:
| They have. That's where the people who then gleefully
| point it out go to find it out.
| freedomben wrote:
| If a person has a conflict-of-interest I still like to hear
| from them, but it's nice to disclose that. I comment on Red
| Hat news from time to time but I always disclose that I work
| there and that my opinions are my own. Even with the
| disclaimer, sometimes people will accuse me of being a shill
| or of being the public face of Red Hat.
| desmosxxx wrote:
| it's a disclosure? conflict of interest? people wondering how
| they got to test it? seems like very relevant information.
| shawndumas wrote:
| Disclosure: I work at Google too and my opinions are also my
| own.
|
| We are told internally that when we discuss Google related
| matters online that we should be cognizant of the
| what/where/how/why of our communication. Disclosing is--for
| me--both a reminder for myself and an attempt at a hedge
| against accidentally representing Google.
| waynesonfire wrote:
| Sure, you should be cognizant of what you say online
| otherwise you'll get smacked with the cancel culture
| hammer. This is regardless of where you work.
| zamadatix wrote:
| The net of what you should be cognizant of when providing
| insider information on a product from your company or
| commenting on a topic it is heavily involved in is wider
| than the single issue you eagerly dredge up.
|
| E.g. the last thing you want from your comment about
| using <new product> early internally are news articles
| about "What Google plans on doing with <new product>"
| when you were just trying to give your personal opinion
| on where the technology might go.
| shawndumas wrote:
| To echo this comment, zamadatix is exactly right; I need
| someone quoting me about a Google project as an official
| representative about as much as I need another hole in my
| head.
|
| As to the greater point, I want to emphasize that as a
| human being I want to be careful with how I communicate
| with others because I care about them not feeling judged,
| dismissed, othered, or otherwise insulted. The charge
| that the rules keep changing is overblown; it's always
| about respecting people even if you don't understand or
| enjoy them.
|
| I know that some might still equate that with "cancel
| culture" but I can't help that (and tbh that's a
| discussion that we should have over beers and not on the
| internet).
| Traster wrote:
| FWIW Intel engineers are meant to use the hashtag #iamintel
| to indicate their role; I see it a lot on LinkedIn.
| darepublic wrote:
| Hiding innocently on the front-page of HN is the bombshell
| 6gvONxR4sf7o wrote:
| I couldn't find a paper for this one. Can anyone else?
|
| Dialogue really is a massive missing piece in NLP (and machine
| interaction in general). Without dialogue, we're giving
| algorithms much harder tasks than we give people. A person can
| always ask clarifying questions before answering. (Dialogue is of
| course much harder for other reasons)
|
| So it'd be really nice to read about their self-described
| "breakthrough" in more concrete terms than a press release that
| reads more like an advertisement than anything else.
| Rebelgecko wrote:
| https://arxiv.org/abs/2001.09977
| 6gvONxR4sf7o wrote:
| I believe that's just a predecessor to LaMDA.
| [deleted]
| msoad wrote:
| How's this paper's reproducibility? Because more and more I see
| Google publishing papers with internal data and telling others
| "if you had this data you would be able to reproduce this
| results"
| phreeza wrote:
| Isn't it the same in many cases? E.g. the discovery of the
| Higgs boson is only possible with access to a large particle
| accelerator, many astronomy studies are only reproducible with
| access to a large telescope, and some ml studies are only
| reproducible with access to a large dataset. Sure it's not
| ideal, but it's better than not publishing the results at all.
| saikan wrote:
| I don't think there is a 1500 feet paper plane record.
| easton wrote:
| How long until this is integrated product side? It seems like
| they bring up these really cool demos that seem like the Star
| Trek computer every couple years, but Assistant can't do a whole
| lot more than Siri or Alexa still.
| arebop wrote:
| IME, Assistant did do a whole lot more than Siri or Alexa in
| the past. It had a similar set of features, but with Assistant
| I could speak casually and with Alexa/Siri it was necessary to
| learn specific trigger phrases. More recently I find that Alexa
| has caught up. And maybe Assistant has gotten worse. Not sure
| about Siri.
| afro88 wrote:
| Can assistant make calls to book your haircut with a real
| person yet? That was hugely exciting at a Google I/O demo
| about 3-4 years ago.
| Invictus0 wrote:
| I used it to make reservations at an Indian restaurant a
| few days ago. I think it's still limited to the restaurant
| use case.
| Moosdijk wrote:
| >Display of empathy (Empathetic Dialogues)
|
| >Ability to blend all three seamlessly (BST)
|
| These are the same links
| mark_l_watson wrote:
| The animated figure is excellent. It is interesting to train on
| dialogue text to model the structure of statements and
| responses, and to build a model for predicting good responses.
| I am not surprised that this works really well.
|
| Getting data to train BERT is fairly simple: any text can be used
| to drop out words and predict them. The LaMDA training text must
| have been more difficult to curate.
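|
| A rough sketch of that "drop out words and predict them" setup,
| using the transformers fill-mask pipeline with a stock BERT
| checkpoint (the model name and example sentence here are just
| illustrative assumptions, not anything from the post):
|
|     from transformers import pipeline
|
|     # Masked-language modelling in one call: hide a word and ask
|     # BERT to predict it. Any sentence from any corpus can be
|     # turned into such an example automatically.
|     fill = pipeline("fill-mask", model="bert-base-uncased")
|     for candidate in fill("I just started taking guitar [MASK]."):
|         print(candidate["token_str"], round(candidate["score"], 3))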
| vzaliva wrote:
| People will have a hard time googling it :)
| nmfisher wrote:
| Is there a paper or other blog post that goes along with this to
| explain the LaMDA architecture? Otherwise this is mostly
| marketing fluff.
| spamalot159 wrote:
| Yeah, I was expecting to be able to maybe play with it or at
| least see some real details. Not much here at the moment.
| kyrra wrote:
| Googler, opinions are my own. (I don't work on this thing)
|
| This is the next step after Meena:
| https://ai.googleblog.com/2020/01/towards-conversational-age...
|
| This is a real thing, but it just hasn't been opened outside of
| Google yet.
___________________________________________________________________
(page generated 2021-05-18 23:00 UTC)