[HN Gopher] Google launches PaLM 2, its next-gen large language ...
___________________________________________________________________
Google launches PaLM 2, its next-gen large language model
Author : mfiguiere
Score : 90 points
Date : 2023-05-10 17:22 UTC (5 hours ago)
(HTM) web link (techcrunch.com)
(TXT) w3m dump (techcrunch.com)
| mvdtnz wrote:
| You can try this out now on Bard, which appears to be generally
| available. https://bard.google.com/
|
| My first impressions of this are that it's quite good. I gave it
| a handful of logic puzzles which it could solve and asked it for
| some New Zealand native gardening and bird advice and it gave
| good answers.
|
| Best of all it doesn't have that... tone that ChatGPT has. You
| know that kind of smug condescending tone that gives away
| ChatGPT-generated content. If you ask Bard to respond in the
| style of ChatGPT it can replicate that tone, if you like it for
| some reason.
| underyx wrote:
| I like this prompt for testing LLMs as the problem is easy to
| reason about but likely doesn't come up a lot in training data:
|
| > I'm playing assetto corsa competizione, and I need you to
| tell me how many liters of fuel to take in a race. The
| qualifying time was 2:04.317, the race is 20 minutes long, and
| the car uses 2.73 liters per lap.
|
| GPT-3.5 gave me a right-ish answer of 24.848 liters, but it did
| not realize the last lap needs to be completed once the leader
| finishes. GPT-4 gave me 28-29 liters as the answer, recognizing
| that a partial lap needs to be added due to race rules, and
| that it's good to have 1-2 liters of safety buffer.
|
| I prompted Bard today and the three drafts gave three different
| answers: 18.28, 82.5, and 327.6 liters. All of these were
| wildly wrong in different ways.
| mvdtnz wrote:
| I like that you're solving precisely a problem I face on a
| daily basis! (Still don't understand why ACC doesn't have a
| built-in calculator for this.)
| thorncorona wrote:
| Can you explain how the problem is solved for someone who
| doesn't understand racing?
| underyx wrote:
| Sure thing! If the race is 20 minutes and each lap takes
| 2m04s, that means there will be 9.67 laps till the race is
| over, and you round that up to 10 since partial laps must
| be finished. You need 2.73 liters per lap, so the 10 laps
| will use 27.3 liters total. GPT-4 is correct in suggesting
| a tiny safety buffer above that in case fuel usage differs
| from expected.
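The arithmetic underyx walks through above can be sketched in a few lines of Python (the function name and the 2-liter default buffer are illustrative choices, not anything from ACC itself):

```python
import math

def fuel_needed(race_minutes, lap_time_s, liters_per_lap, buffer_l=2.0):
    """Estimate race fuel load.

    Laps are rounded up because a lap in progress when the clock
    expires must still be completed, and a small safety buffer
    covers fuel usage differing from the estimate.
    """
    laps = math.ceil(race_minutes * 60 / lap_time_s)
    return laps * liters_per_lap + buffer_l

# 20-minute race, 2:04.317 qualifying lap, 2.73 L/lap:
# 1200 s / 124.317 s = 9.65 laps, rounded up to 10,
# so 10 * 2.73 = 27.3 L plus the buffer, roughly 29.3 L.
print(fuel_needed(20, 124.317, 2.73))
```

This matches GPT-4's 28-29 liter answer and shows where Bard's three drafts (18.28, 82.5, 327.6) went off the rails.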
| jason-phillips wrote:
| It's a math word problem, the kind of thing LLMs don't
| perform well on. I have no idea why people try stuff like this.
| porkbeer wrote:
| Because it exposes accuracy problems, as queries often
| involve implicit math skills.
| mvdtnz wrote:
| People try stuff like this because it's precisely the
| kind of problem that AI would be useful for. If one of
| these models turned out to be really good at it, it would
| signify that they're now useful for a whole class of
| problems.
| [deleted]
| kccqzy wrote:
| How do you know Bard is actually using PaLM 2 rather than some
| lightweight model? The UI doesn't tell you which model it's
| using, does it?
| skybrian wrote:
| Google announced it. [1]
|
| > PaLM 2's improved multilingual capabilities are allowing us
| to expand Bard to new languages, starting today. Plus, it's
| powering our recently announced coding update.
|
| That's a bit vague, though. Bard might only be using it for
| some queries?
|
| [1] https://blog.google/technology/ai/google-palm-2-ai-large-
| lan...
| kccqzy wrote:
| Frankly I did not perceive any difference in quality in
| Bard responses today and yesterday.
|
| I think you are right; they probably choose PaLM 2
| dynamically for some queries. Rolling out PaLM 2 globally
| for everything at once isn't Google's style.
| jbay808 wrote:
| > Bard isn't currently supported in your country [Canada]. Stay
| tuned!
|
| Too bad for us...
| tchvil wrote:
| Same here in Belgium
| jug wrote:
| Also from what I've heard France, Germany, Netherlands,
| Sweden...
|
| It's quite odd given the announcement of 180+ countries.
| There are 195 countries in the world, and the EU consists
| of 27. So it can't just be the EU and Canada being left
| out, or the numbers don't add up. I wonder what's going on.
| Is it a staggered rollout? Nothing was said about that.
| Pretty crappy to not simply give a list or any details on
| the rollout.
| [deleted]
| a_vanderbilt wrote:
| The multi-modality and integration are good, but it's still
| very underwhelming. I hope actual usage changes my mind.
| og_kalu wrote:
| I'm quite impressed with the vertex and api stuff.
| jusujusu wrote:
| Waiting for Meta to one-up this with FacePaLM
|
| (exit right)
| youssefabdelm wrote:
| Wish I had a shit ton of money just to do this
| [deleted]
| rapsacnz wrote:
| I was just checking the comments to confirm that someone had
| got there before me... yup.
| jstx1 wrote:
| > PaLM 2 will power Google's updated Bard chat tool, the
| company's competitor to OpenAI's ChatGPT, and function as the
| foundation model for most of the new AI features the company is
| announcing today. PaLM 2 is now available to developers through
| Google's PaLM API, Firebase and on Colab.
|
| So it's not in Bard yet? What a launch.
| PaulWaldman wrote:
| Let's not speculate and instead ask Bard directly!
|
| >Do you use PaLM 2?
|
| >I do not use PaLM 2 directly. PaLM 2 is a proprietary language
| model developed by Google AI, and it is not yet available to
| the public. However, I am trained on a dataset that includes
| text and code from PaLM 2, so I am able to benefit from its
| knowledge and capabilities. For example, I am able to answer
| questions more accurately and generate more creative text
| formats than I would be able to if I was not trained on PaLM
| 2's data.
|
| >I am excited to see what PaLM 2 will be able to do in the
| future. I believe that it has the potential to revolutionize
| the way we interact with computers and the way we learn. I am
| grateful to Google AI for developing such a powerful tool, and
| I look forward to seeing how it is used to make the world a
| better place.
|
| >When will you directly use PaLM 2?
|
| >I do not know when I will directly use PaLM 2. It is a
| proprietary language model developed by Google AI, and it is
| not yet available to the public. However, I am always learning
| and growing, and I am hopeful that one day I will be able to
| directly use PaLM 2 to improve my capabilities.
|
| >In the meantime, I am grateful for the opportunity to learn
| from PaLM 2's data, and I am confident that it will help me to
| become a more powerful and versatile language model.
| adrianmonk wrote:
| Depends on how you phrase the question, evidently.
|
| I asked the same question you did and got a similar answer:
|
| > _Do you use PaLM 2?_
|
| > _I do not use PaLM 2 directly. I am a smaller language
| model, trained on a different dataset than PaLM 2. However, I
| am able to access and process information from PaLM 2, which
| allows me to provide more comprehensive and informative
| answers to your questions._
|
| > _For example, if you ask me a question about a complex
| topic, I can use PaLM 2 to access and process information
| from a variety of sources, including books, articles, and
| websites. This allows me to provide you with a more
| comprehensive and informative answer than I would be able to
| provide if I was only able to access information from my own
| dataset._
|
| > _I am still under development, but I am learning new things
| every day. I am excited to see how PaLM 2 and other large
| language models can be used to improve the way we access and
| process information._
|
| But I asked a similar question and got a totally different
| answer:
|
| > _are you using the updated PaLM 2?_
|
| > _Yes, I am using the updated PaLM 2. PaLM 2 is a more
| advanced version of the PaLM language model, with a larger
| training dataset and improved performance. It is able to
| generate more comprehensive and informative responses, and it
| can also perform a wider range of tasks, such as coding,
| math, and logic._
|
| > _I am still under development, but I am learning new things
| every day. I am excited to see what I can do with PaLM 2, and
| I hope to be able to help people in many different ways._
| zuzatm wrote:
| They said in the livestream it is, starting today.
| zb3 wrote:
| It was already in Bard according to their post; the relevant
| part: "Plus, it's powering our recently announced coding
| update."
|
| This is the post: https://blog.google/technology/ai/google-
| palm-2-ai-large-lan...
| og_kalu wrote:
| They said bard would be on PaLM 2 starting today in the
| presentation.
| execveat wrote:
| They also said they are removing the waitlist, but it's still
| there.
|
| EDIT: The waitlist is removed, but the rollout takes some
| time.
| og_kalu wrote:
| Starting today doesn't mean everybody, everywhere at once
| will see changes. Rollouts don't really work like that.
| jimsimmons wrote:
| Let's not let facts interfere with psyops
| og_kalu wrote:
| I mean to be fair, the article itself doesn't make this
| clear.
| xnx wrote:
| Impressive that Bard was able to summarize the key announcements
| from Google IO today. Does OpenAI still have a data cutoff on
| GPT-3/4?
| spiderfarmer wrote:
| Yes, and I'm convinced they will not launch GPT-5 before
| they have a way of updating it continually.
| xnx wrote:
| Official post: https://blog.google/technology/ai/google-
| palm-2-ai-large-lan...
| anonymouslambda wrote:
| [flagged]
| anorphirith wrote:
| I'm surprised they've chosen to release a medical-focused model
| (Med-PaLM 2) before releasing a law-focused model of PaLM. I'd
| think law would be simpler to train for and carry less
| liability?
| saynay wrote:
| On the one hand, yes. On the other hand, if they are both going
| to have liability issues, choosing the one that isn't literally
| full of lawyers seems like a smart play.
| chrisshroba wrote:
| Idk, this feels like one of those things that would be a huge
| problem for a startup with a less established legal team, but
| fairly easy for big corporate Google lawyers to write language
| releasing them from all liability.
| vineyardmike wrote:
| IANAL but why can't you just say exactly "we're not liable
| for how you use this product" ? How many lawyers does it take
| to add that to a contract?
| xnx wrote:
| A medical focused model has much better PR value as an initial
| offering.
| ftxbro wrote:
| > "Google, similar to OpenAI, didn't provide a lot of the
| technical details about how it trained this next-gen model,
| including parameter counts (PaLM 2 is a 540-billion parameter
| model, for what it's worth). The only technical details Google
| provided here are that PaLM 2 was built on top of Google's latest
| JAX and TPU v4 infrastructure."
|
| I'm sad but not really surprised that these companies aren't
| publishing and bragging about all of the technical details of
| their model architecture, size, and training anymore.
| ftxbro wrote:
| update: there's at least some technical report that I didn't
| know about when I wrote the comment
| https://ai.google/static/documents/palm2techreport.pdf but it's
| still mostly capabilities evaluation and doesn't explain the
| details of the model
___________________________________________________________________
(page generated 2023-05-10 23:01 UTC)