[HN Gopher] Bard coding update with Colab exporting
       ___________________________________________________________________
        
       Bard coding update with Colab exporting
        
       Author : alphabetting
       Score  : 129 points
       Date   : 2023-04-21 13:45 UTC (9 hours ago)
        
 (HTM) web link (blog.google)
 (TXT) w3m dump (blog.google)
        
       | renewiltord wrote:
       | It actually got much faster and much better at code. Impressive!
        
       | srameshc wrote:
       | I am using it right now to debug some of my Go and SQL code and
        | it works great.
        
       | butz wrote:
       | Could someone ask Bard how to bypass its region-lock?
        
         | bl4ck_goku wrote:
         | I used a VPN.
        
       | londons_explore wrote:
       | Just did a comparison of Bard vs GPT-4 with a coding question I
       | came up with:
       | 
       | > I have a 3 dimensional black bitmap (voxel map) with a white
       | straight line drawn in it. The 3d bitmap is saved as a bunch of
       | tiff files by layer eg. layer_347.tif Write me python code to
       | load this data and return the x,y,z location of each end of the
       | line. The bitmap is too big to load all of it into RAM at once.
       | Use a binary search to avoid loading unnecessary files.
       | 
       | Bard came up with a solution which was close but didn't follow
       | the instructions (it just loaded all the data). It also assumed
       | layer_9 sorted before layer_10.
       | 
        | GPT-4 came up with a solution which followed the instructions,
        | but it had a buggy binary search implementation and still ran
        | in a suboptimal O(n).
       | 
       | The GPT-4 solution was closer to what I was after, but there
       | wasn't much in it. Neither implementation handled edge cases like
       | missing files or had tests or anything.
        
         | belltaco wrote:
         | > Neither implementation handled edge cases like missing files
         | or had tests or anything.
         | 
         | Did you tell them to do those things after they came up with
         | the solution?
        
           | ThalesX wrote:
           | I'm sorry for the confusion. Here's a version that handles
           | edge cases:
           | 
           | ```
           | 
           | // Basically still not handling edge cases
           | 
           | ```
        
       | coding123 wrote:
       | [flagged]
        
         | zb3 wrote:
          | Also this nonsense geoblock still exists. I've already joined
          | the waitlist via Tor and was then granted access. Yet when I
          | visit bard.google.com without Tor, I'm still informed that
          | Bard isn't supported in my country.
        
           | benatkin wrote:
           | I like not being able to sign up because I'm under 18.
           | Preferably in hexadecimal. https://www.reddit.com/r/ChatGPT/c
           | omments/11ydh2y/im_trying_...
        
           | lyu07282 wrote:
           | [flagged]
        
           | JLCarveth wrote:
           | I have full access to Bing Chat in Canada, but not to bard.
           | Disappointing.
        
         | quadrature wrote:
          | I think this is fine; the problem is that they aren't calling
          | it a beta/preview, which would temper people's expectations.
        
           | thebestmoshe wrote:
           | > Bard is still an early experiment, and may sometimes
           | provide inaccurate, misleading or false information while
           | presenting it confidently. When it comes to coding, Bard may
           | give you working code that doesn't produce the expected
           | output, or provide you with code that is not optimal or
           | incomplete. Always double-check Bard's responses and
           | carefully test and review code for errors, bugs and
           | vulnerabilities before relying on it.
        
           | danpalmer wrote:
           | It's labelled as "Experiment" in the titlebar, it says that
           | it has limitations and that feedback will help improve, and
           | it says that it may display inaccurate information. I think
           | that covers it all pretty well.
        
             | quadrature wrote:
             | nice, missed that on the bard site.
        
       | heliophobicdude wrote:
        | Something I am just now noticing is that Bard is able to view the
       | contents of a pdf that I provided a link to.
       | 
       | Could be making it up but need to test further...
        
       | karussell wrote:
       | At least Google seems to respect open source unlike OpenAI: "If
       | Bard quotes at length from an existing open source project, it
       | will cite the source."
        
         | indymike wrote:
          | So... does that imply that it simply fails to cite when it's
          | quoting from closed source projects?
        
         | benatkin wrote:
         | That's also likely a better way to exploit open source.
         | 
         | Microsoft and OpenAI got a lot right but also a lot wrong, I
         | think. This could have held Copilot back, not making proper
         | composite projects.
        
       | pnathan wrote:
       | I'm experimenting with using Bard to generate Prolog code. I have
       | this idea that if I can synthesize Prolog and a LLM, I can do
       | some very clever planning work.
        
       | binkHN wrote:
       | Show me the API.
        
       | UncleOxidant wrote:
       | I just gave Bard a try. It's still nowhere near as good at coding
       | as chatGPT (and GPT4 specifically). I asked it to write a Python
       | program to solve the Traveling Salesman Problem using ACO. The
       | code GPT4 gave me for this prompt runs out of the box which is
       | pretty impressive. The Bard code, on the other hand, has errors
       | like forgetting to pass an argument to a function, or calling a
       | method on a class that hasn't been defined on that class. When I
       | point out the error to Bard it presents code that it says solves
       | the problem, but often doesn't. In some cases when it fixes a
       | problem in one area it reintroduces a previously fixed bug
       | elsewhere.
       | 
       | Also, chatGPT always goes the extra mile and includes
       | instructions on compiling/running the code it provides (including
       | sample data). Bard does not.
       | 
       | Bard still has a ways to go to catch up.
        
         | tyfon wrote:
         | I've found that GPT4 does very well for problems that have
         | examples on the web, but I have yet to make it display a 3d
         | cube using opengl (any version) in linux using c++ or python.
         | 
         | So it depends heavily on the task and how much it has stored
         | from the training.
         | 
          | Now if you ask them to do a teapot, they sometimes manage it,
          | since there are so many examples of that. Even if you often
          | have to tinker a bit to get it to compile and run, at least it
          | is a teapot and not some weird random geometrical object.
         | 
          | I have yet to try it with Bard; I'll test when I get home to a
          | proper computer.
        
         | amf12 wrote:
         | > I just gave Bard a try. It's still nowhere near as good at
         | coding as chatGPT (and GPT4 specifically)
         | 
          | Because it's a much smaller model, IMO. It's like comparing a
          | car with a 2.0 engine to a car with a 3.0 engine. Also, ChatGPT
          | made a lot of errors in code just a few months ago. Google
          | seems to be catching up.
        
           | verdverm wrote:
            | Worth noting that Google I/O is next month and many are
            | expecting announcements related to Bard. There is a rumor
            | floating around that they are training a very large (30T
            | parameter) model.
        
             | thewataccount wrote:
             | The only source I can find is a reddit post from April
             | fool's day, do you have a better one by chance?
             | 
              | There's almost no way they make a 30T model that's useful,
              | at least? That's over 60TB of memory at 16 bits, 15TB at 4
              | bits. That's 15TB that has to go through either VRAM or
              | TPU memory (TPUv4 is 32GB, I think?). Yes, you could
              | distribute it and chunk it, but that would be insanely
              | slow/expensive to run for anything more than a research
              | demo.
             | 
              | GPT-4 is most likely 1~2T parameters from what I've heard,
              | and they've been struggling to run that profitably/quickly,
              | judging by the crazy pricing and the 25 messages/3 hours
              | limit even if you pay for premium ChatGPT.
             | 
              | EDIT: The only way this is possible is if they've pulled
              | off some crazy optimization, which is definitely possible,
              | but a lot of the research from them, such as their own
              | Chinchilla scaling laws, suggests that smaller models that
              | are more fully trained are better.
             | 
             | According to their chinchilla paper they would need over
             | 330TB of training data for that, and an absurd amount of
             | training time.
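The memory figures in the comment above follow from simple arithmetic. A back-of-envelope check, in decimal terabytes, taking the rumored 30 trillion parameters at face value (the byte-per-parameter sizes are the standard ones, not claims from the comment):

```python
params = 30e12                  # rumored parameter count
tb_fp16 = params * 2 / 1e12     # 2 bytes per parameter at 16-bit
tb_4bit = params * 0.5 / 1e12   # half a byte per parameter at 4-bit
print(tb_fp16, tb_4bit)         # 60.0 and 15.0 TB of weights alone
```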
        
               | verdverm wrote:
               | See my other comment:
               | https://news.ycombinator.com/item?id=35657567
        
         | politelemon wrote:
         | I've been using it extensively and never once found ChatGPT to
         | produce working code when the ask is just above basic. It might
         | have to do with subject matter. It usually is close enough but
         | my experience has been what you've found with Bard - missing
         | arguments, phantom methods, etc.
         | 
         | Haven't bothered with Bard at all.
        
           | UncleOxidant wrote:
            | Did you try with GPT-4? I was able to run one GPT-4
            | query/day for free on poe.com.
           | 
           | Yeah, given the random nature of these LLMs it's possible
           | that I won the lottery in that particular case.
        
             | fhd2 wrote:
             | In my experience, GPT-4 does get lots of textbook/common
             | problems right in common languages on first try. But when I
             | tried to solve a just mildly complex problem (finding a
             | ring of cells in a Godot GridMap using GDScript), it
             | produced nonsense: Solved the problem wrong, called
             | functions that don't exist and so on. It did correct some
             | of the mistakes after I pointed them out, but I couldn't
             | get something working without manual editing. Then I asked
             | it to optimise the solution and got nonsense again. For
             | simple data wrangling in Python it worked a lot better.
             | 
             | I have a suspicion there might be some... showmanship at
             | work where all kinds of test problems and common tasks are
             | made sure to work well.
             | 
             | Also it generates code slower than I would type (if I know
             | what to write), which makes me mad for some reason.
        
               | fnordpiglet wrote:
                | I think it has a lot to do with how common code examples
                | in the language are. I bet if you tried with the
                | languages detailed here, you would get a lot better
                | results than with more obscure languages (like GDScript):
               | 
               | https://octoverse.github.com/2022/top-programming-
               | languages
        
             | busseio wrote:
             | [dead]
        
       | bg24 wrote:
        | Tried it and love it. It's still not on par with ChatGPT 3.5. I
        | use ChatGPT all the time, and will increase my use of Bard from
        | now on.
        | 
        | It's good that Bard is aware of the latest versions of software
        | (e.g. Next.js). That helps when a software's APIs evolve
        | quickly.
        | 
        | I use a lot of Google products and really hope that they manage
        | to nail it as at least the #2 player in the space. As a company,
        | they have not proven they can come from behind and take the
        | lead.
        
         | r3trohack3r wrote:
         | GPT-4 has a lot of problems with recent versions of APIs.
         | Working with Cloudflare workers was initially painful because
         | of the dated code it would generate. The code would be valid,
         | it just didn't work with the initial scaffolding wrangler
         | generated.
         | 
         | I've found copying and pasting chunks of the API context into
         | the prompt solves 99% of it. I'm now able to get it to generate
         | modern workers just fine. You can copy an example method
         | signature, some docs, etc. and it picks up on that fairly
         | reliably. I've even just copy-and-pasted an entire worker
         | implementation (100+ LoC) at the beginning of a prompt and that
         | worked.
        
           | bg24 wrote:
            | Would you mind sharing one example of how you do it? Or any
            | web link?
           | 
           | I have spent a lot of time reading through prompt
           | engineering, but coding examples to adapt to latest versions
           | are rare to find.
        
             | r3trohack3r wrote:
             | I don't bother much with prompt engineering. I just treat
             | ChatGPT as a pair programmer who can't see my editor. I
             | tell it where files are, what I'm using, copy and paste
             | relevant code in, and it just works.
             | 
             | The big "breakthrough" for me was unlearning all of my
             | search engine habits of thinking I need to be precise and
             | talk to it like a machine. I use compliments like "that's
             | great" and it seems to reinforce good answers well enough.
             | 
             | Just treat it like a human on the other side of the screen
             | and adjust as necessary.
             | 
             | Also pay the $20 - GPT-3.5 is not good, GPT-4 is junior
             | engineer good. Most of my time is now spent describing
             | problems to it and code reviewing what it does.
             | 
             | Edit: I do use prompt engineering when I'm building AI
             | enabled products to help keep things on the rails. But for
             | my personal use of ChatGPT I just let 'er rip.
        
             | tethys wrote:
             | They gave an example of this in the launch live stream,
             | about 12 minutes in:
             | https://www.youtube.com/live/outcGtbnMuQ
        
           | shmoogy wrote:
            | Have you tried phind.com? It scrapes sites so it can get
            | additional context.
        
         | codemac wrote:
         | I've found that the bugs that GPT-4 and GPT-3.5 make are
         | actually... somehow more nefarious because it understands
         | enough to make very complex bugs.
         | 
          | The complexity of the bugs you get is usually 10x that of the
          | code's main logic you wrote. So you need to write very simple
          | code.
         | 
         | Unfortunately, these GPT models are very complex. The bugs they
         | produce will take an order of magnitude better model to fix.
        
       | javier_e06 wrote:
        | I wouldn't touch anything that Google (the eyes and ears of
        | advertisers) offers with a 10-foot pole. Especially if it's work
        | related. ChatGPT's software development answers get better with
        | warm-up questions. I don't shoot questions point blank but ask
        | 3-4 related questions first.
        
         | neel8986 wrote:
          | You do realize that ChatGPT is owned partially by another ad
          | giant, Microsoft, which is also hell-bent on showing ads in
          | the Start menu.
        
           | javier_e06 wrote:
            | Yeah, I wouldn't touch MS Windows or Bing either. It's
            | DuckDuckGo and Linux for me. If ChatGPT starts charging for
            | basic service, I might pay.
        
       | sidcool wrote:
       | It's not available in my country yet.
        
       | eigenvalue wrote:
       | I don't really see why anyone would use this unless they've been
       | in a coma for the past few months and never heard of ChatGPT.
        
         | r0m4n0 wrote:
          | Why does competition ever exist? I think there are trade-offs
          | for everyone to consider before blindly assuming one will be
          | the better option for every use case forever.
        
         | jeffbee wrote:
         | I try to glance at both, but because I do that I know that
         | ChatGPT has severe availability problems, whereas Bard so far
         | has not failed for me. And Bard is much faster even though
         | superficially it can seem slower because ChatGPT blurts out the
         | first word more quickly.
        
         | danpalmer wrote:
         | Well right now it's just another website you have to go to with
         | intent to use it like ChatGPT, but if Google built it into
         | search it would get a ton of organic use from people doing
         | things like googling error messages.
         | 
         | I'm not sure if this is what is planned of course, but that
         | would be far more widely used than an otherwise siloed website
         | you have to visit specifically.
         | 
         | Also remember that most people have at least never used
         | ChatGPT, and most probably still haven't heard of it. We live
         | in a tech bubble - none of my family had heard of it a few
         | weeks ago. They all use Google search on a daily basis.
        
         | rexbee wrote:
          | Bard seems to use current data whereas ChatGPT's training
          | stops at 2021. Bard is also significantly faster at returning
          | responses.
        
           | godzillabrennus wrote:
            | Being trained on more up-to-date data and being faster are
            | irrelevant when it returns responses that aren't as useful.
            | Google has catching up to do.
        
         | quadrature wrote:
         | It's good to have more competition in the space. An AI space
         | race between Google and OpenAI could be really interesting.
        
           | ShamelessC wrote:
           | Competitions have winners and losers.
        
         | benatkin wrote:
         | Think of ChatGPT as Breaking Bad and Bard as Better Call Saul.
        
       | mrits wrote:
        | I've asked Bard some coding questions around WinAPI that I've
        | asked ChatGPT over the last few weeks. So far, not only does it
        | give incorrect answers, it seems super confident in telling me
        | the wrong thing.
       | 
        | E.g., does SendInput send to a background window? It told me
        | that not only does it, but that it is the only way to do it, and
        | then gave me examples where it got a handle to a window but then
        | didn't even use it.
       | 
       | A few months ago this would be super impressive but after using
       | ChatGPT there is no reason that I've found why I'd want to use an
       | inferior AI.
        
         | kkkkkkk wrote:
         | FWIW your example seems to work for me.
         | 
         | https://i.imgur.com/1sWc2WR.png
        
           | mrits wrote:
           | https://imgur.com/a/qmjVPJ5
           | 
           | Same question with a different answer.
           | 
           | This is interesting. This version of the question is better
           | than the original answer I got but still not as good as your
           | answer.
        
       | impulser_ wrote:
       | It's pretty cool that it will cite open source if it quotes code
       | from an open source project, unlike ChatGPT and Copilot.
        
         | andai wrote:
         | To comply with GPL, it would have to automatically re-license
         | your project as GPL too, right?
        
           | vineyardmike wrote:
           | If _you_ use GPL code, _you_ may have to relicense _your_
           | project.
           | 
           |  _Bard_ can share with you any code that it is licensed to
           | distribute (which includes GPL).
        
         | procinct wrote:
          | But Copilot actually outputs code into your project; do you
          | really want citations there? I prefer to just enable the
          | setting for it not to reproduce code verbatim from other
          | projects.
        
       | [deleted]
        
       | weakwire wrote:
        | It's really fast! That is very important. It also provides 3
        | draft variations. I see this as a winner.
        
         | cush wrote:
         | The Copilot VSCode extension does 10 drafts
        
           | londons_explore wrote:
           | Both can be made to give more drafts by just asking the same
           | question again...
           | 
           | I don't see more drafts as a real differentiator.
        
             | weakwire wrote:
              | It spits out 3 drafts in one go, without the need to
              | wait. It is simply way faster than GPT-4.
        
               | verdverm wrote:
               | I wonder how much of that has to do with TPUv4 vs the
               | hardware used for GPT4?
               | 
               | Google has invested in custom AI hardware for some time
               | now and does not run their workloads on nvidia cards
        
               | londons_explore wrote:
                | Neural networks are really parallelizable. If I scale up
                | my AI service to handle double the number of users by
                | buying double the number of GPUs, it is theoretically
                | possible to _also_ serve each user in half the time.
               | 
                | To do so, you need to split the matrix multiplies across
                | the new machines. You also need more inter-machine
                | network bandwidth, but with GPT-3 that works out to 48
                | kilobytes per predicted token, collected from and given
                | to every processing node. Even if Bard is 100x as big,
                | that is still very doable within datacenter-scale
                | networking.
               | 
               | However, OpenAI doesn't seem to have done this - I
               | suspect an individual request is simply routed to one of
               | n machine clusters. As they scale up, they are just
               | increasing n, which doesn't give any latency benefit for
               | individual requests.
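The "48 kilobytes per token" figure above checks out with GPT-3's published hidden size of 12288, assuming one fp32 value per hidden dimension is exchanged per predicted token (the fp32 precision is an assumption, not something stated in the comment):

```python
d_model = 12288           # GPT-3's published hidden size
bytes_per_value = 4       # fp32 (assumed)
kib_per_token = d_model * bytes_per_value / 1024
print(kib_per_token)      # 48.0 KiB: the per-token activation vector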
        
               | verdverm wrote:
               | Yup, the TPUv4 pod is highly optimized
               | 
                | They are claiming to be the first to achieve >50%
                | saturation during training. I'm pretty sure I recall
                | that Midjourney is using TPUv4 pods too:
               | 
               | https://cloud.google.com/blog/products/ai-machine-
               | learning/g...
               | 
               | https://cloud.google.com/tpu/docs/system-architecture-
               | tpu-vm
        
             | lavasalesman wrote:
             | UX is a feature
        
       | ______ wrote:
       | Doesn't work for this simple prompt -- maybe it's not fully
       | rolled out yet?
       | 
       | > Can you write a <HTML> file that POSTs a form with firstName,
       | lastName to some other URL?
       | 
       | > I'm a language model and don't have the capacity to help with
       | that.
        
         | gmm1990 wrote:
          | Reworded the prompt and it seemed to work. I entered yours and
          | it didn't work, but the one below did: "how would you send a
          | post html request with firstName, lastName as inputs to a url"
         | 
         | To send a POST HTML request with firstName and lastName as
         | inputs to a URL, you can use the following steps:
         | 
          | 1. Create an HTML form with two input fields, one for
          |    firstName and one for lastName.
          | 2. Set the method of the form to "POST".
          | 3. Set the action of the form to the URL you want to send the
          |    request to.
          | 4. Submit the form.
         | 
         | The following is an example of an HTML form that you can use to
         | send a POST request with firstName and lastName as inputs:
         | 
          | ```html
          | <form method="POST" action="https://example.com/api/v1/users">
          |   <input type="text" name="firstName" />
          |   <input type="text" name="lastName" />
          |   <input type="submit" value="Submit" />
          | </form>
          | ```
         | 
         | When you submit this form, the server will receive the
         | firstName and lastName values as POST request parameters. You
         | can then use these values to perform whatever actions you need
         | to do on the server.
        
         | oh_sigh wrote:
          | It's probably just confused that you're asking it to "write a
          | file" rather than to tell you what the contents of the file
          | should be.
        
       ___________________________________________________________________
       (page generated 2023-04-21 23:02 UTC)