[HN Gopher] Gorilla-CLI: LLMs for CLI including K8s/AWS/GCP/Azur...
___________________________________________________________________
Gorilla-CLI: LLMs for CLI including K8s/AWS/GCP/Azure/sed and 1500
APIs
Author : shishirpatil
Score : 120 points
Date : 2023-06-29 17:52 UTC (5 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| [deleted]
| none2022 wrote:
| newbie question.
|
| Not to make this a debug thread but this is what I get when I try
| out gorilla
|
| > gorilla I want to find my ip address
|
|         /home/username/.local/lib/python3.10/site-packages/requests/__init__.py:102:
|         RequestsDependencyWarning: urllib3 (1.26.7) or chardet (5.1.0)/charset_normalizer (2.0.7)
|         doesn't match a supported version!
|           warnings.warn("urllib3 ({}) or chardet ({})/charset_normalizer ({}) doesn't match a supported "
|         Traceback (most recent call last):
|           File "/home/username/.local/bin/gorilla", line 8, in <module>
|             sys.exit(main())
|           File "/home/username/.local/lib/python3.10/site-packages/go_cli.py", line 128, in main
|             user_id = get_user_id()
|           File "/home/username/.local/lib/python3.10/site-packages/go_cli.py", line 76, in get_user_id
|             assert user_id != ""
|         AssertionError
| MichaelStubbs wrote:
| If you run "pip list --outdated" it should show you which of your
| installed packages are out of date. Look specifically for the
| packages mentioned in this error message: requests, urllib3,
| chardet, charset_normalizer.
|
| You can then upgrade them by doing "pip install [package name
| here] --upgrade".
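|
| For example, to pick up all of the packages named in that warning
| in one go (assuming they all live in the same ~/.local Python 3.10
| environment shown in the traceback):
|
|         pip install --upgrade requests urllib3 chardet charset_normalizer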
| moffkalast wrote:
| Wait so it crashes or does it genuinely generate that as the
| answer?
| cosmojg wrote:
| I recommend shell-gpt[1] for anyone with access to the OpenAI
| API. It works surprisingly well considering how simple it is. Be
| sure to browse the examples in the README.
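|
| For example (the --shell flag is how I get runnable commands out
| of it; check the README in case the syntax has changed):
|
|         sgpt --shell "I want to find my ip address"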
|
| [1] https://github.com/TheR1D/shell_gpt
| ok123456 wrote:
| It's not a local model. It queries some endpoint on someone
| else's computer.
| ororroro wrote:
| https://huggingface.co/gorilla-llm
| f0e4c2f7 wrote:
| Does it prompt for an API key?
| shishirpatil wrote:
| Nope. No API key needed since we mostly serve our own Gorilla
| models.
| toomuchtodo wrote:
| Not local _yet_. Considering the LLM/generative AI velocity
| we've seen, it's only a matter of time. It's helpful to see
| what others build, providing signal it _can_ be built.
|
| If you're not comfortable using it in your workflow, consider
| it a peek at what's to come. Very exciting times. And it's
| _open source_.
| bfung wrote:
| Yep, some google cloud server: SERVER_URL =
| "http://34.135.112.197:8000"
| bugglebeetle wrote:
| It's completely wild to me anyone would ever install and run
| this from a shell.
| shishirpatil wrote:
| Yes indeed. The models are too computationally expensive to run
| locally (7.5 billion parameters), though you could in principle
| swap in any local model.
| kajecounterhack wrote:
| Do y'all have plans to release the model for those who have
| 16gb graphics cards? (I'm assuming the model is fp16?)
| iandanforth wrote:
| I use an alternative that just directly calls OpenAI using my API
| key. I have it mapped to the command `ai` and it works really
| really well. So far I've found no need for any intermediary or
| fancy features. It just shows the command with a (y/[N]) prompt
| and I can choose to run it or not.
|
| I use the first tagged version of `aicmd` before it was given an
| unneeded intermediary:
| https://github.com/atinylittleshell/aicmd/tree/v1.0.2
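|
| Roughly, the whole thing boils down to something like this (a
| minimal sketch of the idea, not the actual aicmd code; assumes
| OPENAI_API_KEY is set and jq is installed):
|
|         #!/usr/bin/env bash
|         # ai: turn a natural-language request into one shell command
|         prompt="$*"
|         cmd=$(curl -s https://api.openai.com/v1/chat/completions \
|           -H "Authorization: Bearer $OPENAI_API_KEY" \
|           -H "Content-Type: application/json" \
|           -d "$(jq -n --arg p "$prompt" '{model: "gpt-3.5-turbo",
|                 messages: [
|                   {role: "system",
|                    content: "Reply with a single shell command, nothing else."},
|                   {role: "user", content: $p}]}')" \
|           | jq -r '.choices[0].message.content')
|         echo "$cmd"
|         read -r -p "Run it? (y/[N]) " ans
|         [ "$ans" = "y" ] && eval "$cmd"
|
| Drop something like that on your PATH as `ai` (or alias it) and you
| get the same show-then-confirm flow.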
| petercooper wrote:
| Can't speak for this tool yet, but ChatGPT has been great for
| this use case. Sometimes a tool doesn't have a man page, or has
| hard-to-navigate docs, or whatever; "What does $flag mean in $tool"
| tends to work (when it doesn't hallucinate something totally
| wrong).
|
| One recent example: "what does -w do on curl" .. not a single top
| 10 result on Google mentions it in the context, but GPT3.5 nails
| it in two seconds complete with a working example. I _know_ "-w"
| is interpreted by Google in a special way, but frankly given the
| obvious context I shouldn't _need_ to know how Google works.
| (Through experience I also know
| https://explainshell.com/explain?cmd=curl+-w will do a good job,
| but ChatGPT actually provides a working example which is even
| better.)
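|
| For reference, that kind of answer: -w/--write-out prints a format
| string after the transfer, with variables like %{http_code}, e.g.
|
|         curl -s -o /dev/null -w "%{http_code}\n" https://example.com
|
| which prints just the response status code.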
|
| That said, I do think you need a good critical eye to use LLMs in
| this context. It's like relying on a calculator. You still need
| mental math skills to know that 91 * 10 _can't_ equal 2511.
| Similarly, when GPT starts hallucinating, it helps if you have a
| high sensitivity to smelling it out.
| nforgerit wrote:
| > It's like relying on a calculator. You still need mental math
| skills to know that 91 * 10 can't equal 2511. Similarly, when
| GPT starts hallucinating, it helps if you have a high
| sensitivity to smelling it out.
|
| Well, at least my calculators don't have the error rate GPT-4
| still does. Especially for seemingly simple things like a command
| flag, I have zero trust that GPT won't give me something that will
| eventually erase all my data.
| hhh wrote:
| how often are you using LLMs?
| nforgerit wrote:
| Somehow I knew that this question would come up; questioning the
| "progress" makes me a heretic.
|
| For the last 2-3 months I've subscribed to ChatGPT-4 (and much
| longer to Copilot), worked through most of the HN threads with
| tips and reviews and the posts I could find on "prompt
| engineering", and have had hundreds of sessions with ChatGPT-4. I
| still might have missed something, but I think I have a rather
| good idea of what's going on.
|
| 1. It's rather good at understanding what I want. I can dump
| pretty much anything into it and give it certain rules (the skill
| we described years ago as "Google fu", until the Google SERP
| became useless) and it will make something out of it.
|
| 2. It's a nice rubber duck for discussing things and getting a
| broad overview of certain topics.
|
| 3. It's amazingly stupid about the validity of its own answers,
| even if I ask it for its confidence. It's like talking to an
| 8-year-old know-it-all: you have to fact-check everything. If I
| confront it with an error, it even reacts like an 8-year-old.
|
| 4. Initial responses to intentionally broad requests (summed up as
| "give me ansible yaml to deploy wireguard to N servers") often
| don't work at all, and after an hour of query-response you're
| better off just reading the Ansible docs.
|
| 5. For intentionally narrow questions (summed up as "what's the
| fastest algorithm to sort this given x, y, z and bla will never be
| A") it frequently comes up with good, sometimes surprisingly
| creative solutions.
|
| All in all: why oh why would I trade correctness for a significant
| error rate ("hallucination" is a word from SV marketing hell) and
| for debugging bullshit answers? Debugging is already a big drag in
| programming; I need things I can trust so I can build more things
| on top of them. If I can't 100% trust the "command" an LLM
| generates, I'll never let it execute its output directly.
| [deleted]
| thinkmassive wrote:
| I think a spreadsheet is a slightly better analogy than a
| calculator. The latter has well-defined capabilities and
| essentially 100% accuracy within those bounds. A spreadsheet with
| a minor typo in one field can produce drastically incorrect
| results that appear fine to the untrained eye.
| fangchenl wrote:
| [flagged]
| shishirpatil wrote:
| Hey HN! As one of the contributors and authors of Gorilla, I want
| to express our gratitude for your valuable feedback. The community's
| desire for a straightforward method to invoke Gorilla led to the
| development of this CLI tool. We appreciate your continued input,
| so please keep those suggestions coming!
| bugglebeetle wrote:
| Is this _actually_ developed by UC Berkeley or just a project by
| one of their PhD students?
| behnamoh wrote:
| Most likely an underpaid PhD student.
| valbaca wrote:
| by https://shishirpatil.github.io/
| shengs wrote:
| [flagged]
| ofermend wrote:
| Very cool. But like many other uses of LLMs it can hallucinate
| and/or produce a wrong result. For example I tried:
|
| "gorilla dry run of brew upgrade"
|
| And got a response that didn't work.
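|
| (For the record, the answer I was after, assuming I'm remembering
| brew's flag correctly, is simply:
|
|         brew upgrade --dry-run
|
| which lists what would be upgraded without upgrading anything.)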
| shishirpatil wrote:
| Thanks @ofermend, we believe that Gorilla hallucinates less than
| other models, but it's not zero yet! We will continue to reduce
| hallucination. Thanks for the feedback!
| scrps wrote:
| Waiting for the LLM version of ye olden IRC hazing/trolling of:
| "oh you can fix that with rm -rf /"
|
| Edit: Typo
| yewenjie wrote:
| How does this compare with github-copilot-cli?
| tianjunz wrote:
| Hi HN, I'm one of the authors of the Gorilla project. Gorilla is
| now available as a CLI, so you can interact with your laptop in
| English! Feedback and suggestions are very welcome!
| razzypitaz wrote:
| A flag for printing the chosen command to stdout instead of
| executing it in a subprocess would be helpful.
|
| Also I am finding in my environment that longer results don't
| line wrap and so it's hard to tell what the actual full command
| is, but that might be just me.
| shishirpatil wrote:
| Oo good suggestion @razzypitaz. We'll try to incorporate this in
| the next release :) BTW it's open source, so if you would be
| interested in raising a PR, we'd love to have you as a
| contributor!
| kleiba wrote:
| Geez, I don't understand most of the words in that headline...
| behnamoh wrote:
| [flagged]
| reeaper wrote:
| [flagged]
| dievskiy wrote:
| I would encourage the authors to update the README with more
| representative examples.
| linuxdude314 wrote:
| It's very sketchy that they use stderr and queries for training.
|
| Don't pass anything sensitive into this program!
| shishirpatil wrote:
| Hey @linuxdude314, thank you for the comment. As we mentioned,
| commands are executed solely with your explicit approval. And
| while we utilize queries and error logs (stderr) for model
| enhancement, we NEVER collect output data (stdout). This is a
| stronger guarantee than many of the other LLM tools out there, and
| our goal is for this to inform our research.
|
| One of the reasons we open-sourced the front-end is that if you
| would like to keep everything private, you can just clone the
| repo, comment out the logging, install it, and we will still serve
| your queries when you hit our hosted endpoint :) Let us know if
| there is anything more we can do to make you comfortable using our
| tool!
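|
| Concretely, that's something along these lines (a rough sketch;
| file names other than go_cli.py are guesses, so check the repo):
|
|         git clone <gorilla-cli repo> && cd gorilla-cli
|         # edit go_cli.py and comment out the query/stderr logging calls
|         pip install -e .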
| 66an wrote:
| [flagged]
| rombr wrote:
| Loving it! Works nicely for k8s:
|
|         (base) ~ g get the image ids of all pods running in all namespaces in kubernetes
|         kubectl get pods --all-namespaces -o jsonpath="{..imageID}"
|         sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0
|         sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0
| krayush1 wrote:
| Amazing work. I've always wished for something like this so that I
| don't have to remember commands or Google every time I need
| something. Great for productivity.
| MeteorMarc wrote:
| It runs in the terminal, so you can also CTRL-r it!
| behnamoh wrote:
| Doesn't this violate Gorilla Glue's trademark?
| jhbadger wrote:
| Why? There are lots of other products named Gorilla: Gorilla
| Glass (breakage-resistant glass for phones) and Gorilla Wear
| (clothing), for example. Trademarks are only relevant in a
| particular field, and computer utilities aren't adhesives, glass,
| or clothing.
| RamblingCTO wrote:
| We've gone full circle: from efficient meta-languages back to
| inefficient and ambiguous natural language.
| lemming wrote:
| I'm endlessly amused by the fact that among the first
| applications of LLMs were tools to summarise emails,
| accompanied by tools to write your emails based on a short
| description of what you want to say. So soon we'll effectively
| be communicating by text message, with the LLMs acting as a
| sort of anti-compression in between.
|
| My brother lives in Japan, and he recently had to write a lot
| of emails to the company renovating his apartment. He said that
| ChatGPT was a lifesaver there since at least 75% of semi-formal
| (i.e. between customer and company) Japanese emails is
| formality and filler. He just skipped all of that and ChatGPT
| wrote it for him.
| linuxdude314 wrote:
| Agreed; this is somewhat useful for beginners but incredibly
| silly for professionals.
| kunalgupta wrote:
| goodbye Fig!
___________________________________________________________________
(page generated 2023-06-29 23:01 UTC)