[HN Gopher] Show HN: My LLM CLI tool can run tools now, from Pyt...
___________________________________________________________________
Show HN: My LLM CLI tool can run tools now, from Python code or
plugins
Author : simonw
Score : 98 points
Date : 2025-05-27 20:53 UTC (2 hours ago)
(HTM) web link (simonwillison.net)
(TXT) w3m dump (simonwillison.net)
| behnamoh wrote:
| what are the use cases for llm, the CLI tool? I keep finding tgpt
| or the built-in AI features of iTerm2 sufficient for quick shell
| scripting. does llm have any special features that others don't?
| am I missing something?
| simonw wrote:
| I find it extremely useful as a research tool. It can talk to
| probably over 100 models at this point, providing a single
| interface to all of them and logging full details of prompts
| and responses to its SQLite database. This makes it fantastic
| for recording experiments with different models over time.
|
| The ability to pipe files and other program outputs _into_ an
| LLM is wildly useful. A few examples:
|
|     llm -f code.py -s 'Add type hints' > code_typed.py
|
|     git diff | llm -s 'write a commit message'
|
|     llm -f https://raw.githubusercontent.com/BenjaminAster/CSS-Minecraft/refs/heads/main/main.css \
|       -s 'explain all the tricks used by this CSS'
|
| It can process images too!
| https://simonwillison.net/2024/Oct/29/llm-multi-modal/
|
|     llm 'describe this photo' -a path/to/photo.jpg
|
| LLM plugins can be a lot of fun. One of my favorites is llm-cmd
| which adds the ability to do things like this:
|
|     llm install llm-cmd
|     llm cmd ffmpeg convert video.mov to mp4
|
| It proposes a command to run, you hit enter to run it. I use it
| for ffmpeg and similar tools all the time now.
| https://simonwillison.net/2024/Mar/26/llm-cmd/
|
| I'm getting a whole lot of coding done with LLM now too. Here's
| how I wrote one of my recent plugins:
|
|     llm -m openai/o3 \
|       -f https://raw.githubusercontent.com/simonw/llm-hacker-news/refs/heads/main/llm_hacker_news.py \
|       -f https://raw.githubusercontent.com/simonw/tools/refs/heads/main/github-issue-to-markdown.html \
|       -s 'Write a new fragments plugin in Python that registers
|           issue:org/repo/123 which fetches that issue number from the
|           specified github repo and uses the same markdown logic as the
|           HTML page to turn that into a fragment'
|
| I wrote about that one here:
| https://simonwillison.net/2025/Apr/20/llm-fragments-github/
|
| LLM was also used recently in that "How I used o3 to find
| CVE-2025-37899, a remote zeroday vulnerability in the Linux
| kernel's SMB implementation" story - to help automate running
| 100s of prompts: https://sean.heelan.io/2025/05/22/how-i-
| used-o3-to-find-cve-...
| th0ma5 wrote:
| "LLM was used to find" is not what they did
|
| > had I used o3 to find and fix the original vulnerability I
| would have, in theory [...]
|
| they ran a scenario that they thought could have led to
| finding it, which is pretty much not what you said. We don't
| know how much their foreshadowing crept into their LLM
| context, and even the article says it was also sort of
| chance. Please be more precise and don't give in to these
| false beliefs of productivity. Not yet at least.
| simonw wrote:
| I said "LLM was also used recently in that..." which is
| entirely true. They used my LLM CLI tool as part of the
| work they described in that post.
| setheron wrote:
| Wow what a great overview; is there a big doc to see all
| these options? I'd love to try it -- I've been trying `gh`
| copilot plugin but this looks more appealing.
| simonw wrote:
| I really need to put together a better tutorial - there's a
| TON of documentation but it's scattered across a bunch of
| different places:
|
| - The official docs: https://llm.datasette.io/
|
| - The workshop I gave at PyCon a few weeks ago:
| https://building-with-llms-pycon-2025.readthedocs.io/
|
| - The "New releases of LLM" series on my blog:
| https://simonwillison.net/series/llm-releases/
|
| - My "llm" tag, which has 195 posts now!
| https://simonwillison.net/tags/llm/
| setheron wrote:
| I use NixOS; seems like this got me enough to get started
| (I wanted Gemini):
|
|     # AI cli
|     (unstable.python3.withPackages (
|       ps: with ps; [ llm llm-gemini llm-cmd ]
|     ))
|
| looks like most of the plugins are models and most of the
| functionality you demo'd in the parent comment is baked
| into the tool itself.
|
| Yea a live document might be cool -- part of the
| interesting bit was seeing the "real" types of use cases
| you use it for.
|
| Anyways will give it a spin.
| furyofantares wrote:
| I don't use llm, but I have my own "think" tool (with MUCH less
| support than llm, it just calls openai + some special prompt I
| have set) and what I use it for is when I need to call an llm
| from a script.
|
| Most recently I wanted a script that could produce word lists
| from a dictionary of 180k words given a query, like "is this an
| animal?" The script breaks the dictionary up into chunks of
| size N (asking "which of these words is an animal? respond with
| just the list of words that match, or NONE if none, and nothing
| else"), makes M parallel "think" queries, and aggregates the
| results in an output text file.
|
| I had Claude Code do it, and even though I'm _already_ talking
| to an LLM, it's not a task that I trust an LLM to do without
| breaking the word list up into much smaller chunks and making
| loads of requests.
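The chunk-and-fan-out script described above can be sketched roughly like this. This is a hypothetical reconstruction, not the commenter's actual code: `query_llm` stands in for whatever single-prompt helper ("think" tool) you already have, and the chunk size and prompt wording are illustrative.

```python
# Hypothetical sketch: split a big word list into chunks, ask the model
# about each chunk in parallel, and merge the matching words.
from concurrent.futures import ThreadPoolExecutor

def chunked(words, size):
    # Yield the word list in fixed-size slices.
    for i in range(0, len(words), size):
        yield words[i:i + size]

def filter_words(words, question, query_llm, chunk_size=500, workers=8):
    # query_llm: callable taking a prompt string, returning the model's reply.
    prompt = (f"Which of these words {question}? Respond with just the "
              "matching words, one per line, or NONE if none match.")

    def ask(chunk):
        reply = query_llm(prompt + "\n" + "\n".join(chunk))
        return [] if reply.strip() == "NONE" else reply.split()

    # M parallel "think" queries, aggregated into one sorted, deduped list.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        hits = pool.map(ask, chunked(words, chunk_size))
    return sorted({w for chunk in hits for w in chunk})
```

Keeping the chunks small, as the commenter notes, is what makes the per-request task easy enough to trust the model with.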
| behnamoh wrote:
| unrelated note: your blog is nice and I've been following you for
| a while, but as a quick suggestion: could you make the code
| blocks (inline or not) highlighted and more visible?
| simonw wrote:
| I have syntax highlighting for blocks of Python code - e.g.
| this one https://simonwillison.net/2025/May/27/llm-
| tools/#tools-in-th... - is that not visible enough?
|
| This post has an unusually large number of code blocks without
| syntax highlighting since they're copy-pasted outputs from the
| debug tool which isn't in any formal syntax.
| th0ma5 wrote:
| [flagged]
| dang wrote:
| Comments like this break both the HN guidelines
| (https://news.ycombinator.com/newsguidelines.html) and the Show
| HN guidelines (https://news.ycombinator.com/showhn.html).
|
| Can you please review those and please not post like this?
| th0ma5 wrote:
| I don't know what context could be cordial when there is
| so much outright dishonesty about the state of LLM uptake.
| Can we have some guidance about how to call out bullshit? I
| know that the fake-it-till-you-make-it or sell-shovels-to-
| the-gold-rush people are staples on here, but I don't know
| what to do with a technology that is purely for that and
| seemingly for nothing else. Why do all of the breathless
| pro-LLM posts get pushed to the top of every LLM story,
| while you always have to go to the second comment to see
| the avalanche of people calling bullshit? Thank you for
| your time moderating; it would be helpful to understand the
| guidance in the face of the brigading on here.
| pvg wrote:
| _Can we have some guidance about how to call out bullshit?_
|
| That's right in the site docs linked above which you should
| check out.
| th0ma5 wrote:
| I also added another reply to a false assertion by the OP
| here in this thread. Is that better? It takes a long time
| to research all these falsehoods that are leading to the
| hype.
| simonw wrote:
| "don't actually do anything with this stuff" -
| https://github.com/simonw/llm/compare/0.25...0.26
|
| 138 commits to implement this new feature.
| th0ma5 wrote:
| I'm sorry I missed the part where you're delivering products
| for teams or organizations! Where is that?
| Uehreka wrote:
| That's what his llm CLI is? I've been waiting for this
| release so I can take my existing notes on best practices
| coding with LLMs (which I've been doing for both work
| projects and side projects) and try some experiments with
| rolling a coding agent myself instead of using Claude Code
| or VS Code's Agent mode. If it works well then other folks
| on my team might switch to it too.
|
| I don't get where you get the idea that people aren't
| actually using this stuff and being productive.
| davely wrote:
| I work for a tech company you've definitely heard of.
|
| I use the "llm" tool every single day. You may not know it,
| and that's okay, but Simon's tools are an immense help to
| tons of developers using LLMs.
|
| I know it's fun and trendy to hate on LLMs, but if you're
| not productive with them at this point, you're either:
|
| 1. Working on a novel problem or in some obscure language,
| or
|
| 2. Facing a skill issue related to how you utilize LLMs.
| oliviergg wrote:
| Thank you for this release. I believe your library is a key
| component to unlocking the potential of LLMs without the
| limitations/restrictions of existing clients.
|
| Since you released the 0.26 alpha, I've been trying to create
| a plugin to interact with some MCP servers, but it's a bit too
| challenging for me. So far, I've managed to connect and
| dynamically retrieve and use tools, but I'm not yet able to
| pass parameters.
| simonw wrote:
| Yeah I had a bit of an experiment with MCP this morning, to see
| if I could get a quick plugin demo out for it. It's a bit
| tricky! The official mcp Python library really wants you to run
| asyncio and connect to the server and introspect the available
| tools.
| sorenjan wrote:
| Every time I update llm I have to reinstall all plugins, like
| gemini and ollama. My Gemini key is still saved, as are my
| aliases for my ollama models, so I don't get why the installed
| plugins are lost.
| simonw wrote:
| Sorry about that! Presumably you're updating via Homebrew? That
| blows away your virtual environment, hence why the plugins all
| go missing.
|
| I have an idea to fix that by writing a 'plugins.txt' file
| somewhere with all of your installed plugins and then re-
| installing any that go missing - issue for that is here:
| https://github.com/simonw/llm/issues/575
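Until that issue lands, a manual version of the same idea might look like this. A hypothetical stopgap sketch: it assumes `llm plugins` prints a JSON array of objects with a `"name"` key (which current releases do), and the snapshot path is just an illustrative choice.

```shell
# Hypothetical workaround sketch for simonw/llm#575: snapshot the installed
# plugin list before upgrading, then reinstall whatever the upgrade wiped.
snapshot="$HOME/llm-plugins.txt"
if command -v llm >/dev/null 2>&1 && command -v jq >/dev/null 2>&1; then
    llm plugins | jq -r '.[].name' > "$snapshot"   # before the upgrade
    uv tool upgrade llm                            # this wipes the plugin venv
    xargs -n1 llm install < "$snapshot"            # put the plugins back
fi
```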
| sorenjan wrote:
| No, I'm using uv tool just like in that issue. I'll keep an
| eye on it, at least I know it's not just me.
| tionis wrote:
| I'm also using uv tools and fixed it by doing something
| like this to upgrade:
|
    uv tool install llm --upgrade \
      --with llm-openrouter --with llm-cmd ...
| dr_kretyn wrote:
A small plug for my own similar library: terminal-agent
(https://github.com/laszukdawid/terminal-agent), which also
supports tools and even MCP. It has limited agentic capability
that still needs polishing. I only learned about this `llm` CLI
once I'd made some progress on my own app. Though one more
won't harm.
| swyx wrote:
| nice one simon - i'm guessing this is mildly related to your
| observation that everyone is converging on the same set of tools?
| https://x.com/simonw/status/1927378768873550310
| simonw wrote:
| Actually a total coincidence! I have been trying to ship this
| for weeks.
| ttul wrote:
| GPT-4.1 is a capable model, especially for structured outputs and
| tool calling. I've been using LLMs for my day to day grunt work
for two years now, and this is my go-to as a great combination of
| cheap and capable.
| simonw wrote:
I'm honestly really impressed with GPT-4.1 mini. It is my
default for messing around with their API because it is
unbelievably inexpensive and genuinely capable at most of the
things I throw at it.
|
| I'll switch to o4-mini when I'm writing code, but otherwise
| 4.1-mini usually does a great job.
| tantalor wrote:
| This greatly opens up the risk of footguns.
|
| The doc [1] warns about prompt injection, but I think a more
| likely scenario is self-inflicted harm. For instance, you give a
| tool access to your brokerage account to automate trading. Even
| without prompt injection, there's nothing preventing the bot from
| making stupid trades.
|
| [1] https://llm.datasette.io/en/stable/tools.html
| abc-1 wrote:
| Is this satire? Don't give your LLM access to your trading
| account or bank account.
___________________________________________________________________
(page generated 2025-05-27 23:00 UTC)