[HN Gopher] Open-interpreter: A natural language interface for c...
___________________________________________________________________
Open-interpreter: A natural language interface for computers
Author : 2-3-7-43-1807
Score : 48 points
Date : 2024-11-18 11:06 UTC (6 days ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| cxr wrote:
| It's funny that we're getting so much attention funneled towards
| the thought-to-machine I/O problem now that LLMs are on the
| scene.
|
| If the improvements are beneficial now, then surely they were
| beneficial before.
|
| Prior to LLMs, though, we could have been making judicious use of
| simple algorithmic approaches to process natural language
| constructs as command language. We didn't see a lot of interest
| in it.
| samtheprogram wrote:
| Uh, we did...? Alexa, Siri, Ok Google...
|
| A lot of money was poured into that goal, but because every
| type of action required a handcrafted integration, they were
| either costly to develop or extremely limited. That's no longer
| the case.
| smlacy wrote:
| I find the "Can you ..." phrasing used in this demo/project
| fascinating. I would have expected the LLM to basically say "Yes
| I can, would you like me to do it?" to most of these questions,
| rather than directly and immediately executing the action.
| iamjackg wrote:
 | I'm genuinely curious why you think that! These models undergo
 | significant human-aided training in which people express a
 | preference for certain behaviours, and that feedback goes back
 | into the training process. I suspect the behaviour you mention
 | would be trained out pretty quickly, since most people would
 | find it unhelpful, but I'm really just guessing.
| jasonjmcghee wrote:
| If an employer were to ask an employee, "can you write up this
| report and send it to me" and they said, "yes I can, would you
| like me to do it?", I think it would be received poorly. I
| believe this is a close approximation of the relationship
 | people tend to have with ChatGPT.
| simonw wrote:
| I finally got around to trying this out right now. Here's how to
| run it using uvx (so you don't need to install anything first):
| uvx --from open-interpreter interpreter
|
| I took the simplest route and pasted in an OpenAI API key, then I
| typed: find largest files on my desktop
|
| It generated a couple of chunks of Python, asked my permission to
| run them, ran them and gave me a good answer.
|
| Here's the transcript:
| https://gist.github.com/simonw/f78a2ebd2e06b821192ec91963995...
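[Editor's note] The workflow described above can be sketched as a couple of shell commands. The `uvx` invocation is taken verbatim from the comment; supplying the key via the `OPENAI_API_KEY` environment variable (rather than pasting it interactively) is an assumption based on the standard OpenAI client convention, and the key value shown is a placeholder.

```shell
# Provide an OpenAI API key (placeholder value; use your own).
# Setting OPENAI_API_KEY up front is an alternative to pasting
# the key when the tool prompts for it.
export OPENAI_API_KEY="sk-..."

# Run open-interpreter via uvx, with no permanent install:
uvx --from open-interpreter interpreter
```

At the resulting prompt you can type a request such as "find largest files on my desktop"; per the comment, the tool generates Python, asks permission before running it, and reports the result.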
| swyx wrote:
| simon's writeup is here
| https://simonwillison.net/2024/Nov/24/open-interpreter/
|
| i always thought the potential for openinterpreter would be
| kind of like an "open source chatgpt desktop assistant" app
| with swappable llms. especially vision since that (specifically
| the one teased at 4o's launch
| https://www.youtube.com/watch?v=yJHw33cVeHo) has not yet been
 | released by oai. they made some headway with the "o1" device
 | that they teased... and then canceled.
|
 | instead all the demo use cases seem very trivial: "Plot AAPL
 | and META's normalized stock prices". "Add subtitles to all
 | videos in /videos" seems a bit more interesting, but honestly
 | trying to hack it in a "code interpreter" inline in a terminal
 | is strictly worse than just opening up cursor for me.
 |
 | i'd be interested to hear from anyone here who is an active
 | user of OI, and what you use it for.
___________________________________________________________________
(page generated 2024-11-24 23:01 UTC)