[HN Gopher] Show HN: Adding Mistral Codestral and GPT-4o to Jupy...
___________________________________________________________________
Show HN: Adding Mistral Codestral and GPT-4o to Jupyter Notebooks
Hey HN! We've forked Jupyter Lab and added AI code generation
features that feel native and have all the context about your
notebook. You can see a demo video (2 min) here:
https://www.tella.tv/video/clxt7ei4v00rr09i5gt1laop6/view Try a
hosted version here: https://pretzelai.app

Jupyter is by far the most used data science tool. Despite its
popularity, it still lacks good code-generation extensions. The
flagship AI extension _jupyter-ai_ lags far behind modern AI code
generation and understanding tools (like https://www.continue.dev
and https://www.cursor.com) in both features and UX. Also, GitHub
Copilot _still_ isn't supported in Jupyter, more than 2 years after
its launch.

We're solving this with Pretzel. Pretzel is a free and open-source
fork of Jupyter. You can install it locally with "pip install
pretzelai" and launch it with "pretzel lab". We recommend creating
a new Python environment if you already have Jupyter Lab installed.
Our GitHub README has more information:
https://github.com/pretzelai/pretzelai

For our first iteration, we've shipped 3 features:

1. Inline tab autocomplete: This works similarly to GitHub Copilot.
You can choose between Mistral Codestral or GPT-4o in the settings.

2. Cell-level code generation: Click "Ask AI" or press Cmd+K /
Ctrl+K to instruct the AI to generate code in the active Jupyter
cell. We provide relevant context from the current notebook to the
LLM with RAG. You can refer to existing variables in the notebook
using the @variable syntax (for dataframes, this passes the column
names to the LLM - see the sketch below).

3. Sidebar chat: Clicking the blue Pretzel icon on the right
sidebar opens this chat (Ctrl+Cmd+B / Ctrl+Alt+B). This chat always
has the context of your current cell or any selected text. Here
too, we use RAG to send relevant context from the current notebook
to the LLM.

All of these features work out-of-the-box via our "AI Server", but
you have the option of using your own OpenAI API key. This can be
configured in the settings (Menu Bar > Settings > Settings Editor >
Search for "Pretzel"). If you use your own OpenAI API key but don't
have a Mistral API key, be sure to select OpenAI as the inline code
completion model in the settings.
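
To make the @variable context idea concrete, here's a rough Python
sketch of how dataframe context could be assembled for the LLM.
This is purely illustrative - the function and the prompt format
are made up for this post, not our actual implementation:

    import pandas as pd

    def build_variable_context(variables: dict) -> str:
        """Collect lightweight descriptions of @-referenced
        variables to include in the LLM prompt."""
        parts = []
        for name, value in variables.items():
            if isinstance(value, pd.DataFrame):
                # for dataframes, only column names and dtypes are
                # sent to the LLM, never the data itself
                cols = ", ".join(
                    f"{c} ({value[c].dtype})" for c in value.columns
                )
                parts.append(
                    f"@{name} is a DataFrame with columns: {cols}"
                )
            else:
                parts.append(f"@{name} is a {type(value).__name__}")
        return "\n".join(parts)

    # e.g. for a prompt like "plot monthly revenue from @sales"
    sales = pd.DataFrame({"month": ["Jan", "Feb"],
                          "revenue": [120.0, 95.5]})
    print(build_variable_context({"sales": sales}))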

These features are just a start. We're building a modern version of
Jupyter. Our roadmap includes frictionless, realtime collaboration
(think pair-programming, comments, version history), full-fledged
SQL support (both in code cells and as a standalone SQL IDE), a
visual analysis builder, a VSCode-like coding experience powered by
Monaco, and 1-click dashboard creation and sharing straight from
your notebooks.

We'd love for you to try Pretzel and send us any feedback, no
matter how minor (see my bio for contact info, or file a GitHub
issue here: https://github.com/pretzelai/pretzelai/issues)
Author : prasoonds
Score : 179 points
Date : 2024-07-02 14:23 UTC (8 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| ramonverse wrote:
| Ramon here, the other cofounder of Pretzel! Quick update: Based
| on some early feedback, we're already working on adding support
| for local LLMs and Claude Sonnet 3.5. Happy to answer any
| questions!
| westurner wrote:
| braintrust-proxy: https://github.com/braintrustdata/braintrust-
| proxy
|
| LocalAI: https://github.com/mudler/LocalAI
|
| E.g. promptfoo and chainforge have multi-LLM workflows.
|
| Promptfoo has a YAML configuration for prompts, providers, etc.:
| https://www.promptfoo.dev/docs/configuration/guide/
|
| What is the system prompt, and how does a system prompt also
| bias an analysis?
|
| /? "system prompt"
| https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
| prasoonds wrote:
| Thank you for the links! We'll take a look.
|
| At the moment, the system prompt is hardcoded. I don't think it
| should bias analyses, because our goal is usually returning code
| that does something specific (e.g. "calculate normalized rates
| for column X grouped by column Y") as opposed to something
| generic ("why is our churn rate going up?"). So the idea is that
| the operator is responsible for asking the right questions - the
| AI merely acts as a facilitator to generate and understand code.
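|
| For example, a prompt like "calculate normalized rates for
| column X grouped by column Y" would typically come back as a
| short pandas snippet along these lines (illustrative only - the
| dataframe and column names are made up, not output from our
| actual prompt):
|
|     import pandas as pd
|
|     # df is an existing dataframe with columns "X" and "Y"
|     df = pd.DataFrame({"Y": ["a", "a", "b"],
|                        "X": [1.0, 3.0, 2.0]})
|
|     # normalized rate of X within each Y group
|     # (values in each group sum to 1)
|     df["X_normalized"] = df.groupby("Y")["X"].transform(
|         lambda s: s / s.sum()
|     )
|     print(df)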
|
| Also, tangentially, we do want to allow users some kind of local
| prompt management with easy access, potentially via slash
| commands. In our testing so far, the existing prompt largely
| works (it took us a fair bit of trial and error to find
| something that mostly works for common data use cases), but
| we're hitting the limitations of a fixed system prompt, so these
| links will be useful.
| marcklingen wrote:
| Big fan of LiteLLM Proxy and LiteLLM Python SDK to connect to
| various local models. Might be helpful here as well
| prasoonds wrote:
| Thanks Marc! We'll check them out. So far, with our limited
| experience with local models, we'd simply been thinking of using
| Ollama, so thanks for the heads up!
| lschneider wrote:
| Github Copilot is the most useful tool I've found in a long time
| and having that in Jupyter Notebooks is just awesome. I've been
| missing that for quite some time. Great work guys!
| skybrian wrote:
| You can also open Jupyter notebook files in VS Code, which
| would be another way to get AI autocomplete. I'm not enough of
| a Jupyter user to know whether it would make sense to use VS
| Code all the time.
| prasoonds wrote:
| Yeah, this is definitely a good way to access AI code completion
| (inline or otherwise) in Jupyter notebooks. In fact, I know some
| data folks who've been using Jupyter since day 1 who are
| switching to VSCode simply because their company buys a Copilot
| license for everyone, and they really miss it in their Jupyter
| workflow.
| prasoonds wrote:
| Agree. We tried getting GitHub Copilot to work with Jupyter, but
| GitHub doesn't have an official API. We took some time to
| reverse engineer an implementation from the neovim GH Copilot
| extension [1] and from Zed [2], but found it too flaky and too
| much trouble in the end.
|
| Meanwhile, we also found a better speed/quality tradeoff with
| Codestral (since it has a fill-in-the-middle version, unlike a
| general-purpose LLM), so we decided to go with Codestral. This
| was inspired by continue.dev using Codestral for tab
| completion :)
|
| [1] https://github.com/github/copilot.vim [2]
| https://zed.dev/blog/copilot
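|
| To illustrate what fill-in-the-middle means in practice, here's
| a minimal sketch of a Codestral FIM call. The endpoint and field
| names are based on my reading of Mistral's API docs - treat them
| as assumptions, not our actual client code:
|
|     import os
|     import requests
|
|     # the text before and after the cursor in the active cell
|     prefix = "import pandas as pd\ndf = pd.read_csv('data.csv', "
|     suffix = ")\nprint(df.head())"
|
|     resp = requests.post(
|         "https://api.mistral.ai/v1/fim/completions",
|         headers={
|             "Authorization":
|                 f"Bearer {os.environ['MISTRAL_API_KEY']}"
|         },
|         json={
|             "model": "codestral-latest",
|             "prompt": prefix,   # code before the cursor
|             "suffix": suffix,   # code after the cursor
|             "max_tokens": 64,
|         },
|         timeout=30,
|     )
|     # the returned completion is what gets shown as the inline
|     # ghost-text suggestion
|     print(resp.json())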
| mritchie712 wrote:
| > GitHub Copilot still isn't supported in Jupyter
|
| What do you mean by this? I've been using Copilot in VS Code
| .ipynb files for over a year now.
| wolftickets wrote:
| I assume via Jupyter Notebook or Lab (not VS Code running it)
| Bjartr wrote:
| They likely mean Jupyter Notebook the application, not the
| notebook file format.
| prasoonds wrote:
| It's as others say - VSCode has support for Copilot, but most
| data scientists (and especially analysts) who don't spend most
| of their day in a text editor still use Jupyter Lab (or Notebook
| - I mean the software, not the file format). With Codestral,
| we've found similarly good completions (sometimes better than
| Copilot's), but at a much better speed and cost.
| skybrian wrote:
| Are the file formats the same? Are there any Pretzel-specific
| extensions?
| prasoonds wrote:
| We're just a fork of Jupyter, so _everything_ - notebook files,
| keybindings, extensions, settings - should just work.
|
| We pull all of your config from the ~/.jupyter folder, so you
| should be able to switch between Jupyter and Pretzel from
| different Python environments (though you might see some
| warnings).
| carreau wrote:
| Curious about the limitations that made you fork it instead of
| making an extension.
| prasoonds wrote:
| So we discuss this briefly in our FAQ, but let me try to expand
| on it.
|
| Our goal is to make a modern literate programming tool. On a
| surface level, a tool like that would end up looking very
| similar to Jupyter, though with better features. We've mentioned
| some things we'd like to have in this final tool in our README
| and also in the post above.
|
| Our first thought was to make a tool from scratch. The challenge
| is that it's very hard to get people to switch, so we had to go
| where people already are - that meant Jupyter.
|
| We could've made this _one feature_ an extension with some
| difficulty (in fact, in our early experiments we started by
| making an extension). It would have some downsides - we wouldn't
| have granular control over certain core Jupyter behaviours like
| we do right now (e.g. we wanted to allow creating hidden folders
| to store some files). But we probably could have made a
| 95%-working version of Pretzel as a Jupyter extension.
|
| The bigger reason we chose to fork is that, down the line, we
| want to completely change the code execution model to be
| DAG-based, to allow for reproducible notebooks (similar to
| https://plutojl.org/, for example). Similarly, we want to
| completely remove CodeMirror and replace it with Monaco (the
| core editor engine in VSCode) to provide a more IDE-like
| experience in Jupyter. These things simply couldn't have been
| done as extensions.
| TidbitsTornado wrote:
| This is a great implementation by your team + contributors.
| Simple but effective. And it's nice to see you've kept it open
| source, unlike some other Show HN submissions where they take
| open source work, make it closed, change a few things, and claim
| they've created something great.
|
| I'm curious to see if you continue building out some other
| features. While these are great features (copilot, chat, etc.),
| I'd think most users would expect their IDE to have them out of
| the box (or with an extension) these days.
| williamstein wrote:
| > And it's nice to see you've kept it open source, unlike some
| other Show HN submissions where they take open source work,
| make it closed, change a few things, and claim they've created
| something great.
|
| They seem to have taken a BSD-3-Clause-licensed project and
| changed it to an AGPLv3-licensed one. That's not the same thing,
| but it's similar to what you're concerned about.
| prasoonds wrote:
| This is true. All newly added code is licensed under AGPLv3.
| But I fail to see how it's the same thing as modifying an
| open-source tool and re-selling it as a closed-source tool. This
| is what the AGPL gives us - anyone can use Pretzel however they
| want. They can even re-package and re-sell it if they want, _so
| long_ as they too open-source their modifications and
| improvements.
|
| Selecting the license for an open-source tool backed by a
| company is tricky. You want your code to be open-source for its
| benefits (e.g. for us, one benefit is building trust with people
| working with sensitive data). But the history of open-source
| tools is full of tools that another company just started
| reselling without doing any of the work (Sentry, MongoDB, etc.).
| So you need to find a balance. AGPLv3 strikes the right balance
| for us.
| williamstein wrote:
| > ... I fail to see how it's the same thing ...
|
| You're right that it is not the same thing, which is why I
| wrote "That's not the same thing [...]" in the comment
| you're responding to. I have done the same as you guys
| (building an AGPLv3 or 'worse' product on top of BSD
| licensed code) many, many times! Anyway, what you're doing
| is really exciting!
| prasoonds wrote:
| That's fair, I was mostly responding to "but it's similar to
| what you're concerned about" because I didn't think the concerns
| are the same (but I can see your perspective on it too! While we
| are "giving away" the code, we're definitely imposing some
| limitations).
|
| Thanks for the kind words, we're excited to be building this :)
| williamstein wrote:
| I would also be happy to video chat with you guys
| anytime, since we've built similar things over the years
| (wstein at sagemath.com).
| lmeyerov wrote:
| How does that even work, copying in and relicensing someone's
| bsd3 code as your own AGPLv3?
| prasoonds wrote:
| You're right, that's not how it would work! If you look at our
| license - all the Jupyter code stays BSD3. If we modify any BSD3
| code, the modified code stays BSD3. All the _new code_ we write
| - in separate files - is AGPLv3, however, and is clearly
| delineated in the repo via the file headers.
| prasoonds wrote:
| Thanks for the kind words. Keeping Pretzel open-source was
| important to us - partly for trust reasons. When most people use
| Jupyter, they do so with sensitive data. Making a closed-source
| tool simply wouldn't work. _I_ wouldn't have trusted a
| closed-source Jupyter alternative with my company's data unless
| the counterparty was huge and well-known.
|
| To your second point - I completely agree that most users would
| expect these features from their IDEs today. But only two IDEs
| support Jupyter notebooks: VSCode and PyCharm. You can certainly
| use them for notebook work, but most AI extensions written for
| VSCode wouldn't be optimized for notebook work (e.g. GH Copilot
| apparently has difficulty completing code across different
| cells, per a friend). Secondly, to your point, this is just a
| start - we're going to be building a lot more data
| analysis-specific features that don't exist in any IDE. I think
| there's a decent space for a tool like this.
| williamstein wrote:
| There are many other Jupyter notebooks with extensive AI
| integration. These are less (or not at all) open source, but more
| mature in some ways, having been iterated on for over a year:
|
| - https://noteable.io/ -- pretty good, but then they got
| acqui-hired out of existence
|
| - https://deepnote.com -- also extensive AI integration and
| realtime collaboration
|
| - https://github.com/jupyterlab/jupyter-ai -- a very nice
| standard open source extension for gen AI in Jupyter, from
| Amazon. JupyterLab of course also has fairly mature realtime
| collaboration now.
|
| - https://colab.google/ -- has great AI integration but of course
| only with Google-hosted models
|
| - https://cocalc.com -- very extensive AI integration everywhere
| with all the main hosted models, mostly free or pay as you go;
| also has realtime collaboration. (Disclaimer: I co-authored
| this.)
|
| - VS Code has a great builtin Jupyter notebook, as other people
| have mentioned.
|
| Am I missing any?
| stared wrote:
| https://www.cursor.com/ - an AI-first VS Code clone
|
| VS Code (and Cursor) has such nice Jupyter support that I find
| it much better to use for my workflow than any dedicated
| solution for Jupyter notebooks only.
| dakshgupta wrote:
| IMO data scientists are often used to the Jupyter form factor
| rather than the editor form factor, so I see why they would
| prefer this.
| stared wrote:
| I am a data scientist myself, one who moved from academia,
| vide https://p.migdal.pl/blog/2016/03/data-science-intro-
| for-math....
|
| I used Jupyter Notebook before it was popular and back when
| I was a PhD student. I pushed in a few places for the
| unorthodox way of exploring data in a browser. Now, I am
| back - but only thanks to wonderful code editors and their
| good support of Jupyter Notebooks. I recommend VSC (or,
| this year, Cursor) as the default environment for data sci.
| prasoonds wrote:
| That's a cool blogpost! I'm mostly using Cursor now (just
| waiting until someone makes a kick-ass Emacs package so I
| can switch back!) so I can definitely see your
| perspective.
|
| I'd be curious to hear a bit more about the kind of work
| you do that made you switch. Also if there's anything you
| miss in VSC/Cursor vs Jupyter. If you don't mind a small
| email exchange, let me know and I'll drop you a message
| :)
| prasoonds wrote:
| Agree with Daksh in the sibling comment. I think it's like you
| said - different people have different workflows, and some might
| prefer using VSCode. IME though, most data scientists (and all
| data analysts) I've worked with preferred using the
| company-hosted internal Jupyter instance for their work.
|
| Also, as we build more features, we're definitely going in the
| direction of more analytics workloads (live collaboration,
| leaving comments, Google Docs-style versioning, fully AI-driven
| analyses similar to OpenAI's Code Interpreter mode, etc.) and
| with these features, I think there will be a clear divergence of
| feature set between VSCode/PyCharm and Pretzel.
|
| If I may ask, are you more on the engineering side (MLE) or more
| on the data side (Data Analyst)? EDIT: Just saw your other
| comment!
| prasoonds wrote:
| Thank you for the list - I think I've come across all of these
| in my research! I'll try to highlight the differences for each.
|
| - https://noteable.io/ - as you say, it doesn't exist anymore
|
| - https://deepnote.com - Deepnote is closed source sadly - you
| can't run it locally, you can't tweak it, you need to learn a
| new interface _and_ switch to it
|
| - https://github.com/jupyterlab/jupyter-ai - I actually
| mentioned this in the post, but in my experience the UX and
| features are far behind what we've built already. I'd love for
| anyone who's tried jupyter-ai to give us a shot and let me know
| what we're missing! The plus side of jupyter-ai is of course
| that it supports way more models and the codebase is a lot more
| hackable than what we've built.
|
| - https://colab.google/ - closed-source, with similar challenges
| as Deepnote. Another big challenge is that if you want to use
| Colab as a company, AFAICT you need to use their enterprise
| version (so that you can have native data collectors, support
| guarantees, etc.), and that only works with GCP - so if you're
| an AWS shop, this might be a deal-breaker.
|
| - https://cocalc.com - I hadn't used it so far, but congrats on
| a great project! Will check it out. I didn't look in detail, but
| first impressions make it look like a fairly different interface
| from Jupyter. One of our goals was to go where the users already
| are - that meant Jupyter. So that's definitely a major
| difference.
|
| - VSCode - as I've mentioned elsewhere, we're targeting more of
| an analytics use case with the features we're building. VSCode
| has AI features, of course! But we'll look quite different once
| we build more items on the roadmap :)
| williamstein wrote:
| > Didn't look in detail but first impressions makes it look
| like a fairly different interface from Jupyter.
|
| That is correct, in that it is a completely different
| implementation. Unlike Deepnote and Colab, we try to maintain
| the same keyboard shortcuts and other semantics as
| JupyterLab, as much as we can.
|
| If you don't already, we would love it if you came to the
| JupyterLab weekly dev meeting and demoed pretzelai:
| https://hackmd.io/Y7fBMQPSQ1C08SDGI-fwtg?view People from
| Colab, VS Code, etc. regularly come to the meeting and demo
| their JupyterLab-related notebook work, and it's really good
| for the community.
| prasoonds wrote:
| Oh cool! I'll definitely try to make it to one of the
| meetings :)
| spmurrayzzz wrote:
| marimo is very good, been using it for a few months now and
| have switched over to it for most of my notebook-related tasks
| (it ships with copilot support)
|
| https://github.com/marimo-team/marimo
| punkspider wrote:
| Not sure if https://hex.tech fits here?
| thehours wrote:
| > Am I missing any?
|
| DataSpell: https://www.jetbrains.com/dataspell
| dakshgupta wrote:
| Curious why you went with Codestral for autocomplete - does it
| outperform other local models? How is the performance compared
| to GPT or Claude for autocomplete?
|
| Any plans to finetune Codestral for this specific use case?
| prasoonds wrote:
| So, we were tipped off to Codestral being really good by
| continue.dev - for reference, that's a VSCode extension that
| gives you similar features to Cursor. After we trialled it
| head-to-head against GPT-4o for fill-in-the-middle completion,
| in my experience (purely vibe-based) it produced better
| completions roughly twice as fast as GPT-4o.
|
| We haven't tried it against Sonnet 3.5 yet - my hunch is that in
| the speed/quality/cost space, Sonnet will end up doing better
| than Codestral for some folks.
|
| Against general-purpose local models (taking Llama-70B as a
| high-water mark), Codestral does far better on code-related
| tasks while being less than 1/3rd the size (22B!). That said,
| I'm definitely excited to try out DeepSeek Coder v2 - by all
| reports, it's an amazing model for code completion and will
| likely also beat out Codestral.
|
| I don't think we're planning to fine-tune Codestral though (or
| any model, for that matter). The latest models keep becoming
| faster, better _and_ cheaper AND they already work quite well.
| My thinking at this time is that waiting it out and having a big
| AI lab make a more capable general model is a better strategy.
| renewiltord wrote:
| I just use PyCharm and Copilot plugin. Works like a charm.
| prasoonds wrote:
| Yeah, PyCharm and VSCode are definitely great options (though
| PyCharm is paid and VSCode AI extensions aren't
| notebook-tailored). If you ever get a chance, I'd love to get
| your feedback on Pretzel - I think Codestral is a better and
| faster inline completion model than GH Copilot's GPT-4-class
| model, plus I think we might do context-relevant questions
| better :)
| renewiltord wrote:
| I'll give it a crack over the holiday. My primary mechanism
| is that I talk inline to the program.
|         import pandas as pd  # read a csv in with a pipe delimiter
|         pd.read...  # AI fills this in
|
| I saw your demo and I get the "Enter AI mode" thing but I
| like this flow, esp since I use ideavim. Perhaps the only
| improvement would be if I had another mode in the editor for
| AI that was keyboard accessible.
| prasoonds wrote:
| Interesting! We have an inline autocomplete as well (that's the
| one that uses Codestral). So in your example, you'd see an
| autocompletion suggestion after you type pd.read, and hitting
| Tab will autocomplete the line (or many lines, if there's enough
| context to generate that much text).
|
| The AI mode/box with Cmd+K is for more complex prompts,
| multi-line code chunks and such. We've tried to make
| _everything_ completely keyboard accessible btw, including the
| sidebar, so that you never have to use the mouse :)
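|
| For the pipe-delimiter example above, the accepted inline
| suggestion would end up looking roughly like this (illustrative
| only - the exact completion depends on the model):
|
|     import pandas as pd
|
|     # read a csv in with a pipe delimiter
|     pd.read_csv("data.csv", sep="|")
|     # ^ the completion fills in everything after "pd.read"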
| mathiasn wrote:
| Have you seen Livebook? Best Jupyter Notebook ever!!
| https://livebook.dev/
| prasoonds wrote:
| Hey Mathias! It was fun chatting about Livebook the other day,
| and yes, I'm definitely looking at it for inspiration! Alas,
| it's an Elixir-only notebook and, so far as I know, there are
| very few data folks using Elixir, so it might be a hard sell.
| morsch wrote:
| These editors all focus on programming, does anybody have a
| recommendation for more general note-taking?
|
| I'd like to do things like organizing very rough notes, having
| them reformatted according to a general template, apply changes
| according to a prompt, maybe ask questions that refer to a
| collection of notes, ...
| chad1n wrote:
| I don't really get the appeal of this - I'd just use VSCode with
| Jupyter if I really wanted "AI" integration, since I can then
| access the whole ecosystem of extensions. The idea isn't that
| bad, but it lacks purpose.
| widepeepo8 wrote:
| Codeium (https://codeium.com/) already supports this, along with
| VSCode Jupyter notebook extensions. It has 400k downloads on the
| VSCode extension store. I don't really see the point of this
| when Codeium already exists.
| prasoonds wrote:
| It's true - the VSC family of editors (VSCode, Codeium and
| Cursor) all let you use AI autocomplete, question answering,
| etc. with your notebook code through various extensions.
| However, lots of data scientists and analysts prefer using
| Jupyter Lab or Colab or, in general, notebook-like interfaces.
| Plus, this is just a start - we're going to be adding way more
| features that will make the differentiation clearer (see my
| other comment earlier today:
| https://news.ycombinator.com/item?id=40858509)
|
| As of now though - there _are_ plenty of people who do use
| Jupyter and we hope Pretzel - as it stands today - can already
| be of help to them.
| superkuh wrote:
| At this point I'm almost afraid to ask but my attempts to figure
| it out have failed. What is a Jupyter notebook? Where is the code
| running? On your computer? On someone else's computer?
| vifon wrote:
| It's usually run as a local web application with the browser
| running on the same machine as the backend, though it's
| possible to bind it to non-localhost interfaces.
| prasoonds wrote:
| Ah sorry about this. This Show HN was targeted towards folks
| who have a passing familiarity with Jupyter Notebooks but I
| should have explained a bit more.
|
| Jupyter is a web application that lets a user execute Python
| code in either a local or a remote "kernel" - which is just a
| Python process with a communication layer.
|
| You can then do literate programming (meaning run some code and
| immediately see the results, including plots and tables) within
| this web interface. It's basically a much fancier REPL. The code
| can run either locally on your machine or remotely in the
| aforementioned Python "kernel".
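|
| For instance, a single notebook cell might contain something
| like this, with the resulting summary table rendered directly
| below the cell (a toy example, not from the video below):
|
|     import pandas as pd
|
|     df = pd.DataFrame({"city": ["Paris", "Tokyo"],
|                        "population_m": [2.1, 14.0]})
|     df.describe()  # the table renders right under the cell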
|
| Here's a quick introduction to Notebooks that I found on
| YouTube: https://www.youtube.com/watch?v=jZ952vChhuI
| localfirst wrote:
| Seems like the problem I am experiencing right now is that I'm
| overwhelmed by the sheer number of tools and choices - it's
| frankly exhausting.
|
| There is a feeling that I can do anything and everything with
| AI, but in reality I can't do anything because I can't
| prioritize and choose anymore due to choice fatigue.
| prasoonds wrote:
| That's a fair concern and one I've been through myself. I think
| what we've tried to do is honestly a little bit the opposite -
| hundreds of thousands of people already use Jupyter for data
| work, and we started with the idea of going where the users are
| precisely so that they don't have to switch tools.
|
| By making a fork that can be installed in one line of code,
| we're hoping that we don't put Jupyter users through decision
| fatigue for _yet another_ dev tool. Instead, the idea is to
| simply make their existing tool better.
___________________________________________________________________
(page generated 2024-07-02 23:00 UTC)