[HN Gopher] Tabby: Self-hosted AI coding assistant
___________________________________________________________________
Tabby: Self-hosted AI coding assistant
Author : saikatsg
Score : 80 points
Date : 2025-01-12 18:43 UTC (4 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| thecal wrote:
| Unfortunate name. Can you connect Tabby to the OpenAI-compatible
| TabbyAPI? https://github.com/theroyallab/tabbyAPI
| mbernstein wrote:
| At least per GitHub, the TabbyML project is older than the
| TabbyAPI project.
| mynameisvlad wrote:
| Also, _wildly_ more popular, to the tune of several orders
| of magnitude more forks and stars. If anything, this
| question should be asked of the TabbyAPI project.
| karolist wrote:
| I'm not sure what's going on with TabbyAPI's GitHub
| metrics, but exl2 quants are very popular among the
| Nvidia local LLM crowd, and TabbyAPI comes up in tons of
| reddit posts from people using it. Might just be my
| bubble, and I'm not saying the metrics are inaccurate;
| I'm just surprised such a useful project has under 1k
| stars. On the flip side, LLMs will hallucinate about
| TabbyML if you ask them TabbyAPI-related questions, so
| I'd agree the naming is unfortunate.
| Medox wrote:
| I thought that Tabby, the SSH client [1], had gotten AI
| capabilities...
|
| [1] https://github.com/Eugeny/tabby
| wsxiaoys wrote:
| Never imagined our project would make it to the HN front page
| on a Sunday!
|
| Tabby has undergone significant development since its launch two
| years ago [0]. It is now a comprehensive AI developer platform
| featuring code completion and codebase chat, with a team [1] /
| enterprise focus (SSO, access control, user authentication).
|
| Tabby's adopters [2][3] have discovered that Tabby is the only
| platform providing a fully self-service onboarding experience as
| an on-prem offering. It also delivers performance that rivals
| other options in the market. If you're curious, I encourage you
| to give it a try!
|
| [0]: https://www.tabbyml.com
|
| [1]: https://demo.tabbyml.com/search/how-to-add-an-embedding-
| api-...
|
| [2]: https://www.reddit.com/r/LocalLLaMA/s/lznmkWJhAZ
|
| [3]: https://www.linkedin.com/posts/kelvinmu_last-week-i-
| introduc...
| maille wrote:
| Do you have a plugin for MSVC?
| tootie wrote:
| Is it only compatible with Nvidia and Apple? Will this work
| with an AMD GPU?
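|
| The docs excerpt quoted in the next comment mentions Docker
| hosting with CUDA or ROCm, so AMD GPUs appear to be covered
| via ROCm builds. A hedged sketch of what that invocation
| might look like; the --device rocm flag and a ROCm-enabled
| image are assumptions inferred from that excerpt rather than
| confirmed against the docs:
|
| # Pass the AMD GPU into the container (standard ROCm
| # passthrough), then select the ROCm backend.
| docker run -it --device=/dev/kfd --device=/dev/dri \
|   -p 8080:8080 -v $HOME/.tabby:/data \
|   tabbyml/tabby serve --model StarCoder-1B --device rocm \
|   --chat-model Qwen2-1.5B-Instruct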
| thih9 wrote:
| As someone unfamiliar with local AIs and eager to try them, how
| does the "run tabby in 1 minute"[1] setup compare to e.g.
| ChatGPT's free 4o-mini? Can I run that docker command on a
| medium-specced MacBook Pro and have an AI that is comparably
| fast and capable? Or are we not there (yet)?
|
| Edit: looks like there is a separate page with instructions for
| macbooks[2] that has more context.
|
| > The compute power of M1/M2 is limited and is likely to be
| sufficient only for individual usage. If you require a shared
| instance for a team, we recommend considering Docker hosting with
| CUDA or ROCm.
|
| [1]: https://github.com/TabbyML/tabby#run-tabby-in-1-minute
| docker run -it --gpus all -p 8080:8080 -v $HOME/.tabby:/data \
|   tabbyml/tabby serve --model StarCoder-1B --device cuda \
|   --chat-model Qwen2-1.5B-Instruct
|
| [2]: https://tabby.tabbyml.com/docs/quick-
| start/installation/appl...
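|
| For individual use on Apple silicon, the quick-start page
| above describes running Tabby natively with the Metal
| backend rather than through Docker. A minimal sketch,
| assuming the Homebrew tap and flags from the docs are still
| current:
|
| # Install the standalone binary, then serve with the Metal
| # device so inference runs on the Apple GPU.
| brew install tabbyml/tabby/tabby
| tabby serve --device metal --model StarCoder-1B \
|   --chat-model Qwen2-1.5B-Instruct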
| eric-burel wrote:
| Side question : open source models tend to be less "smart" than
| private ones, do you intend to compensate by providing a better
| context (eg query relevant technology docs to feed context)?
| mjrpes wrote:
| What is the recommended hardware? GPU required? Could this run OK
| on an older Ryzen APU (Zen 3 with Vega 7 graphics)?
| jslakro wrote:
| Duplicate of https://news.ycombinator.com/item?id=35470915
| leke wrote:
| So does this run on your personal machine, or can you install it
| on a local company server and have everyone in the company
| connect to it?
| wsxiaoys wrote:
| Tabby is engineered for team usage and intended to be deployed
| on a shared server. However, with robust local computing
| resources, you can also run Tabby on your individual machine.
| Check https://www.reddit.com/r/LocalLLaMA/s/lznmkWJhAZ to see
| it in action on a local setup with a 3090.
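|
| For the shared-server setup, each developer points their
| editor extension at the deployment URL. A sketch of the
| client-side step, assuming the agent config path and keys
| (~/.tabby-client/agent/config.toml with a [server] section)
| still match the current docs; the host below is hypothetical:
|
| # Write the client-side agent config (path and keys assumed
| # from the docs; endpoint host and token are placeholders).
| mkdir -p ~/.tabby-client/agent
| cat > ~/.tabby-client/agent/config.toml <<'EOF'
| [server]
| endpoint = "http://tabby.internal.example.com:8080"
| token = "auth_..."
| EOF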
___________________________________________________________________
(page generated 2025-01-12 23:00 UTC)