[HN Gopher] Jan: An open source alternative to ChatGPT that runs...
___________________________________________________________________
Jan: An open source alternative to ChatGPT that runs on the desktop
Author : billybuckwheat
Score : 95 points
Date : 2024-03-21 18:56 UTC (4 hours ago)
(HTM) web link (jan.ai)
(TXT) w3m dump (jan.ai)
| ShamelessC wrote:
| Quite a lot of polish and bragging about stars and tweets for an
| open source project. Is there hidden monetization of some sort?
| Perhaps VC funding?
| anewhnaccount2 wrote:
| According to their page https://jan.ai/team/ they aim to
| bootstrap.
| ShamelessC wrote:
| Awesome thanks.
| afian wrote:
| Fantastic product and excellent team!
| thedangler wrote:
| I don't see anything about it reading local documents like Excel,
| PDFs, or docs. Anyone see how this is accomplished?
| 0134340 wrote:
| Is it implied anywhere? That's a feature I'd love and also why
| I haven't bothered delving into LLMs very much; I didn't know
| there were any that could locally index your library and train
| on that data. I'd love to ask it a question and have it
| reference my local ebook library.
| throwitaway1123 wrote:
| This looks interesting. I would love a comparison between this
| product and LM Studio.
| ThrowawayTestr wrote:
| Where does the model come from?
| LordDragonfang wrote:
| Afaict, it doesn't have any inbuilt model, you just download
| one yourself or hook up to someone's API.
| dsp_person wrote:
| Scroll down on the main page:
|
| 01 Run local AI or connect to remote APIs
|
| 02 Browse and download models
| thesurlydev wrote:
| These kinds of apps are becoming a dime a dozen. It would be nice
| to know how this one differentiates itself. Not obvious from the
| website.
| viraptor wrote:
| It seems like that until you actually try to use them. Not many are
| truly polished, with support for formatting, history, and multiple
| endpoints. There are lots of trivial apps abandoned after a few
| days, but what are the genuinely functional, good-quality
| alternatives to this one? (That don't pass your query/answer
| through a third party for data collection.)
| extr wrote:
| I use https://www.typingmind.com/. It is paid, but I've found
| it to be a reliable front end to OpenAI/Claude/Google,
| supporting everything you mention. I haven't done any hyper
| detailed security audit but after watching network requests
| I'm pretty confident it's not sending my chats anywhere
| except to the relevant provider endpoints.
|
| Considering how much I use it, I've found it to be well worth
| the cost. The creator is pretty on top of model/API changes.
| karmajunkie wrote:
| i'll second that recommendation... i use it through the
| SetApp store and i've been very pleasantly surprised by its
| documentation and ability to work with most services.
| viraptor wrote:
| It's not a standalone app though. There's lots of web
| interfaces, but that's not the same. (I mean, it's a cool
| thing, but not what jan.ai is)
| moose44 wrote:
| Running LLMs locally always feels so awesome!
| pryelluw wrote:
| Would be nice if they listed system requirements. Their docs just
| say coming soon ...
| knodi123 wrote:
| Most of their docs say coming soon. And so does their whole wiki.
|
| Honestly, it feels like this site was launched a couple of days too
| soon.
| warkdarrior wrote:
| Their LLM is still generating copy for the website...
| christkv wrote:
| I've got to say, I've been using LM Studio as it exposes the models
| in the UI as well as through a local OpenAI-compatible server, so I
| can test different models against my workflows locally.
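|
| For anyone who hasn't tried that setup, pointing the standard
| openai client at the local server is all it takes. A rough sketch
| (the port and model name here are assumptions; use whatever your
| local server actually reports):
|
|     # Minimal sketch: chatting with a local OpenAI-compatible server
|     # (LM Studio, Jan's Nitro, etc.). Port and model are placeholders.
|     from openai import OpenAI
|
|     client = OpenAI(base_url="http://localhost:1234/v1",
|                     api_key="not-needed")  # local servers ignore the key
|
|     resp = client.chat.completions.create(
|         model="local-model",  # whatever identifier your server lists
|         messages=[{"role": "user",
|                    "content": "Summarize this workflow step."}],
|     )
|     print(resp.choices[0].message.content)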
| TheRealPomax wrote:
| Still hoping we'll eventually stop using Fibonacci to show off
| recursion, because that's one of those examples where the _maths_
| might be expressed as a recursive relation, but the _implementation_
| should never be =)
|
| Good AI would go "you don't want that, that's horribly
| inefficient. Here's an actually performant implementation based
| on the closed-form expression".
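|
| Roughly what I'd expect it to suggest, sketched in Python (the
| closed form is Binet's formula; the float rounding only stays exact
| up to around n = 70, so a plain loop is included as well):
|
|     import math
|
|     def fib_naive(n: int) -> int:
|         # The textbook exponential-time recursion demos keep showing.
|         return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)
|
|     def fib_closed_form(n: int) -> int:
|         # Binet's formula; exact only up to roughly n = 70 with doubles.
|         phi = (1 + math.sqrt(5)) / 2
|         return round(phi ** n / math.sqrt(5))
|
|     def fib_iter(n: int) -> int:
|         # Exact for any n: O(n) time, O(1) space.
|         a, b = 0, 1
|         for _ in range(n):
|             a, b = b, a + b
|         return a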
| zopa wrote:
| Nah, good AI would run in the compiler and optimize the
| recursion into something fast.
| pimlottc wrote:
| I'm going to assume this is not an Australian company...
|
| https://www.youtube.com/watch?v=2akt3P8ltLM
| badRNG wrote:
| Looks like they are based out of Singapore
| LeoPanthera wrote:
| Is this a fork of "LM Studio"? The UI is suspiciously similar,
| even down to the layout of the labels.
| onion2k wrote:
| I use Jan to run Mistral locally. It works well for what I need
| (which amounts to playing with models).
| FuriouslyAdrift wrote:
| Many LLMs may be run locally with GPT4All...
|
| https://gpt4all.io/
| LeoPanthera wrote:
| Unless I just can't find it, there seems to be no setting for
| customizing the prompt format for local models. You can edit the
| prompt itself, but not the format of the prompt or the subsequent
| messages. This would make using many models difficult, or give
| poor results, since they don't all use the same format.
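|
| To illustrate why that matters, here's roughly what one user turn
| looks like under two common template styles (a sketch only; the
| exact special tokens vary per model, so the model card is the
| authority):
|
|     # Sketch: the same user message wrapped in two common prompt
|     # templates. Mixing these up is a classic cause of poor output.
|     user_msg = "What's the capital of France?"
|
|     chatml = (
|         "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
|         f"<|im_start|>user\n{user_msg}<|im_end|>\n"
|         "<|im_start|>assistant\n"
|     )
|
|     alpaca = (
|         "Below is an instruction that describes a task. "
|         "Write a response that appropriately completes the request.\n\n"
|         f"### Instruction:\n{user_msg}\n\n"
|         "### Response:\n"
|     )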
| kkfx wrote:
| I tried some LLMs on my notes and, well... they were unable to give
| me insights that are hard to spot, like following the flow of notes
| to identify patterns, finding similar notes from the past, and so
| on. In ALL cases classic tags/ripgrep full-text search was far
| quicker and equally or more effective.
|
| Long story short: LLMs might be useful on a hyper-big mass of
| information, like a new kind of search engine that tries to achieve
| a semantic goal by mimicking it. But not more than that, IMVHO.
| Marginally, LLMs might help the computer-illiterate manage their
| files, see https://www.theverge.com/22684730/students-file-
| folder-direc... but I doubt they can go any further for the next
| 5+ years at least.
| lagrange77 wrote:
| I was wondering if it uses something like vLLM[0] or
| Llama.cpp[1].
|
| Seems to be Llama.cpp via 'Nitro', which was discussed here
| before [2].
|
| [0] https://github.com/vllm-project/vllm
|
| [1] https://github.com/ggerganov/llama.cpp
|
| [2] https://news.ycombinator.com/item?id=38887531
___________________________________________________________________
(page generated 2024-03-21 23:01 UTC)