[HN Gopher] Ava PLS: open-source application for running languag...
___________________________________________________________________
Ava PLS: open-source application for running language models
locally
Author : cztomsik
Score : 61 points
Date : 2023-12-08 18:53 UTC (1 day ago)
(HTM) web link (avapls.com)
(TXT) w3m dump (avapls.com)
| iFire wrote:
| LICENSE - MIT
|
| https://github.com/cztomsik/ava/blob/main/LICENSE.md
| codetrotter wrote:
| In other words, the best family of licenses in the world
| rgbrgb wrote:
| looks nice, congrats on open-sourcing it cztomsik! Good to
| have another reference implementation; I'm excited to dig
| through and check out how you're integrating llama.cpp and
| SwiftUI. I went with an architecture based on hitting
| llama.cpp's examples/server on localhost.
|
| Would you be interested in integrating with otherbrain [0] to
| help build an open human-feedback data set? Happy to help
| with PRs if so.
|
| If you're interested in supporting open model training, we'd love
| to have you!
|
| [0]: https://www.otherbrain.world/human-feedback
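The "hitting examples/server on localhost" architecture mentioned above
can be sketched roughly as follows, assuming a llama.cpp examples/server
instance already running on port 8080; the /completion endpoint and its
prompt / n_predict / content fields reflect the server's API as of late
2023 and may differ between llama.cpp versions.

  // Minimal Swift sketch: ask a local llama.cpp server for a completion.
  // Assumes the server was started with something like
  // `./server -m model.gguf --port 8080`; nothing here leaves localhost.
  import Foundation

  struct CompletionRequest: Codable {
      let prompt: String
      let n_predict: Int          // max tokens to generate
  }

  struct CompletionResponse: Codable {
      let content: String         // generated text
  }

  func complete(prompt: String) async throws -> String {
      var req = URLRequest(url: URL(string: "http://127.0.0.1:8080/completion")!)
      req.httpMethod = "POST"
      req.setValue("application/json", forHTTPHeaderField: "Content-Type")
      req.httpBody = try JSONEncoder().encode(
          CompletionRequest(prompt: prompt, n_predict: 128))
      let (data, _) = try await URLSession.shared.data(for: req)
      return try JSONDecoder().decode(CompletionResponse.self, from: data).content
  }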
| cztomsik wrote:
| It goes a bit against the original idea: this should be 100%
| air-gapped. No phoning home, no data collection; everything
| is private.
| rgbrgb wrote:
| Makes sense, thanks for considering (and FWIW I'm also
| building with a private-AI / offline philosophy). In my
| implementation, the compromise is 1) being super loud, with
| an alert warning the user whenever they're sharing feedback,
| and 2) providing a setting where the user can turn off the
| feedback buttons entirely. I had gotten user feedback about
| wanting to tell the AI when it was doing a good/bad job, and
| this seemed like a privacy-preserving compromise that could
| actually improve models. Anyway, good luck and congrats
| again!
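A rough SwiftUI sketch of the compromise described above; the
FeedbackButtons view, the feedbackEnabled setting, and the onShare
callback are invented names for illustration, not taken from either
project. Feedback buttons only render when the user has opted in, and
sharing always goes through an explicit confirmation alert.

  // Hypothetical sketch of opt-in feedback buttons; names are invented.
  import SwiftUI

  struct FeedbackButtons: View {
      // Persisted preference; defaults to sharing nothing.
      @AppStorage("feedbackEnabled") private var feedbackEnabled = false
      @State private var showShareWarning = false
      let onShare: () -> Void   // caller decides what a "share" uploads

      var body: some View {
          if feedbackEnabled {
              HStack {
                  Button("Good answer") { showShareWarning = true }
                  Button("Bad answer") { showShareWarning = true }
              }
              .alert("Share this exchange as feedback?",
                     isPresented: $showShareWarning) {
                  Button("Share") { onShare() }
                  Button("Cancel", role: .cancel) { }
              } message: {
                  Text("The prompt and response will be uploaded. Nothing else leaves this machine.")
              }
          }
      }
  }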
| haolez wrote:
| I'll give it a try. I hope it's better than gpt4all, which I
| found to be very buggy.
| cztomsik wrote:
| It should work fine if you're on Apple silicon; everything
| else is unfortunately a second-class citizen, because I don't
| have any other devices and it's currently just me working on
| it. But if you'd like to maintain the Windows, Linux, or
| Intel-based Mac releases, ping me on Discord and I'd be happy
| to discuss and accept PRs.
| rgbrgb wrote:
| edit: [deleted]
| cztomsik wrote:
| Intel works in Ava; it's just slow, for some reason.
| rgbrgb wrote:
| ah, got it. Yeah, the llama.cpp speeds I've seen are like
| 5-10 tokens/second on Intel, but I've been impressed with
| the relatively ancient hardware llama.cpp works on (a 2017
| iMac with 8 GB RAM at 5.5 t/s with a 7B model!).
| cztomsik wrote:
| it's ok, you can put the link back I don't mind the
| competition :)
|
| yes, I know it definitely works, so it has to be some
| stupid mistake but it's impossible to reproduce it
| without actual intel mac :-/
|
| I had enough trouble with supporting monterey, which also
| does not work 100% AFAIK
| dang wrote:
| We changed the URL from https://tomsik.cz/posts/ava-oss/ to the
| project page, because it doesn't look like this project has been
| discussed on HN before. Readers might want to look at both links!
| cztomsik wrote:
| Actually, it was; here's the initial thread:
| https://news.ycombinator.com/item?id=37561085
|
| The big news here is that it's now open-source:
| https://github.com/cztomsik/ava
___________________________________________________________________
(page generated 2023-12-09 23:00 UTC)