Post AcaKxsXdoAWyug9TsW by ben@ischool.social
(DIR) Post #AcaKx4oagsNcglhJM8 by ben@ischool.social
2023-12-08T01:04:39Z
0 likes, 0 repeats
@simon is there a llamafile for llava 1.5 13B? Your blog post is awesome, and now I'm hoping to try the larger model.
(DIR) Post #AcaKxWP27UZcFbxIPI by freakazoid@retro.social
2023-12-08T01:06:48Z
0 likes, 0 repeats
@ben @simon What format? There are lots. https://huggingface.co/models?sort=trending&search=llava+1.5+13b
(DIR) Post #AcaKxaCQ0KrM1e5HIe by ben@ischool.social
2023-12-08T01:23:59Z
0 likes, 0 repeats
@freakazoid @simon specifically Justine Tunney's llamafile format: https://simonwillison.net/2023/Nov/29/llamafile/
(DIR) Post #AcaKxdDamQa5O9el2u by freakazoid@retro.social
2023-12-08T01:30:45Z
0 likes, 0 repeats
@ben @simon Oh, neat! It appears to be a self-executing GGUF file. There are instructions on that page for creating a llamafile from a GGUF file. I found one GGUF quantization of LLaVA 1.5 13B. I'd recommend using the Q5_K_M version. https://huggingface.co/PsiPi/liuhaotian_llava-v1.5-13b-GGUF/tree/main
(DIR) Post #AcaKxjj0aXHBZKOvqa by freakazoid@retro.social
2023-12-08T01:36:30Z
0 likes, 0 repeats
@ben @simon Err, actually I guess the instructions for making a llamafile are in the llamafile README. https://github.com/Mozilla-Ocho/llamafile
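(The llamafile README's "add your own weights" flow can be sketched roughly as below. This is a sketch, not the README verbatim: the exact GGUF filenames, the `.args` contents, and the `zipalign` flags should be checked against the Mozilla-Ocho/llamafile README before running; the LLaVA filenames here are assumptions based on the PsiPi repo linked above.)

```shell
# Sketch: turn a plain llamafile binary into a self-contained llamafile
# by embedding GGUF weights with the zipalign tool from the repo.

# Start from a copy of the stock llamafile executable
cp llamafile llava-13b.llamafile

# Default arguments baked into the file; the trailing "..." is the
# README's literal marker for passing through extra CLI arguments.
cat > .args <<'EOF'
-m
llava-v1.5-13b-Q5_K_M.gguf
--mmproj
mmproj-model-f16.gguf
...
EOF

# Embed the weights, the LLaVA vision projector, and the args file
# (-j0 stores them uncompressed so they can be memory-mapped).
./zipalign -j0 llava-13b.llamafile \
    llava-v1.5-13b-Q5_K_M.gguf mmproj-model-f16.gguf .args
```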
(DIR) Post #AcaKxsXdoAWyug9TsW by ben@ischool.social
2023-12-08T01:38:02Z
0 likes, 0 repeats
@freakazoid @simon double thanks
(DIR) Post #AcaKy1FBSA79u2aMrI by simon@fedi.simonwillison.net
2023-12-08T02:04:56Z
0 likes, 0 repeats
@ben @freakazoid you shouldn't need to make a new llamafile to try it out if you download the GGUF and run it like this: https://simonwillison.net/2023/Nov/29/llamafile/#llamafile-trying-other-models
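(A rough sketch of that approach, per the blog post linked above: run the existing llamafile server binary against externally downloaded weights via `-m`. The release version, the exact GGUF filenames in the PsiPi repo, and the `--mmproj` projector filename are assumptions; verify them on the respective pages before downloading.)

```shell
# Download the llamafile server runtime (version number is illustrative)
curl -LO https://github.com/Mozilla-Ocho/llamafile/releases/download/0.1/llamafile-server-0.1
chmod +x llamafile-server-0.1

# Download the Q5_K_M quantization plus the CLIP projector that
# LLaVA models need for image input (filenames assumed from the repo)
curl -LO https://huggingface.co/PsiPi/liuhaotian_llava-v1.5-13b-GGUF/resolve/main/llava-v1.5-13b-Q5_K_M.gguf
curl -LO https://huggingface.co/PsiPi/liuhaotian_llava-v1.5-13b-GGUF/resolve/main/mmproj-model-f16.gguf

# Point the server at the external weights: no new llamafile required
./llamafile-server-0.1 -m llava-v1.5-13b-Q5_K_M.gguf --mmproj mmproj-model-f16.gguf
```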