Post #AchUVVBTPtIS0jbtxo by stsquad@mastodon.org.uk
2023-12-10T19:33:57Z
0 likes, 1 repeats
I've finally finished a #blog post I started about 2 weeks ago on my experiences with #LLMs like #ChatGPT. Unlike the last post it was 100% written by me, which explains why it's taken so long to finish 😄 It's very much from the perspective of a novice still learning how these things work: https://www.bennee.com/~alex/blog/2023/12/10/a-systems-programmers-perspectives-on-generative-ai/ (replies to this comment appear on the blog)
Post #AchUVi4rTmjRzzcIxk by penguin42@mastodon.org.uk
2023-12-11T12:56:53Z
0 likes, 0 repeats
@stsquad AI stuff is interesting; I've played with the free ChatGPT & Bard; they seem to be pretty good at explaining or searching for stuff, and yes, I've seen others suggest using them for review. That Arm version conversion you mention is impressive; you can't blame it for coming unstuck on an MMU! I'm tempted to try a local run; I'm told llama.cpp runs OK on a 32G RAM host even without a GPU. I find the way the parameter types shrink from f32->f16->f8->i4 fascinating architecturally.
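
[For readers curious what that f32->i4 shrinkage looks like in practice, here is a rough sketch of block quantisation in the spirit of llama.cpp's 4-bit formats. It is not llama.cpp's actual code; the function names and the block size of 32 are illustrative assumptions. Each block of weights keeps one f16 scale plus a 4-bit integer per weight, which is where most of the memory saving comes from.]

    # Sketch of q4_0-style block quantisation (illustrative, not llama.cpp's code).
    import numpy as np

    def quantise_q4(weights: np.ndarray, block_size: int = 32):
        """Symmetric 4-bit quantisation per block: x ~= scale * q, q in [-8, 7]."""
        blocks = weights.reshape(-1, block_size).astype(np.float32)
        # One scale per block, chosen so the largest value fits the int4 range.
        scales = np.abs(blocks).max(axis=1, keepdims=True) / 7.0
        scales[scales == 0] = 1.0                      # avoid divide-by-zero
        q = np.clip(np.round(blocks / scales), -8, 7).astype(np.int8)
        return q, scales.astype(np.float16)            # 4 bits + one f16 per block

    def dequantise_q4(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
        return (q.astype(np.float32) * scales.astype(np.float32)).ravel()

    w = np.random.randn(64).astype(np.float32)
    q, s = quantise_q4(w)
    print("max error:", np.abs(w - dequantise_q4(q, s)).max())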
Post #AchYAG7gcMsZ2ExtRI by stsquad@mastodon.org.uk
2023-12-11T13:37:57Z
0 likes, 0 repeats
@penguin42 the quantised 7B models run pretty well in my experience if you have enough RAM and cores. Certainly enough to experiment with.
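
[A minimal sketch of what experimenting with a quantised 7B model on CPU can look like, assuming the llama-cpp-python bindings rather than the bare llama.cpp binary the thread mentions; the model path is hypothetical and stands in for whatever quantised 7B file you have downloaded.]

    # Minimal CPU-only run of a quantised 7B model via llama-cpp-python.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-2-7b.Q4_0.gguf",  # hypothetical local file
        n_threads=8,  # CPU-only inference: match your physical core count
    )
    out = llm("Explain what an MMU does in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])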