[HN Gopher] SlowLlama: Finetune llama2-70B and codellama on MacB...
___________________________________________________________________
SlowLlama: Finetune llama2-70B and codellama on MacBook Air without
quantization
Author : behnamoh
Score : 38 points
Date   : 2023-10-06 21:46 UTC (1 hour ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| SillyUsername wrote:
| So I keep getting told "Macs don't use memory like Windows", "8GB
| is fine for everything on Mac" (with quotes from people who use
| it for Photoshop), and "the SSD is so fast that if you use virtual
| memory you won't even notice"... Is 8GB actually suitable for LLM
| use like this, and has anybody tried it?
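A back-of-envelope sketch of why 8 GB is tight for a 70B model, and why SlowLlama resorts to offloading at all: just holding the weights in RAM dwarfs the machine's memory. (The parameter count and byte widths below are standard figures, but this is an illustration, not a description of SlowLlama's actual memory layout, which streams weights from SSD.)

```python
# Rough memory math for holding llama2-70B weights in RAM,
# ignoring activations, optimizer state, and KV cache.

GiB = 1024 ** 3

def param_memory_gib(n_params, bytes_per_param):
    """RAM needed just to store the weights at a given precision."""
    return n_params * bytes_per_param / GiB

llama2_70b = 70e9  # ~70 billion parameters

fp16 = param_memory_gib(llama2_70b, 2)
fp32 = param_memory_gib(llama2_70b, 4)

print(f"fp16 weights: {fp16:.0f} GiB")   # ~130 GiB
print(f"fp32 weights: {fp32:.0f} GiB")   # ~261 GiB
print(f"fp16 weights are ~{fp16 / 8:.0f}x an 8 GiB MacBook's RAM")
```

Even at half precision the weights alone are roughly 16x the RAM of a base 8 GB MacBook, which is why macOS swap alone can't paper over the gap and the project pages weights in from SSD a block at a time instead.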
| RockRobotRock wrote:
| 8 GB on a MacBook is kinda rough. I regret it.
| narrator wrote:
| I love that I can now fine-tune an LLM on my local MacBook.
| IMHO, this is why Facebook open sourced Llama. Rockstar
| programmers like this guy are going to make it scale out on cheap
| hardware and give them an edge over even their well-funded
| competitors.
___________________________________________________________________
(page generated 2023-10-06 23:00 UTC)