[HN Gopher] Llama2.c running on a Silicon Graphics Indigo2 works...
___________________________________________________________________
Llama2.c running on a Silicon Graphics Indigo2 workstation
Author : lambdaba
Score : 15 points
Date : 2024-01-22 19:57 UTC (3 hours ago)
(HTM) web link (twitter.com)
(TXT) w3m dump (twitter.com)
| xenonite wrote:
| Amazing!
|
| Training the model takes quite some compute power, even though
| it is small at only 15M parameters. I wonder whether the
| resources of the time would have been sufficient, and whether
| there would have been enough disk space to store the input
| corpus.
| andy99 wrote:
| It looks like these machines run IRIX, a Unix variant for MIPS
| processors with some parallelization features. I wonder whether
| the compiler he's using takes advantage of these, or whether
| there's some speedup still on the table. From what I remember,
| llama2.c uses OpenMP to optionally parallelize some of the
| matrix-vector products, which wouldn't be available on that
| machine.
|
| https://en.m.wikipedia.org/wiki/IRIX
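[For context on that OpenMP point: the hot loop in llama2.c is a
plain matrix-vector product guarded by an OpenMP pragma. A compiler
without OpenMP support, as presumably on this IRIX machine, simply
ignores the pragma and runs the loop serially. A minimal sketch in
that spirit; names and layout are illustrative, not copied from the
repo:]

```c
/* Matrix-vector product in the style of llama2.c's inner loop.
   W is a (d, n) row-major matrix, x is an n-vector, out is a
   d-vector. With OpenMP enabled, output rows are split across
   threads; otherwise the pragma is ignored and this runs serially. */
void matmul(float *out, const float *x, const float *w, int n, int d) {
    #pragma omp parallel for
    for (int i = 0; i < d; i++) {
        float val = 0.0f;
        for (int j = 0; j < n; j++) {
            val += w[i * n + j] * x[j];
        }
        out[i] = val;
    }
}
```

[Either way the result is identical; OpenMP only changes how the
independent rows of the output are scheduled.]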
| mechagodzilla wrote:
| It's interesting to think about some of the earliest machines we
| could have run a pretty great LLM on. A Cray X-MP/EA from 1986
| could address up to 2 GW (gigawords), which at 8 bytes per word
| comes to 16 GB. The X-MP or the earlier Cray-1 line could also
| use multiple solid-state SRAM disks of up to 1 GB each that
| could stream at ~1 GB/sec.
| 1oooqooq wrote:
| Answering the question in the xweet: researchers 28 years ago
| would have had decent, reproducible, peer-reviewed metrics, so
| they would not even have looked at most of this.
___________________________________________________________________
(page generated 2024-01-22 23:01 UTC)