[HN Gopher] Fine-Tuning LLMs to 1.58bit
       ___________________________________________________________________
        
       Fine-Tuning LLMs to 1.58bit
        
       Author : galeos
       Score  : 44 points
       Date   : 2024-09-18 15:33 UTC (7 hours ago)
        
 (HTM) web link (huggingface.co)
 (TXT) w3m dump (huggingface.co)
        
       | amilios wrote:
        | Very exciting, although it was a bit disappointing to see
        | that they're only hitting Llama 1 7B-level performance by
        | quantizing Llama 3. But I'm sure the performance gap will
        | close over time!
        
       | patleeman wrote:
        | That's awesome. The original discussion of BitNet made it
        | seem like you needed to train a model from scratch, but it's
        | neat that they were able to adapt an existing model. This is
        | quite exciting.
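         
        For context, BitNet b1.58 constrains each weight to the
        ternary set {-1, 0, +1} using "absmean" quantization: the
        weight matrix is divided by its mean absolute value, then each
        entry is rounded and clipped. A minimal sketch in PyTorch (the
        function name is illustrative, not taken from the linked post):
         
            import torch
         
            def absmean_ternary_quantize(w: torch.Tensor,
                                         eps: float = 1e-5):
                # Scale by the mean absolute value of the matrix
                # (absmean), then round and clip to {-1, 0, +1}.
                scale = w.abs().mean().clamp(min=eps)
                w_q = (w / scale).round().clamp(-1, 1)
                # Keep the scale so outputs can be rescaled after
                # the matmul.
                return w_q, scale
         
            # Usage: quantize a layer's weights, then reconstruct an
            # approximation for a quick error check.
            w = torch.randn(256, 256)
            w_q, scale = absmean_ternary_quantize(w)
            w_approx = w_q * scale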
        
       ___________________________________________________________________
       (page generated 2024-09-18 23:02 UTC)