[HN Gopher] Exploring LoRA - Part 1: The Idea Behind Parameter E...
       ___________________________________________________________________
        
       Exploring LoRA - Part 1: The Idea Behind Parameter Efficient Fine-
       Tuning
        
       Author : aquastorm
       Score  : 139 points
       Date   : 2024-12-23 12:19 UTC (2 days ago)
        
 (HTM) web link (medium.com)
 (TXT) w3m dump (medium.com)
        
       | threepi wrote:
       | Author here. Happy to see this posted here. This is actually a
       | series of blog posts:
       | 
       | 1. Exploring LoRA -- Part 1: The Idea Behind Parameter Efficient
       | Fine-Tuning and LoRA:
       | https://medium.com/inspiredbrilliance/exploring-lora-part-1-...
       | 
       | 2. Exploring LoRA - Part 2: Analyzing LoRA through its
       | Implementation on an MLP:
       | https://medium.com/inspiredbrilliance/exploring-lora-part-2-...
       | 
       | 3. Intrinsic Dimension Part 1: How Learning in Large Models Is
       | Driven by a Few Parameters and Its Impact on Fine-Tuning
       | https://medium.com/inspiredbrilliance/intrinsic-dimension-pa...
       | 
       | 4. Intrinsic Dimension Part 2: Measuring the True Complexity of a
       | Model via Random Subspace Training
       | https://medium.com/inspiredbrilliance/intrinsic-dimension-pa...
       | 
       | Hope you enjoy reading the other posts too. Merry Christmas and
       | Happy Holidays!
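       | For readers who haven't opened the articles yet, here is a
       | minimal sketch of the core idea the series explores (this is
       | not the article's code, and the layer sizes and hyperparameters
       | are illustrative): instead of updating a full weight matrix W,
       | LoRA freezes W and trains two small low-rank matrices B and A
       | whose product is added to it.
       | 
       | ```python
       | import numpy as np
       | 
       | rng = np.random.default_rng(0)
       | 
       | d_out, d_in, r = 512, 512, 8            # rank r << min(d_out, d_in)
       | W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
       | 
       | A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
       | B = np.zeros((d_out, r))    # trainable, zero init -> update starts at 0
       | alpha = 16                  # LoRA scaling hyperparameter
       | 
       | def lora_forward(x):
       |     # Effective weight is W + (alpha / r) * B @ A; W itself never changes.
       |     return x @ (W + (alpha / r) * (B @ A)).T
       | 
       | full_params = W.size            # 512 * 512 = 262144
       | lora_params = A.size + B.size   # 8 * 512 + 512 * 8 = 8192
       | print(full_params, lora_params)
       | ```
       | 
       | The parameter-efficiency claim falls out of the sizes: only A
       | and B are trained, roughly a 32x reduction here, and because B
       | starts at zero the adapted model initially behaves exactly like
       | the frozen one.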
        
         | 3abiton wrote:
         | Thanks for sharing. This got me thinking: why is Medium used
         | so much for technical articles like these? Especially since
         | lots of articles have been ending up behind a paywall for me
         | recently.
        
       | gautambt wrote:
       | Generated notebooklm here:
       | https://notebooklm.google.com/notebook/7094a513-af83-4c5b-a4...
        
         | khazhoux wrote:
         | What is this? Is this a Google summarization service?
        
       | jwildeboer wrote:
       | (Not to be confused with LoRa (short for "long range"), a
       | spread spectrum modulation technique derived from chirp spread
       | spectrum (CSS) technology, which powers technologies like
       | LoRaWAN and Meshtastic.)
        
         | FusspawnUK wrote:
         | Really wish they had come up with another name. Googling gets
         | annoying.
        
           | the__alchemist wrote:
           | Contributing factors: they both use mixed capitalization,
           | and they have partially overlapping audiences.
        
         | SeasonalEnnui wrote:
         | This gets me every time. I expect to see something interesting
         | and it turns out to be the other one. One is a fantastic thing
         | and the other is mediocre; pick which way round at your
         | discretion!
        
           | sva_ wrote:
           | Pretty simple to spot LoRa vs LoRA.
        
           | pavlov wrote:
           | What exactly is the confusion? Does "parameter efficient
           | fine-tuning" mean anything in the context of the other
           | LoRa? If not, then it's probably obvious which one this is
           | about.
        
             | mrgaro wrote:
             | Actually it does: LoRa the radio protocol has parameters
             | to tune. Usually both the sender and the receiver need to
             | match these, so I read this as a method for automatically
             | tuning them based on the distance and the radio
             | environment.
        
       | danielhanchen wrote:
       | Super cool series of articles! :)
        
       ___________________________________________________________________
       (page generated 2024-12-25 23:01 UTC)