[HN Gopher] Enhancing Code Completion for Rust in Cody
       ___________________________________________________________________
        
       Enhancing Code Completion for Rust in Cody
        
       Author : imnot404
       Score  : 63 points
       Date   : 2024-06-13 19:12 UTC (5 days ago)
        
 (HTM) web link (sourcegraph.com)
 (TXT) w3m dump (sourcegraph.com)
        
       | shahahmed wrote:
        | Arguably you could reduce latency even further by keeping the
        | model on-device as well, but that would mean revealing the
        | weights of the fine-tuned model.
        | 
        | If a user preferred the reduced latency and had the RAM, would
        | that be an option?
        
         | s1mplicissimus wrote:
          | The model is probably most of the "secret sauce" of Cody, so
          | if they gave that away, people could just copy it around like
          | mp3s. That's my guess.
        
           | morgante wrote:
            | Completely incorrect, as Sourcegraph has not historically
            | trained models and Cody swaps between many open-source and
            | third-party models.
        
         | rdedev wrote:
          | Looking at their GitHub page, it seems like they are using
          | existing LLM services. It should be possible to modify Cody
          | to make it work with a local LLM.
        
         | daemonologist wrote:
          | This is true, but only if you have a GPU (/accelerator)
          | comparable in performance to the one backing the service, or
          | at least close enough once you account for the benefit of
          | staying local. It's an expensive proposition because the
          | hardware will sit idle between completions and whenever
          | you're not coding.
        
       | sqs wrote:
       | Awesome to see our Rust fine-tune here on HN. We (Sourcegraph
       | team) are here if anyone has questions or ideas for us!
       | 
       | BTW, Cody is open source (Apache 2):
       | https://github.com/sourcegraph/cody.
        
         | IshKebab wrote:
         | Your Cody page doesn't answer a very obvious question: does the
         | LLM run locally or is this going to send all my code to
         | Sourcegraph? I assume that is a deliberate omission and the
         | answer is the latter.
        
           | jdorfman wrote:
            | The default LLM is Claude 3 Sonnet, hosted by Anthropic.
            | However, you can run local models via Ollama:
           | https://sourcegraph.com/docs/cody/clients/install-
           | vscode#sup...
        
         | tamimio wrote:
          | Thank you; cody.dev kept giving me a blank white page.
        
           | mutematrix wrote:
           | Sorry about that! I just looked into this (also on the
           | Sourcegraph team). If you get a chance, could you check if
           | it's working now?
        
       ___________________________________________________________________
       (page generated 2024-06-18 23:00 UTC)