[HN Gopher] Self hosting a Copilot replacement: my personal experience
___________________________________________________________________
Self hosting a Copilot replacement: my personal experience
Author : andreagrandi
Score : 14 points
Date : 2024-07-15 09:03 UTC (4 days ago)
(HTM) web link (www.andreagrandi.it)
(TXT) w3m dump (www.andreagrandi.it)
| cmpit wrote:
| I also wanted to try some local LLMs, but gave up and came to the
| same conclusion:
|
| "While the idea of having a personal and private instance of a
| code assistant is interesting (and can also be the only available
| option in certain environments), the reality is that achieving
| the same level of performance as GitHub Copilot is quite
| challenging."
|
| But considering the pace at which AI and the ecosystem advance,
| things might change soon.
| tcdent wrote:
| Went through this same exercise this week and came to the same
| conclusion.
|
| After trying multiple open models, switching back to GPT-4o and
| seeing the speed and quality of its output was illuminating.
| NomDePlum wrote:
| I don't use Copilot so I'm not able to compare, but ollama +
| llama3:instruct + open-webui on a Mac Pro M2 is helpful when
| coding.
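| For reference, a minimal sketch of querying that stack locally
| (assuming Ollama is serving its default REST API on port 11434;
| the model tag and prompt are only illustrative):
|
|   # ask_llama3.py -- query a local Ollama server
|   import json
|   import urllib.request
|
|   payload = {
|       # model name as pulled via: ollama pull llama3:instruct
|       "model": "llama3:instruct",
|       "prompt": "Write a Python function that reverses a string.",
|       # return a single JSON object instead of a token stream
|       "stream": False,
|   }
|
|   req = urllib.request.Request(
|       "http://localhost:11434/api/generate",
|       data=json.dumps(payload).encode("utf-8"),
|       headers={"Content-Type": "application/json"},
|   )
|
|   with urllib.request.urlopen(req) as resp:
|       body = json.loads(resp.read().decode("utf-8"))
|       print(body["response"])  # the generated completion text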
| rcarmo wrote:
| It really depends on the use case, and right now using Ollama for
| coding just isn't that useful. I can use gemma2 and phi3 just
| fine for general summarization and keyword extraction (including
| most of the stuff I need to do home automation with a "better
| Siri"--low bar, I know), but generating or autocompleting code is
| just another level entirely.
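| As a rough sketch of that kind of summarization / keyword
| extraction (assuming the ollama Python client package is
| installed and a gemma2 model has been pulled; the prompt
| wording is just an example):
|
|   # extract_keywords.py -- keyword extraction with a local model
|   import ollama
|
|   text = ("Turn off the living room lights and set the "
|           "thermostat to 20 degrees before 11pm.")
|
|   response = ollama.chat(
|       model="gemma2",
|       messages=[{
|           "role": "user",
|           "content": "List the actions and entities in this "
|                      "request as short comma-separated keywords:\n"
|                      + text,
|       }],
|   )
|
|   # the reply text, e.g. "turn off, living room lights, ..."
|   print(response["message"]["content"])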
___________________________________________________________________
(page generated 2024-07-19 23:03 UTC)