[HN Gopher] Show HN: LocalScore - Local LLM Benchmark
       ___________________________________________________________________
        
       Show HN: LocalScore - Local LLM Benchmark
        
       Hey Folks!  I've been building an open source benchmark for
       measuring local LLM performance on your own hardware. The
       benchmarking tool is a CLI written on top of Llamafile to allow for
       portability across different hardware setups and operating systems.
       The website is a database of results from the benchmark, allowing
       you to explore the performance of different models and hardware
       configurations.  Please give it a try! Any feedback and
       contribution is much appreciated. I'd love for this to serve as a
       helpful resource for the local AI community.
        
       For more check out:
       - Website: https://localscore.ai
       - Demo video: https://youtu.be/De6pA1bQsHU
       - Blog post: https://localscore.ai/blog
       - CLI Github: https://github.com/Mozilla-Ocho/llamafile/tree/main/localsco...
       - Website Github: https://github.com/cjpais/localscore
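        
       As a rough illustration of the kind of numbers a benchmark like
       this reports (time to first token and generation speed), here is
       a minimal sketch in Python. It assumes a llamafile or llama.cpp
       server already running locally and exposing the OpenAI-compatible
       /v1/chat/completions endpoint on port 8080; it is not the
       LocalScore methodology, and it counts streamed chunks as a crude
       proxy for tokens.
        
       import json
       import time
       import urllib.request
        
       URL = "http://localhost:8080/v1/chat/completions"  # assumed local server
       payload = {
           "model": "local",  # most local servers accept any model name here
           "messages": [{"role": "user", "content": "Write a haiku about benchmarks."}],
           "max_tokens": 128,
           "stream": True,
       }
       req = urllib.request.Request(
           URL,
           data=json.dumps(payload).encode(),
           headers={"Content-Type": "application/json"},
       )
        
       start = time.perf_counter()
       first_token_at = None
       chunks = 0
       with urllib.request.urlopen(req) as resp:
           for raw in resp:  # server-sent events, one "data: ..." line per chunk
               line = raw.decode().strip()
               if not line.startswith("data: ") or line == "data: [DONE]":
                   continue
               delta = json.loads(line[len("data: "):])["choices"][0]["delta"]
               if delta.get("content"):
                   if first_token_at is None:
                       first_token_at = time.perf_counter()
                   chunks += 1  # crude: one streamed chunk ~ one token
        
       if first_token_at is not None:
           gen_seconds = time.perf_counter() - first_token_at
           print(f"time to first token: {first_token_at - start:.2f} s")
           print(f"generation speed:    {chunks / max(gen_seconds, 1e-9):.1f} tok/s (approx)")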
        
       Author : sipjca
       Score  : 76 points
       Date   : 2025-04-03 16:32 UTC (3 days ago)
        
 (HTM) web link (www.localscore.ai)
 (TXT) w3m dump (www.localscore.ai)
        
       | jborichevskiy wrote:
       | Congrats on launching!
       | 
       | Stoked to have this dataset out in the open. I submitted a bunch
       | of tests for some models I'm experimenting with on my M4 Pro.
       | Rather paltry scores compared to having a dedicated GPU but I'm
       | excited that running a 24B model locally is actually feasible at
       | this point.
        
       | mentalgear wrote:
       | Congrats on the effort - the local-first / private space needs
       | more performant AI, and AI in general needs more comparable and
       | trustworthy benchmarks.
       | 
       | Notes:
       | 
       | - Ollama integration would be nice.
       | 
       | - Is there anonymous federated score sharing? That way, users
       | could approximate a model's performance before downloading it.
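        
       On the Ollama suggestion: Ollama's local REST API already reports
       timing counters in its /api/generate response, so an integration
       could in principle derive prompt-processing and generation speeds
       from them. The sketch below only illustrates that idea; the
       endpoint, field names, and model name are my assumptions about
       Ollama's API, not anything LocalScore does today.
        
       import json
       import urllib.request
        
       # Assumed: Ollama running locally with this model already pulled.
       req = urllib.request.Request(
           "http://localhost:11434/api/generate",
           data=json.dumps({
               "model": "llama3.2",  # placeholder model name
               "prompt": "Summarize what a benchmark is in one sentence.",
               "stream": False,
           }).encode(),
           headers={"Content-Type": "application/json"},
       )
       stats = json.loads(urllib.request.urlopen(req).read())
        
       # Ollama reports durations in nanoseconds.
       prompt_tps = stats["prompt_eval_count"] / (stats["prompt_eval_duration"] / 1e9)
       gen_tps = stats["eval_count"] / (stats["eval_duration"] / 1e9)
       print(f"prompt processing: {prompt_tps:.1f} tok/s")
       print(f"generation:        {gen_tps:.1f} tok/s")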
        
       | alchemist1e9 wrote:
       | Really awesome project!
       | 
       | Clicking on a GPU gives a nice, simple visualization. Maybe try
       | making that type of visual representation immediately accessible
       | on the landing page.
       | 
       | cpubenchmark.net could be an example of how to draw the site
       | visitor into the paradigm.
        
       | roxolotl wrote:
       | This is super cool. I finally just upgraded my desktop and one
       | thing I'm curious to do with it is run local models. Of course
       | the RAM is late, so I've been googling to get an idea of what I
       | could expect, and there's not much out there to compare against
       | unless you're running state-of-the-art hardware.
       | 
       | I'll make sure to run the benchmark and contribute my results
       | once my RAM comes in.
        
       | jsatok wrote:
       | Contributed scores for the M3 Ultra 512 GB unified memory:
       | https://www.localscore.ai/accelerator/404
       | 
       | Happy to test larger models that utilize the memory capacity if
       | helpful.
        
       | ftbsqcfjm wrote:
       | Interesting approach to making local recommendations more
       | personalized and relevant. I'm curious about the cold start
       | problem for new users and how the platform handles privacy.
       | Partnering with local businesses to augment data could be a smart
       | move. Will be watching to see how this develops!
        
       | omneity wrote:
       | This is great, congrats for launching!
       | 
       | A couple of ideas: I would like to benchmark a remote headless
       | server, as well as different ways of running the LLM (vLLM vs.
       | TGI vs. llama.cpp ...) on my local machine, and in these cases
       | llamafile is quite limiting. Connecting over an OpenAI-like API
       | instead would be great!
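        
       For what it's worth, since vLLM, TGI, llama.cpp's server, and
       llamafile can all expose an OpenAI-compatible endpoint, a runner
       along these lines mostly needs a configurable base URL to cover
       both local and remote headless setups. A minimal sketch using the
       openai Python client follows; the URLs and model names are
       placeholders, and this is not how LocalScore works today.
        
       import time
       from openai import OpenAI
        
       def generation_speed(base_url: str, model: str, prompt: str,
                            max_tokens: int = 128) -> float:
           """Rough generated tokens/sec against any OpenAI-compatible server."""
           client = OpenAI(base_url=base_url, api_key="not-needed-locally")
           start, chunks = time.perf_counter(), 0
           stream = client.chat.completions.create(
               model=model,
               messages=[{"role": "user", "content": prompt}],
               max_tokens=max_tokens,
               stream=True,
           )
           for chunk in stream:
               if chunk.choices and chunk.choices[0].delta.content:
                   chunks += 1  # approximation: one streamed chunk ~ one token
           return chunks / (time.perf_counter() - start)
        
       # Hypothetical endpoints:
       # generation_speed("http://localhost:8080/v1", "local", "Hello")         # llamafile
       # generation_speed("http://gpu-box:8000/v1", "Qwen/Qwen2.5-7B", "Hello") # remote vLLM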
        
       ___________________________________________________________________
       (page generated 2025-04-06 23:01 UTC)