[HN Gopher] Chrome's New Embedding Model: Smaller, Faster, Same Quality
       ___________________________________________________________________
        
       Chrome's New Embedding Model: Smaller, Faster, Same Quality
        
       Author : kaycebasques
       Score  : 29 points
       Date   : 2025-05-13 14:39 UTC (8 hours ago)
        
 (HTM) web link (dejan.ai)
 (TXT) w3m dump (dejan.ai)
        
       | jbellis wrote:
       | TIL that Chrome ships an internal embedding model, interesting!
       | 
       | It's a shame that it's not open source; it's unlikely that
       | there's anything super proprietary in an embedding model that's
       | optimized to run on CPU.
       | 
       | (I'd use it if it were released; in the meantime, MiniLM-L6-v2
       | works reasonably well. https://brokk.ai/blog/brokk-under-the-hood)
        
         | vessenes wrote:
         | Agreed! On open source though - can't you just pull the model
         | and use the weights? I confess I have no idea what the
         | licensing would be for an open source-backed browser deploying
         | weights, but it seems like unless you made a huge amount of
         | money off it, it would be unproblematic, and even then it
         | could be just fine.
        
       | darepublic wrote:
       | > Yes - Chromium now ships a tiny on-device sentence-embedding
       | model, but it's strictly an internal feature.
       | 
       | What it's for: "History Embeddings." Since ~M128 the browser can
       | turn every page-visit title/snippet and your search queries into
       | dense vectors so it can do semantic history search and surface
       | "answer" chips. The whole thing is gated behind two experiments:
       | 
       | ^ response from chatgpt
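
The flow the quote describes (titles and queries become dense vectors, then nearest-cosine wins) can be sketched in a few lines. The `embed()` below is a deterministic hash-based stand-in, not Chrome's actual model; the history titles are made up:

```python
# Toy sketch of semantic history search: embed page titles and the
# query, then rank titles by cosine similarity to the query vector.
# embed() is a hypothetical stand-in for a real sentence encoder.
import hashlib
import math

def embed(text: str, dim: int = 16) -> list[float]:
    # Stand-in embedding: bucket each token by hash, then L2-normalize.
    vec = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-length, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

history = [
    "Chrome embedding model",
    "Banana bread recipe",
    "Chrome history search",
]
index = [(title, embed(title)) for title in history]

query_vec = embed("chrome model")
ranked = sorted(index, key=lambda t: cosine(query_vec, t[1]), reverse=True)
# ranked[0][0] is the title sharing the most terms with the query
```

A real implementation would swap `embed()` for a trained encoder; the ranking step (cosine over normalized vectors, take top-k) stays the same.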
        
       | pants2 wrote:
       | What does Chrome use embeddings for?
        
       ___________________________________________________________________
       (page generated 2025-05-13 23:01 UTC)