[HN Gopher] Kagi Assistants
       ___________________________________________________________________
        
       Kagi Assistants
        
       Author : ingve
       Score  : 91 points
       Date   : 2025-11-20 20:30 UTC (2 hours ago)
        
 (HTM) web link (blog.kagi.com)
 (TXT) w3m dump (blog.kagi.com)
        
       | jryio wrote:
       | I think there's a very important nugget here unrelated to agents:
        | Kagi as a search engine is a higher-signal source of information
        | than Google's PageRank- and AdSense-funded model. Primarily
        | because Google as it is today includes a massive amount of noise
        | and suffers from blowback/cross-contamination as more
        | LLM-generated content pollutes the information ecosystem.
       | 
       | > We found many, many examples of benchmark tasks where the same
       | model using Kagi Search as a backend outperformed other search
       | engines, simply because Kagi Search either returned the relevant
       | Wikipedia page higher, or because the other results were not
       | polluting the model's context window with more irrelevant data.
       | 
       | > This benchmark unwittingly showed us that Kagi Search is a
       | better backend for LLM-based search than Google/Bing because we
       | filter out the noise that confuses other models.
        
         | bitpush wrote:
         | > Primarily because google as it is today includes a massive
         | amount of noise and suffered from blowback/cross-contamination
         | as more LLM generated content pollute information truth.
         | 
          | I'm not convinced about this. If the strategy is "let's return
          | wikipedia.org as the most relevant result", that's not
          | sophisticated at all. In fact, it only worked for a very
          | narrow subset of queries. If I search for 'top luggage for
          | solo travel', I don't want to see Wikipedia, and I don't know
          | how Kagi will be any better.
        
           | VHRanger wrote:
           | (Kagi staff here)
           | 
            | Generally we do particularly well on product-research
            | queries [1] relative to other categories, because most poor
            | review sites are full of trackers and other stuff we
            | downrank.
            | 
            | However, there aren't public benchmarks on product search
            | for us to brag about, and frankly the SimpleQA digression in
            | this post made it long enough that it was almost cut.
           | 
           | 1. (Except hyper local search like local restaurants)
        
           | viraptor wrote:
            | They wrote "returned the relevant Wikipedia page higher" and
           | not "wikipedia.org as the most relevant result" - that's an
           | important distinction. There are many irrelevant Wikipedia
           | pages.
        
         | clearleaf wrote:
         | Maybe if Google hears this they will finally lift a finger
         | towards removing garbage from search results.
         | 
         | Hey Google, Pinterest results are probably messing with AI
         | crawlers pretty badly. I bet it would really help the AI if
         | that site was deranked :)
         | 
         | Also if this really is the case, I wonder what an AI using
         | Marginalia for reference would be like.
        
           | viraptor wrote:
           | > Maybe if Google hears this they will finally lift a finger
           | towards removing garbage from search results.
           | 
            | It's likely they can filter the results for their own agents
            | but will leave other results as they are. Half the issue with
            | normal results is their ads - that's not going away.
        
           | sroussey wrote:
            | There are several startups providing web search solely for
            | AI agents. Not sure any agent uses Google for this.
        
           | MangoToupe wrote:
           | > Maybe if Google hears this they will finally lift a finger
           | towards removing garbage from search results.
           | 
           | They spent the last decade and a half encouraging the
           | proliferation of garbage via "SEO". I don't see this
           | reversing.
        
       | daft_pink wrote:
        | Not for nothing, but I wish there were an anonymized AI built
        | into Kagi that was able to have a normal conversation about
        | sexual topics or search for pornographic content, like a
        | safe-search-off function.
        | 
        | I understand the safety needs around things like LLMs not
        | helping build nuclear weapons, but it would be nice to have a
        | frontier model that could write or find porn.
        
         | VHRanger wrote:
          | You'll want de-censored models like Cydonia for that -- they
          | can be found on OpenRouter, or through something like Msty.
        
       | HotGarbage wrote:
       | I really wish Kagi would focus on search and not waste time and
       | money on slop.
        
         | 0x1ch wrote:
         | This is building on top of the existing core product, so the
         | output is directly tied to the quality of their core search
          | results being fed into the assistants. Overall, I really
          | enjoy all of their AI products, and I use their prompt
          | assistant frequently for quick research tasks.
         | 
          | It does miss occasionally, or I feel like "that was a waste
          | of tokens" after a bad response, but overall I like
          | supporting Kagi's current mission in the market of AI tools.
        
         | VHRanger wrote:
         | It's not -- this was posted literally yesterday as a position
         | statement on the matter (see early paragraphs in OP):
         | 
         | https://blog.kagi.com/llms
         | 
          | Kagi is treating LLMs as potentially useful tools, to be used
          | with their deficiencies in mind and with respect for user
          | choices.
         | 
         | Also, we're explicitly fighting against slop:
         | 
         | https://blog.kagi.com/slopstop
        
         | drewda wrote:
          | What they're saying in this post is that they're designing
          | these LLM-based features to support search.
          | 
          | The post describes how their use case is finding high-quality
          | sources relevant to a query and providing summaries with
          | references/links to the user (not generating long-form
          | "research reports").
         | 
         | FWIW, this aligns with what I've found ChatGPT useful for: a
         | better Google, rather than a robotic writer.
        
           | theoldgreybeard wrote:
           | I'm sure Google also says they built "AI mode" to "support
           | search".
           | 
           | Their search is still trash.
        
             | esafak wrote:
             | Except the AI mode filters out the bad results for you :)
        
         | barrell wrote:
         | If you look at my post history, I'm the last person to defend
         | LLMs. That being said, I think LLMs are the next evolution in
         | search. Not what OpenAI and Anthropic and xAI are working on -
         | I think all the major models are moving further and further
         | away from that with the "AI" stuff. But the core technology is
         | an amazing way to search.
         | 
         | So I actually find it the perfect thing for Kagi to work with.
         | If they can leverage LLMs to improve search, without getting
          | distracted by the "AI" stuff, there's tons of potential value.
          | 
          | Not saying that's what this is... but if there's any company
          | I'd want playing with LLMs, it's probably Kagi.
        
           | skydhash wrote:
            | A better search would be rich metadata and powerful filter
            | tools, not a result summarizer. When I search, I want to
            | find stuff; I don't want an interpretation of what was
            | found.
        
         | bigstrat2003 wrote:
         | Same, though in fairness as long as they don't force it on me
         | (the way Google does) and as long as the real search results
         | don't suffer because of a lack of love (which so far they
         | haven't), then it's no skin off my back. I think LLMs are an
         | abysmal tool for finding information, but as long as the actual
         | search feature is working well then I don't care if an LLM
         | option exists.
        
       | itomato wrote:
       | I'm seeing a lot of investment in these things that have a short
       | shelf life.
       | 
       | Agents/assistants but nothing more.
        
         | VHRanger wrote:
          | We're building tools that we find useful, and we hope others
          | find them useful too. See notes on our view of LLMs and their
          | flaws:
         | 
         | https://blog.kagi.com/llms
        
         | ugurs wrote:
         | Why do you think the shelf life is short?
        
       | natemcintosh wrote:
       | As a Kagi subscriber, I find this to be mostly useful. I'd say I
       | do about 50% standard Kagi searches, 50% Kagi assistant
       | searches/conversations. This new ability to change the level of
       | "research" performed can be genuinely useful in certain contexts.
       | That said, I probably expect to use this new "research assistant"
       | once or twice a month.
        
         | VHRanger wrote:
          | I'd say the most useful part for me is appending ? / !quick /
          | !research to a query directly from the browser search bar.
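          | 
          | e.g. a query like "best e-ink tablets !research" typed
          | straight into the search bar should kick off the research
          | assistant.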
        
       | ceroxylon wrote:
       | Kagi reminds me of the original search engines of yore, when I
       | could type what I want and it would appear, and I could go on
       | with my work/life.
       | 
        | As for the people who claim this will create/introduce slop:
        | Kagi is one of the few platforms actively fighting low-quality
        | AI-generated content with their community-fueled "SlopStop"
        | campaign.[0]
       | 
       | Not sponsored, just a fan. Looking forward to trying this out.
       | 
       | [0] https://help.kagi.com/kagi/features/slopstop.html
        
       | iLoveOncall wrote:
       | The fact that people applaud Kagi taking the money they gave for
       | search to invest it in bullshit AI products and spit on Google's
       | AI search at the same time tells you everything you need to know
       | about HackerNews.
        
         | VHRanger wrote:
         | We're explicitly conscious of the bullshit problem in AI and we
          | try to focus on only building tools we find useful. See our
          | position statement on the matter, posted yesterday:
         | 
         | https://blog.kagi.com/llms
        
           | iLoveOncall wrote:
           | Your words don't match your actions.
           | 
           | And to be clear you shouldn't build the tools that YOU find
           | useful, you should build the tools that your users, which pay
           | for a specific product, find useful.
           | 
            | You could have LLMs that are actually 100% accurate in their
            | answers and it would not matter at all to what I am raising
           | here. People are NOT paying Kagi for bullshit AI tools,
           | they're paying for search. If you think otherwise, prove it,
           | make subscriptions entirely separate for both products.
        
             | freediver wrote:
              | Kagi founder here. We are moving to a future where these
              | subscriptions will be separate. Even today, more than 80%
              | of our members use Kagi Assistant and our other
              | AI-supported products, so saying "people are NOT paying
              | Kagi for bullshit AI tools" is not accurate, mostly in the
              | sense that we are not in the business of creating bullshit
              | tools. Life is too short for that. I also happen to like
              | the Star Trek version of the future, where smart computers
              | we can talk to exist. I also like that Star Trek is still
              | 90% human drama and 10% technology quietly working in the
              | background in service of humans - and this is the kind of
              | future I would like to build towards and leave for my
              | children. Having the most accurate search in the world
              | that has users' best interest in mind is a big part of it,
              | and that is not going anywhere.
        
               | iLoveOncall wrote:
               | > I also happen to like Star Trek version of the future,
               | where smart computers we can talk to exist [...], this is
               | the kind of future I would like to build towards
               | 
               | Well if that doesn't seal the deal in making it clear
               | that Kagi is not about search anymore, I don't know what
               | does. Sad day for Kagi search users, wow!
               | 
               | > Having the most accurate search in the world that has
               | users' best interest in mind is a big part of it
               | 
               | It's not, you're just trying to convince yourself it is.
        
           | grayhatter wrote:
           | > LLMs are bullshitters. But that doesn't mean they're not
           | useful
           | 
           | > Note: This is a personal essay by Matt Ranger, Kagi's head
           | of ML
           | 
           | I appreciate the disclaimer, but never underestimate
           | someone's inability to understand something, when their job
           | depends on them not understanding it.
           | 
           | Bullshit isn't useful to me, I don't appreciate being lied
           | to. You might find use in declaring the two different, but
           | sufficiently advanced ignorance (or incompetence) is
           | indistinguishable from actual malice, and thus they should be
           | treated the same.
           | 
           | Your essay, while well written, doesn't do much to convince
           | me any modern LLM has a net positive effect. If I have to
            | duplicate all of its research to verify none of it is
           | bullshit, which will only be harder after using it given the
           | anchoring and confirmation bias it will introduce... why?
        
         | w10-1 wrote:
         | Do you have any evidence that the AI efforts are not being
         | funded by the AI product, Kagi Assistant? I would expect the
         | reverse: the high-margin AI products are likely cross-
         | subsidizing the low-margin search products and their sliver of
         | AI support.
        
           | stefan_ wrote:
           | High-margin AI products? Yes the world is just filled with
           | those!
        
       | bananapub wrote:
       | regular reminder: kagi is - above all else - a really really good
       | search engine, and if google/etc, or even just the increasingly
       | horrific ads-ocracy make you sad, you should definitely give it a
       | go - the trial is here: https://kagi.com/pricing
       | 
       | if you like it, it's only $10/month, which I regrettably spend on
       | coffee some days.
        
         | skydhash wrote:
          | I know that the price hasn't changed for a while, but I would
          | pay for unlimited search and no AI.
        
         | iLoveOncall wrote:
         | > above all else
         | 
         | What they've been building for the past couple of years makes
         | it blindingly clear that they are definitely not a search
         | engine *above all else*.
         | 
         | Don't believe me? Check their CEO's goal:
         | https://news.ycombinator.com/item?id=45998846
        
       | AuthAuth wrote:
       | Kagi is already expensive for a search engine. Now I know part of
       | my subscription is going towards funding AI bullshit. And I know
        | the cost of that AI bullshit will get jacked up and force the
        | Kagi sub price up as well. I'm so tired of AI being forced
       | into everything.
        
         | progval wrote:
          | These are only available on the Ultimate tier. If (like me)
          | you don't care about the LLMs, there's no reason to be on the
          | Ultimate tier, so you don't pay for it.
        
         | johnnyanmac wrote:
         | >expensive for a search engine.
         | 
         | As in, not "free"?
         | 
         | Either way, I guess we'll see how this affects the service.
        
       | ranyume wrote:
       | I used quick research and it was pretty cool. A couple of caveats
       | to keep in mind:
       | 
        | 1. It answers using only the crawled sites; you can't make it
        | crawl a new page.
        | 
        | 2. It doesn't use a page's search function automatically.
        | 
        | This is expected, but it doesn't hurt to keep in mind. I think
        | it'd be pretty useful: you ask for recent papers on a site, the
        | engine could use Hacker News's search function, and then Kagi
        | would crawl the page.
        
       | smallerfish wrote:
        | I tried a prompt that consistently gets Gemini to badly
        | hallucinate, and Kagi responded correctly.
       | 
       | Prompt: "At a recent SINAC conference (approx Sept 2025) the
       | presenters spoke about SINAC being underresourced and in crisis,
       | and suggested better leveraging of and coordination with NGOs.
       | Find the minutes of the conference, and who was advocating for
       | better NGO interaction."
       | 
       | The conference was actually in Oct 2024. The approx date in
       | parens causes Gemini to create an entirely false narrative, which
       | includes real people quoted out of context. This happens in both
       | Gemini regular chat and Gemini Deep Research (in which the
       | narrative gets badly out of control).
       | 
       | Kagi reasonably enough answers: "I cannot find the minutes of a
       | SINAC conference from approximately September 2025, nor any
       | specific information about presenters advocating for better NGO
       | coordination at such an event."
        
       ___________________________________________________________________
       (page generated 2025-11-20 23:00 UTC)