[HN Gopher] Interview with Viktor Lofgren from Marginalia Search
       ___________________________________________________________________
        
       Interview with Viktor Lofgren from Marginalia Search
        
       Author : luu
       Score  : 21 points
       Date   : 2023-11-30 07:47 UTC (15 hours ago)
        
 (HTM) web link (nlnet.nl)
 (TXT) w3m dump (nlnet.nl)
        
       | OfSanguineFire wrote:
        | I wonder if the arrival of ChatGPT has sort of taken the wind out
        | of Marginalia's sails as a human-directed search engine. It seems
        | likely that the future of answering one's questions using the
        | internet is an LLM giving a straightforward, concise answer that
        | is free of the quirks of a human author. If so, there is less
        | motivation to search for personal websites and read website
        | makers' own writing.
       | 
       | For example, imagine an old-school hobbyist website that contains
       | information about some obscure band or author that can't be found
       | elsewhere on the web, and doesn't readily show up in a Google
       | search. Yet at the same time the author writes terrible prose,
       | uses annoying HTML/CSS, or goes into tiresome political rants,
       | etc. Instead of using a Marginalia-like search engine to discover
       | those sites and read them directly, wouldn't it be a superior
       | experience to have an LLM gorge on all those sites and then tell
       | you just the facts that you care about?
        
         | pomstazlesa wrote:
         | No
        
         | codetrotter wrote:
         | The pendulum always swings.
         | 
         | 10, 15 years later and there will be a widespread "rediscovery"
         | of the human made web by those who grew up with LLMs.
        
         | slindsey wrote:
          | Based on my experience with Large Language Models (LLMs) so far,
          | I see more value in niche sites that Marginalia will help the
          | user find.
         | 
          | An LLM regurgitates what it reads, true or false. There's a
          | bias toward things it sees more often. There's randomness in
          | the response. Sure, humans are error-prone, but a known problem
          | with computer responses is that humans tend to think "a
          | computer did this so it must be right." And LLMs are not
          | pumping out truly thoughtful answers but simply putting a
          | string of probable words together.
         | 
         | The internet is vast and is increasingly filled with garbage,
         | stolen and duplicated data, and monetization. It's nice to be
         | able to look in the nooks and crannies of the internet for
         | data.
         | 
          | I'm actually more interested in sites where someone personally
          | curates and vets links they find interesting than in what's
          | found through Google or ChatGPT.
        
       ___________________________________________________________________
       (page generated 2023-11-30 23:00 UTC)