[HN Gopher] Show HN: I made an Ollama summarizer for Firefox
       ___________________________________________________________________
        
       Show HN: I made an Ollama summarizer for Firefox
        
       Source: https://github.com/tcsenpai/spacellama
        
       Author : tcsenpai
       Score  : 120 points
        Date   : 2024-10-11 15:45 UTC (1 day ago)
        
 (HTM) web link (addons.mozilla.org)
 (TXT) w3m dump (addons.mozilla.org)
        
       | RicoElectrico wrote:
        | I've found that, for the most part, the articles I want
        | summarized are the ones that only fit in the largest-context
        | models such as Claude. Otherwise I can just skim-read the
        | article, possibly in reader mode for legibility.
       | 
        | Is Llama 2 a good fit considering its small context window?
        
         | tcsenpai wrote:
          | Personally I use llama3.1:8b or mistral-nemo:latest, which
          | have a decent context window (even if it's usually smaller
          | than the commercial ones). I am also working on a token
          | calculator / content-chunking method, but it is very early.
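A minimal sketch of the chunk-and-summarize idea mentioned above, for articles that exceed the model's context window. The helper names and the 4-characters-per-token estimate are assumptions for illustration, not SpaceLLama's actual implementation:

```python
def estimate_tokens(text: str) -> int:
    """Very rough token count: ~4 characters per token on average."""
    return max(1, len(text) // 4)

def chunk_text(text: str, max_tokens: int = 4096) -> list[str]:
    """Split text into paragraph-aligned chunks of at most ~max_tokens.

    A single paragraph larger than max_tokens is kept whole rather
    than split mid-paragraph.
    """
    chunks: list[str] = []
    current: list[str] = []
    current_tokens = 0
    for para in text.split("\n\n"):
        para_tokens = estimate_tokens(para)
        # Flush the running chunk before it would overflow the budget.
        if current and current_tokens + para_tokens > max_tokens:
            chunks.append("\n\n".join(current))
            current, current_tokens = [], 0
        current.append(para)
        current_tokens += para_tokens
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Each chunk would then be sent to Ollama with a "summarize this" prompt, and the per-chunk summaries joined and summarized once more.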
        
           | garyfirestorm wrote:
            | Why not llama3.2:3b? It has a fairly large context window
            | too.
        
             | reissbaker wrote:
             | I assume because the 8B model is smarter than the 3B model;
             | it outperforms it on almost every benchmark:
             | https://huggingface.co/meta-llama/Llama-3.2-3B
             | 
             | If you have the compute, might as well use the better model
             | :)
             | 
             | The 3.2 series wasn't the kind of leap that 3.0 -> 3.1 was
             | in terms of intelligence; it was just:
             | 
             | 1. Meta releasing multimodal vision models for the first
             | time (11B and 90B), and
             | 
             | 2. Meta releasing much smaller models than the 3.1 series
             | (1B and 3B).
        
         | reissbaker wrote:
         | I don't think this is intended for Llama 2? The Llama 3.1 and
         | 3.2 series have very long context windows (128k tokens).
        
         | tempodox wrote:
          | What about using a Modelfile for Ollama that tweaks the
          | context window size? I seem to remember parameters for that
          | in the Ollama GitHub docs.
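For reference, the parameter is `num_ctx`. A minimal Modelfile (the base model and size here are just examples) would look like:

```
# Modelfile: raise the context window for long articles
FROM llama3.1:8b
PARAMETER num_ctx 16384
```

Building it with `ollama create llama3.1-longctx -f Modelfile` gives a model name the extension can point at.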
        
       | donclark wrote:
       | If we can get this as the default for all the newly posted HN
       | articles please and thank you?
        
         | totallymike wrote:
         | I sincerely hope this never happens
        
         | ukuina wrote:
         | This is why I built https://hackyournews.com
         | 
         | It summarizes via Puter (free).
        
           | iJohnDoe wrote:
           | So cool! Thanks. Bookmarked.
        
       | chx wrote:
       | Help me understand why people are using these.
       | 
       | I presume you want information of some value to you otherwise you
        | wouldn't bother reading an article. Then you feed it to a
        | probabilistic algorithm, so you _cannot have_ any idea what
        | the output has to do with the input. Like
        | https://i.imgur.com/n6hFwVv.png - you can somewhat decipher
        | what this slop wants to be, but what if the summary leaves
        | out, invents, or inverts some crucial piece of info?
        
         | andrewmcwatters wrote:
         | People write too much. Get to the point.
        
           | chx wrote:
           | any point? regardless of what's written? does that work for
           | you?
        
             | garyfirestorm wrote:
              | Sometimes you don't have time to read the entirety of a
              | long article. You want a quick summary; some people are
              | poor at summarizing things in their head as they go and
              | can get lost in dense text. Extensions like these really
              | help me with headers and the structure I want to follow,
              | give a quick overview, and give me an idea of whether I
              | want to deep-dive further.
        
               | drdaeman wrote:
                | Sometimes it's not even an article, but a video. And
                | sometimes all you care about is a single tiny fact
                | from that video.
               | 
               | Although I don't think this particular summarizer works
               | for videos. And I don't think Ollama API supports audio
               | ingestion for transcription. There are some summarizers
               | that work with YouTube specifically (using automatic
               | subtitles).
        
               | tcsenpai wrote:
                | Speaking of, I also made a YouTube summarizer:
                | https://github.com/tcsenpai/youlama
        
             | 87m78m78m wrote:
             | Why don't you try using these tools yourself so you have an
              | understanding of them? People like to get shit
              | summarized; it's really not as deep as you are trying
              | to make it out to be.
        
           | throwup238 wrote:
           | Even if I want to read the entirety of a piece of long form
           | writing I'll often summarize it (with Kagi key points mode)
           | so that I know what the overall points are and can follow the
           | writing better. Too much long form writing is written like
           | some mystery thriller where the writer has to unpack an
           | entire storyline before they'll state their main thesis, so
           | it helps my reading comprehension to know what the point is
           | going in. The personal interest stories that precede the main
           | content always land better that way.
        
           | ranger_danger wrote:
           | I think you just insulted every journalist on Earth.
        
             | Spivak wrote:
             | It's really not that deep. There's writing you read for its
             | aesthetic merits and writing you read for its contents.
             | When you want the latter but the piece is written for the
             | former a summary fixes the mismatch.
        
             | seb1204 wrote:
              | Nowadays a lot of websites are written in a style that
              | goes on and on, dancing around the topic and adding
              | historical context, all in a terrible writing style,
              | just to lengthen the text for SEO. In such cases a
              | summary can be a good thing.
        
         | KaiMagnus wrote:
          | At least for me it's less about the individual article (in
          | that case I agree with you) and more about the case where
          | you have 25 articles.
         | 
         | Now you can't possibly get through all of them and have to
         | decide which of those could be worth your time. And in that
         | case, the tradeoff makes sense.
        
         | InsideOutSanta wrote:
         | "Then you feed it to a probabilistic algorithm and so you can
         | not have any idea what the output has to do with the input"
         | 
         | This is theoretically true, but to me at least, practically
         | irrelevant. In all cases, for most values of the word "all",
         | the summary does tell you what the article contains.
         | 
         | For me at least, the usefulness is not that the summary
         | replaces reading the article. Instead, it's a signal telling me
         | whether I should read it in the first place.
        
       | asdev wrote:
       | I built a chrome version of this for summarizing HN comments:
       | https://github.com/built-by-as/FastDigest
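For the curious, an HN story's full comment tree is easy to pull from the public Algolia HN API before summarizing it. A minimal sketch (the endpoint is Algolia's real API; the helper names are mine, not FastDigest's actual code):

```python
import json
import urllib.request

def fetch_item(item_id: int) -> dict:
    """Fetch a story and its nested comment tree from the Algolia HN API."""
    url = f"https://hn.algolia.com/api/v1/items/{item_id}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def flatten_comments(item: dict) -> list[str]:
    """Depth-first flatten of the nested 'children' comment tree."""
    texts = []
    for child in item.get("children", []):
        if child.get("text"):
            texts.append(child["text"])
        texts.extend(flatten_comments(child))
    return texts
```

The joined comment texts can then be fed to the summarizing model, chunked if the thread is long.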
        
         | larodi wrote:
          | Thank you, I've been thinking about this for a long time
          | while copying lots of conversations back and forth to
          | Claude.
        
           | asdev wrote:
            | No problem! Hope it works out for you. It currently only
            | supports Ollama and OpenAI, but it should be pretty easy
            | to extend to Claude and other APIs.
        
       | oneshtein wrote:
        | I've been using PageAssist with Ollama for two months, but I
        | have never used the "Summarise" option in the menu. :-/
        
         | tcsenpai wrote:
         | TIL, I am experimenting with PageAssist right now
        
       ___________________________________________________________________
       (page generated 2024-10-12 23:01 UTC)