[HN Gopher] Show HN: Chrome extension to summarize blogs and art...
___________________________________________________________________
Show HN: Chrome extension to summarize blogs and articles using
ChatGPT
Author : your_challenger
Score : 79 points
Date : 2022-12-05 18:05 UTC (4 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| Glench wrote:
| If you want to monetize it, you could look into
| https://extensionpay.com
| nathell wrote:
| Related: Autosummarized HN (using GPT-3, not ChatGPT)
| https://danieljanus.pl/autosummarized-hn
| kornork wrote:
| Great idea!
|
| I changed the prompt to this: "Rewrite this for brevity, in
| outline form:"
|
| I prefer the responses this way, rather than the 3rd person book
| report style the other prompt returns.
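|
| If you want to experiment with your own variants, the pattern
| is just prepending an instruction to the extracted page text.
| A tiny TypeScript sketch (the names are illustrative, not the
| extension's actual code):
|
|   // Prepend the summarization instruction to the article text.
|   const PROMPT = "Rewrite this for brevity, in outline form:";
|
|   function buildPrompt(articleText: string): string {
|     return `${PROMPT}\n\n${articleText.trim()}`;
|   }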
| genewitch wrote:
| Anyone else remember Copernic Summarizer? I miss that. When
| are we getting a self-hosted "GPT-alike"? Could that
| "federated search engine" project from a few years ago help
| with training one?
| rcconf wrote:
| Sorry if this is off-topic, but ChatGPT is blowing my mind.
| I'm using it to write my Christmas cards this year, and it's
| already made some funny ones.
|
| Dear <Manager>
|
| Wishing you a very Merry Christmas and a Happy New Year! May your
| days be filled with joy, laughter, and lots of eggnog. Speaking
| of eggnog, have you heard the one about the manager who tried to
| manage a team of developers? He kept telling them to "commit" to
| their work, but they just kept "pushing" him aside.
|
| Cheers, <Developer>
| c7DJTLrn wrote:
| I've asked it to clean up my code, including Makefiles and
| stuff. A lot of the output is way cleaner and higher quality.
| Maybe that says more about me than about ChatGPT.
|
| This is scaring the shit out of me.
| worldsayshi wrote:
| Fascinating. Do you just feed it the contents of a file and
| ask it to clean it?
| lolive wrote:
| I ask it/him/her about heroes of the Forgotten Realms (from
| Dungeons and Dragons). And it/he/she is pretty aware of the
| lore. [much more than I am!]
| drivers99 wrote:
| Well, if it doesn't know, it'll just make something up.
| elashri wrote:
| Sounds great. Now I'll try to make ChatGPT convert the code
| to work with Firefox.
| your_challenger wrote:
| Awesome! Raise a PR and I'll merge
| satvikpendem wrote:
| https://extensionworkshop.com/documentation/develop/porting-...
|
| Should be fairly straightforward, take a look.
| your_challenger wrote:
| I've cross-posted it on Twitter [1] with a video.
|
| [1] https://twitter.com/clamentjohn/status/1599827373008244736
| Imnimo wrote:
| How do you decide whether the article is too long to fit in
| ChatGPT's context window?
| your_challenger wrote:
| Good question. I tested it manually with a few articles I
| could find. If you find a web page that's too large for
| ChatGPT, let me know; I can split it into multiple batches and
| ask ChatGPT to summarize once I'm done.
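|
| Roughly what I have in mind (a sketch, not what's shipped; the
| character budget is a crude stand-in for a real token count):
|
|   // Split text into chunks that each fit the model's context,
|   // breaking on paragraph boundaries where possible.
|   function splitIntoChunks(text: string, maxChars = 8000): string[] {
|     const chunks: string[] = [];
|     let current = "";
|     for (const para of text.split("\n\n")) {
|       if (current && current.length + para.length + 2 > maxChars) {
|         chunks.push(current);
|         current = "";
|       }
|       current += (current ? "\n\n" : "") + para;
|     }
|     if (current) chunks.push(current);
|     return chunks;
|   }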
| gamegoblin wrote:
| ChatGPT's context window is 8192 tokens. A token is about 3-4
| characters. OpenAI has an open-source tokenizer you can
| download, too, if you want the exact token count for a body of
| text.
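|
| For example, with the gpt-3-encoder package from npm (one port
| of OpenAI's tokenizer; adjust the limit to whatever your
| model's documented context actually is):
|
|   import { encode } from "gpt-3-encoder";
|
|   // Count tokens to check whether a page fits the window.
|   function fitsInContext(text: string, limit = 8192): boolean {
|     return encode(text).length <= limit;
|   }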
| posterboy wrote:
| Joke's on you, I don't need summaries or articles to upvote a
| headline.
|
| Do you do double entendres though?
| beepbooptheory wrote:
| Why does this use case make sense for ChatGPT instead of just
| vanilla gpt3?
| popinman322 wrote:
| IIRC, ChatGPT is based on GPT-3.5 (likely an even larger
| model) rather than GPT-3. It's also been refined a bit using
| reinforcement learning.
|
| I've noticed that when I ask ChatGPT to determine the type of a
| variable in a given code block, its reasoning has fewer holes
| than GPT3 for the same prompt. Stands to reason that other
| results will be similarly refined.
|
| It also doesn't appear to have a token limit? Not sure how that
| feat was accomplished.
| jamager wrote:
| If it works well, it could actually be very useful for longer
| pieces.
|
| I have read so many books where a few genuinely good ideas are
| hammered on for 200+ pages (just to justify the cost of
| printing, satisfy industry standards, or whatever).
|
| A half-decent summary of all those would be of real value: get
| 80% of the value in 20% (or less) of the time.
| onlyrealcuzzo wrote:
| Isn't this available for most books already?
|
| I think the problem is there are examples and anecdotes and
| whatever scattered throughout the book that make those ideas
| connect for you.
|
| And this is different for everyone.
|
| Maybe an ML model you train yourself on your highlights would
| be able to find the stuff that will connect for you, but I'm
| skeptical that enough people read & highlight enough to train
| ML models to do this (or that it would even work).
| jamager wrote:
| Yes, there are a number of services, and I would happily pay
| for them. But either they have a very small catalog, or their
| summaries are too short, or both.
| jerrygoyal wrote:
| Doesn't Blinkist do that?
| petters wrote:
| Would be difficult, since GPT-3's context window is small
| compared to a book. Around 8,000 tokens, I think?
|
| Could possibly be done by iteratively summarizing section by
| section, but that would give suboptimal results.
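|
| Roughly a map-reduce over the book. A sketch only, where
| summarize() stands in for whatever model call you use:
|
|   // Summarize each section, then summarize the combined
|   // summaries; recurse while the result is still too long.
|   async function summarizeLong(
|     sections: string[],
|     summarize: (text: string) => Promise<string>,
|     maxChars = 8000
|   ): Promise<string> {
|     const partials = await Promise.all(sections.map(summarize));
|     const combined = partials.join("\n\n");
|     if (combined.length <= maxChars) return summarize(combined);
|     return summarizeLong(partials, summarize, maxChars);
|   }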
| your_challenger wrote:
| I could expand the product to handle content that large. Can
| you link me to something that size?
| jamager wrote:
| It's all copyrighted material, so I don't have links to the
| actual works, but some books come to mind that I would have
| enjoyed much more at 20% of the page count:
|
| - The courage to be disliked (Ichiro Kishimi)
|
| - The simple path to wealth (J.L. Collins)
|
| - Peak (K Anders Ericsson)
|
| - Happiness (Matthieu Ricard)
|
| - Clean Code (Robert C Martin)
| geoelectric wrote:
| Not that they'd be the particular books OP wants, but if
| you're looking to summarize large content, perhaps grab it
| from Project Gutenberg? https://www.gutenberg.org/
| t_a_v_i_s wrote:
| How are you calling the API?
| roey2009 wrote:
| https://openai.com/api/
|
| Enjoy.
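|
| With text-davinci-003 it's a single POST. A sketch (error
| handling omitted; the prompt wording is just an example):
|
|   // Summarize text via OpenAI's completions endpoint.
|   async function summarize(text: string, apiKey: string) {
|     const res = await fetch(
|       "https://api.openai.com/v1/completions",
|       {
|         method: "POST",
|         headers: {
|           "Content-Type": "application/json",
|           Authorization: `Bearer ${apiKey}`,
|         },
|         body: JSON.stringify({
|           model: "text-davinci-003",
|           prompt: `Summarize the following:\n\n${text}`,
|           max_tokens: 256,
|           temperature: 0.3,
|         }),
|       }
|     );
|     const data = await res.json();
|     return data.choices[0].text.trim();
|   }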
| [deleted]
| amrrs wrote:
| I don't think ChatGPT is available via an API. Most unofficial
| APIs are headless browsers under the hood.
| your_challenger wrote:
| This guy [1] (on Twitter) says they are using Davinci 003 and
| claims it is what ChatGPT uses.
|
| [1]
| https://twitter.com/VarunMayya/status/1599736091946659845
| obert wrote:
| ChatGPT is in fact just a chat prompt on top of Davinci 3,
| plus a Markdown renderer.
| stevenhuang wrote:
| Do we know what the prompt is?
| jcims wrote:
| The POSTs of comments to ChatGPT reference davinci-002.
| georgehill wrote:
| I think they are using the same API as ChatGPT:
| https://github.com/clmnin/summarize.site/blob/0e4da39fa4355a...
|
| Is this even legal?
| your_challenger wrote:
| The only difference is that instead of copy-pasting into
| ChatGPT, you get a browser extension that does the job. Pretty
| convenient actually.
|
| So technically, we are still using
| https://chat.openai.com/chat, just without the UI.
| [deleted]
| bertman wrote:
| It's right there in the code.
|
| https://github.com/clmnin/summarize.site/blob/0e4da39fa4355a...
|
| It's a POST request with an access token from the browser's
| cache, made after the user has logged in with their OpenAI
| account.
| your_challenger wrote:
| Yup. You need to handle it as SSE (Server-Sent Events) though.
| Pretty simple actually (I'm OP, btw).
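|
| The gist of reading the stream (a generic SSE-over-fetch
| sketch, not the extension's exact code):
|
|   // Yield each "data:" payload from an SSE response body.
|   async function* readSSE(res: Response): AsyncGenerator<string> {
|     const reader = res.body!.getReader();
|     const decoder = new TextDecoder();
|     let buffer = "";
|     for (;;) {
|       const { done, value } = await reader.read();
|       if (done) break;
|       buffer += decoder.decode(value, { stream: true });
|       const lines = buffer.split("\n");
|       buffer = lines.pop()!; // keep any partial line around
|       for (const line of lines) {
|         if (line.startsWith("data: ")) yield line.slice(6);
|       }
|     }
|   }
|
| IIRC the stream finishes with a "[DONE]" payload, so the
| caller just stops there.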
| eulers_secret wrote:
| Looking over this, I cannot help but ask: "how much of this
| codebase could be generated by ChatGPT"?
| tluyben2 wrote:
| I was playing around [0] with GPT, and most of what I started
| with was written by ChatGPT, though with many fixes. The code
| it generated looked mostly OK, but it needed many fixes as it
| was off a lot, especially on API use, promises, etc. Because
| this is all throwaway by definition (and only for localhost
| use!), and only there to see/test/play with the differences
| from production stuff, it is pretty impressive how fast you
| can do things.
|
| I have the feeling, though, that Copilot makes fewer mistakes
| and learns my style better; ChatGPT keeps mixing styles even
| within the same session. You can prime the prompt and then it
| works a bit better in that case, I found.
|
| [0] https://github.com/tluyben/chatgpt-sqlite-server
___________________________________________________________________
(page generated 2022-12-05 23:01 UTC)