[HN Gopher] Show HN: Simply explain 20k concepts using GPT
___________________________________________________________________
Show HN: Simply explain 20k concepts using GPT
Hi HN! I made a tool that autogenerates simple, high-level
explanations of concepts and organizes them in a rough university-
course-like structure, so it's easier to see how a field fits
together. Currently it has about 20,000 concepts on a range of
topics, but that's just what I've generated so far; it should work
with more obscure topics in the future. I love learning about
random topics I don't have a good background in, like history or
linguistics, but it's hard to figure out what topics even exist in
certain fields (you don't know what you don't know) and what they
are about, so this was a way to get the high-level idea about
random things I wanted to know about. It also only uses the
information in the GPT model at the moment, so obviously the output
can't be trusted completely and you should definitely double-check
anything you read here by Googling. I'm thinking of doing the Bing
Chat approach for the next version and adding references, but I
don't have that yet. Hopefully someone else finds this useful even
if it's not perfect!
Author : kirill5pol
Score : 19 points
Date : 2023-03-31 22:24 UTC (35 minutes ago)
(HTM) web link (platoeducation.ai)
(TXT) w3m dump (platoeducation.ai)
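
A minimal sketch of the kind of generation call the post describes,
using the OpenAI Python library as it existed in early 2023; the
prompt wording, helper name, and model choice are illustrative
assumptions, not the author's actual code:

    import openai  # 0.27-era API; reads OPENAI_API_KEY from the environment

    def explain_concept(concept: str, field: str) -> str:
        # Ask GPT for a simple, high-level explanation of one concept.
        # The prompt below is a guess at the style of prompt such a
        # tool might use, not the real one.
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "You write simple, high-level explanations "
                            "of academic concepts for curious beginners."},
                {"role": "user",
                 "content": f"Explain the concept '{concept}' from the "
                            f"field of {field} in a few short paragraphs, "
                            f"assuming no prior background."},
            ],
            temperature=0.3,  # low temperature for more consistent output
        )
        return response["choices"][0]["message"]["content"]

    print(explain_concept("Grimm's law", "linguistics"))

Generating ~20,000 concepts would then just be a loop over a concept
list, with the results grouped into the course-like hierarchy.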
| radicaldreamer wrote:
| The amount of content that's going to be generated in the next
| few years is going to absolutely drown anything humanity has
| created thus far. I would not be surprised to see a wholesale
| return to analog/"old" knowledge once the vast majority of what's
| on the network becomes unreliable/generated.
| blakers95 wrote:
| And what happens when the models themselves are trained
| primarily on the data they produced?
| amelius wrote:
| Sounds like you reinvented Wikipedia's "Simple English" pages,
| but in a way that can't be trusted very much.
| jszymborski wrote:
| Big agree. It feels wrong to dissuade someone from building
| something, since this is likely an exercise in creativity, but I
| have to say this goes on my "Bad Use of LLMs" list.
|
| Sure, it's possible to "learn" from LLMs in that they might spark
| an idea you might not have thought of otherwise, but taking the
| output of an LLM as a source of knowledge is exactly what you
| shouldn't use it for.
| thih9 wrote:
| Another data point: I recently asked ChatGPT for TV show
| recommendations and got helpful (and not made-up) results.
|
| What was the subject and what prompt have you used when asking
| about the books?
|
| (looks like the parent comment has been edited; earlier it
| mentioned asking GPT for good book recommendations and getting
| made-up results)
| amelius wrote:
| I don't recall, but I remember posting it here and some other
| people tried it as well and noticed the same. Anyway, I
| removed the "book" part of my comment briefly after posting
| because I thought it detracted from the main point of it.
| kirill5pol wrote:
| Yeah, it's definitely not at the level where it can be fully
| trusted yet, but I found GPT (and this) quite helpful for
| learning about something new where I don't have any background
| in. That's also why I think that something like showing the
| source of something might be a good way to improve the trust
| (although also still not perfect...)
| nickvec wrote:
| "A collection of notes on (allegedly) written by Plato himself."
|
| Might want to remove "on" to make the sentence grammatically
| correct.
| kirill5pol wrote:
| Oops, thanks for catching that!
___________________________________________________________________
(page generated 2023-03-31 23:00 UTC)