[HN Gopher] Experimenting with LLMs to Research, Reflect, and Plan
___________________________________________________________________
Experimenting with LLMs to Research, Reflect, and Plan
Author : gk1
Score : 64 points
Date : 2023-04-12 18:53 UTC (4 hours ago)
(HTM) web link (eugeneyan.com)
(TXT) w3m dump (eugeneyan.com)
| tudorw wrote:
| I think something akin to a mashup of Engelbart's augmentation,
| Nelson's Xanadu (r), and Bucky's tensegrity system would make a
| great accompanying knowledge management system for managing
| branching conversations with AI; after a while, handling the
| generated content becomes a task in itself. Visualising the
| created data would be ace.
| tudorw wrote:
| 'Sparks of AGI' https://youtu.be/qbIk7-JPB2c
| summarity wrote:
| > One solution is to ensemble semantic search with keyword
| search. BM25 is a solid baseline when we expect at least one
| keyword to match. Nonetheless, it doesn't do as well on shorter
| queries where there's no keyword overlap with the relevant
| documents--in this case, averaged keyword embeddings may perform
| better. By combining the best of keyword search and semantic
| search, we can improve recall for various types of queries.
|
| Oh hey I have a demo of that here: https://findsight.ai
|
| For it, I wrote a custom search engine and KNN index
| implementation that ranks and merges results across three
| stages (labels, full-text, embedding). To speed up retrieval,
| the OpenAI embeddings are stored as SuperBit signatures
| instead. Rank merging turned out to be a really hard problem.
| motoboi wrote:
| Wait. What!?! This is amazing. Did you write about that?
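For the SuperBit signatures mentioned above, here is a minimal
sketch of the general idea (Ji et al., 2012): random projection
hyperplanes are orthogonalized in batches, each bit records which
side of a hyperplane the embedding falls on, and the Hamming
distance between two signatures approximates the angle between
the original vectors. The dimensions and bit counts below are
illustrative assumptions, not findsight.ai's actual settings:

    # Hypothetical sketch of compressing float embeddings into
    # binary SuperBit signatures for fast approximate KNN.
    import numpy as np

    def superbit_planes(dim, n_bits, batch=None, seed=0):
        """Random hyperplanes, orthogonalized in batches of <= dim."""
        batch = batch or min(dim, n_bits)
        rng = np.random.default_rng(seed)
        planes = rng.standard_normal((n_bits, dim))
        for start in range(0, n_bits, batch):
            block = planes[start:start + batch]
            q, _ = np.linalg.qr(block.T)  # Gram-Schmidt via QR
            planes[start:start + batch] = q.T[:len(block)]
        return planes

    def signature(vec, planes):
        """One bit per hyperplane: which side the vector falls on."""
        return (planes @ vec > 0).astype(np.uint8)

    def estimated_cosine(sig_a, sig_b):
        """cos(pi * fraction of differing bits) estimates the
        cosine similarity of the original float vectors."""
        frac = np.mean(sig_a != sig_b)
        return np.cos(np.pi * frac)

    dim, n_bits = 1536, 256   # e.g. ada-002 embeddings -> 256 bits
    planes = superbit_planes(dim, n_bits)
    a = np.random.default_rng(1).standard_normal(dim)
    b = a + 0.3 * np.random.default_rng(2).standard_normal(dim)
    sa, sb = signature(a, planes), signature(b, planes)
    true_cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    print(f"true cosine {true_cos:.3f}, "
          f"estimated {estimated_cosine(sa, sb):.3f}")

The payoff is that each 1536-float embedding shrinks to 256 bits,
and similarity becomes a cheap XOR-and-popcount, at the cost of
some ranking precision; the orthogonalization is what distinguishes
SuperBit from plain sign-random-projection hashing and reduces the
variance of the estimate.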
___________________________________________________________________
(page generated 2023-04-12 23:01 UTC)