Post AcYLUDowwhczGCVe5Y by gray17@mastodon.social
 (DIR) Post #AcY4cAGSLzKXKIZph2 by simon@fedi.simonwillison.net
       2023-12-06T23:52:42Z
       
       0 likes, 0 repeats
       
       Today in LLMs are really weird: Anthropic found that adding “Assistant: Here is the most relevant sentence in the context:” to the end of a prompt was enough to dramatically increase Claude 2.1's ability to answer questions based on a single sentence buried deep within 200,000 tokens (500 pages) of text. https://www.anthropic.com/index/claude-2-1-prompting
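       A minimal sketch of the trick, assuming Anthropic's Python SDK and the
       legacy Text Completions format that Claude 2.1 uses; the file name and
       question are placeholders, not from the post:

       import anthropic

       client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

       # Placeholder stand-in for a ~200,000-token document.
       document = open("long_document.txt").read()

       # Claude 2.1 expects the Human/Assistant text-completion format. Ending
       # the prompt partway into the Assistant turn ("prefilling") steers the
       # model to begin by quoting the relevant sentence before it answers.
       prompt = (
           f"{anthropic.HUMAN_PROMPT} {document}\n\n"
           "What does the document say about X?"  # placeholder question
           f"{anthropic.AI_PROMPT} Here is the most relevant sentence in the context:"
       )

       response = client.completions.create(
           model="claude-2.1",
           max_tokens_to_sample=300,
           prompt=prompt,
       )
       print(response.completion)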
       
 (DIR) Post #AcY4tp9vYrjn58wa2a by wirepair@mastodon.social
       2023-12-06T23:55:53Z
       
       0 likes, 0 repeats
       
       @simon further proof this is all just non-deterministic black magic we clearly don’t understand :p
       
 (DIR) Post #AcYLUDowwhczGCVe5Y by gray17@mastodon.social
       2023-12-07T03:01:34Z
       
       0 likes, 0 repeats
       
       @simon this is fascinating, and it seems to me similar to how chain-of-thought prompting helps LLMs. My mental model of LLMs is still basically "spicy autocomplete": there are a billion mindless agents, each with their own proposal of what to say next. Normally, the dominant agents are majority concepts, and any agents "aware" of the unusual sentence are drowned out. But adding the context "most relevant sentence" changes who wins the plausible-completion battle.
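
       As a toy illustration of that completion battle (a sketch, assuming
       GPT-2 via Hugging Face transformers stands in for any autoregressive
       LLM; the fact, filler, and question are invented), the extra framing
       text visibly shifts the next-token distribution:

       import torch
       from transformers import GPT2LMHeadModel, GPT2Tokenizer

       tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
       model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

       def top_next_tokens(prompt, k=5):
           # Distribution over the model's next token, given the prompt so far.
           ids = tokenizer(prompt, return_tensors="pt").input_ids
           with torch.no_grad():
               logits = model(ids).logits[0, -1]
           probs = torch.softmax(logits, dim=-1)
           top = torch.topk(probs, k)
           return [(tokenizer.decode(int(i)), round(p.item(), 3))
                   for i, p in zip(top.indices, top.values)]

       fact = "Stray fact: the best thing to do in town is eat a sandwich. "
       filler = "The weather report follows the news. " * 50  # drown the fact out
       question = "What is the best thing to do in town?\n"

       # Same buried fact, two framings: the "most relevant sentence" suffix
       # changes which continuations win the next-token contest.
       print(top_next_tokens(fact + filler + question + "Answer:"))
       print(top_next_tokens(fact + filler + question +
                             "Here is the most relevant sentence in the context:"))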