Posts by m8ta@fediscience.org
 (DIR) Post #ASraPfUE6aYAMPQcWe by m8ta@fediscience.org
       2023-02-20T07:04:49Z
       
       0 likes, 0 repeats
       
        @simon 🤔 Got into a heated discussion with someone who thought that LLMs should be free to tell people what they want to hear -- !!! Does freedom of speech extend to ever-larger organizations of capital, e.g. corporations, bots? I hold that at some scale an organization is compelled to disseminate the truth (if it dominates the probability distribution) or a distribution of opinions (if otherwise). LLMs are statistical machines, after all. This *could* break some nasty feedback loops.
       
 (DIR) Post #AXBrQUa3QH2IY6GKEi by m8ta@fediscience.org
       2023-06-29T17:20:36Z
       
       0 likes, 0 repeats
       
        @simon Did not know about Python's tokenize and ast modules -- very useful!
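
        A minimal sketch of the two standard-library modules in question (my own toy snippet, assuming CPython 3.9+, not code from the thread): tokenize exposes the lexical token stream, comments included, while ast exposes the parsed syntax tree.

        import ast
        import io
        import tokenize

        source = "total = price * (1 + tax_rate)  # gross amount\n"

        # tokenize: lexical view; comments and exact spellings are preserved.
        for tok in tokenize.generate_tokens(io.StringIO(source).readline):
            print(tok.type, repr(tok.string))

        # ast: syntactic view; names and operators as a tree, comments dropped.
        tree = ast.parse(source)
        print(ast.dump(tree, indent=2))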
       
 (DIR) Post #AXnLeaMTZ7srba9APo by m8ta@fediscience.org
       2023-07-17T19:25:32Z
       
       0 likes, 1 repeats
       
        Remarkable: LLaMA-Adapter is about as strong as GPT-4 chain-of-thought on the ScienceQA test set after only ~1 hour of fine-tuning. The model works by prepending a small set of trainable adaptation-prompt tokens (~1.2M parameters, gated to zero by default) to the upper layers of LLaMA, then fine-tuning on the 52k instructions from Alpaca. (ScienceQA = multi-modal science questions from elementary and high school curricula) https://arxiv.org/abs/2303.16199
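
        Mechanism sketch in PyTorch (my own toy, not the paper's code; the class name and the n_prompts default are made up): each adapted upper layer attends over a handful of learnable prompt vectors, and that output is scaled by a gate initialised to zero, so at step 0 the model is exactly the frozen LLaMA and the adapter is phased in during fine-tuning.

        import torch
        import torch.nn as nn

        class ZeroGatedPromptAttention(nn.Module):
            """Toy zero-gated adaptation prompt for one upper transformer layer."""
            def __init__(self, dim: int, n_prompts: int = 10):
                super().__init__()
                # Learnable adaptation prompts; the base model's weights stay frozen.
                self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)
                # Zero-initialised gate: the adapter contributes nothing at step 0.
                self.gate = nn.Parameter(torch.zeros(1))

            def forward(self, hidden: torch.Tensor) -> torch.Tensor:
                # hidden: (batch, seq, dim) activations from the frozen layer.
                b = hidden.size(0)
                prompts = self.prompts.unsqueeze(0).expand(b, -1, -1)      # (b, p, dim)
                scores = hidden @ prompts.transpose(1, 2) / hidden.size(-1) ** 0.5
                adapter = torch.softmax(scores, dim=-1) @ prompts          # (b, seq, dim)
                # tanh(0) == 0, so the prompt pathway is switched on gradually.
                return hidden + torch.tanh(self.gate) * adapter

        # At initialisation the layer is an exact identity on the frozen activations:
        x = torch.randn(2, 16, 64)
        assert torch.allclose(ZeroGatedPromptAttention(64)(x), x)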
       
 (DIR) Post #AY5zjs2joejjy5ZZrc by m8ta@fediscience.org
       2023-07-26T19:17:31Z
       
       0 likes, 0 repeats
       
        @simon @allafarce @apenwarr Yes!! It would be really amazing if taxpayers / constituents were able to do linguistic backprop on government, e.g. by asking a series of "why" questions (provided the answers aren't hallucinated...).