Posts by matthewmaybe@sigmoid.social
 (DIR) Post #ATdsx4bgR0huvyMt72 by matthewmaybe@sigmoid.social
       2023-03-15T14:23:09Z
       
       0 likes, 0 repeats
       
       @TedUnderwood I am just a guy on the Internet, but the way around this would be if combinations and/or juxtapositions of lessons from existing training data performed unreasonably well on whatever the target task or metric is, and were consequently rewarded in RLHF loops.
       
 (DIR) Post #ATgerYEmXXdLD60b3o by matthewmaybe@sigmoid.social
       2023-03-16T22:29:25Z
       
       0 likes, 0 repeats
       
       @TedUnderwood IDK, I actually find this phase rather boring. I enjoyed the trippy funhouse-mirror version of generative AI and the indie-rock vibe of being the first to know about a cool little fine-tune someone trained. Now I feel like, since it's not feasible to compete with the big firms on raw capabilities, it's time for peeps like me to actively work on the vaguely mystical and distinctly unprofitable task of giving quirky bots built on lesser LLMs something resembling sentience.
       
 (DIR) Post #AUIElPHu6G8VFzG4yu by matthewmaybe@sigmoid.social
       2023-04-04T01:35:58Z
       
       0 likes, 0 repeats
       
       @garymarcus as a random observer on the Internet, I have to agree, despite possibly having boosted some of those arguments against the open letter. I also tried to sign the letter but was told that I couldn't because I am unimportant. Perhaps this focus on experts and their views is part of the problem--because, after all, experts sort of build their careers on attracting attention by arguing with other experts... maybe it's time instead to start building a mass movement around these issues?
       
 (DIR) Post #Ac32JAmzxHCTdxrBh2 by matthewmaybe@sigmoid.social
       2023-11-22T00:32:20Z
       
       0 likes, 0 repeats
       
       @Wolven 👀 https://sigmoid.social/@lumanai/111449472856294350
       
 (DIR) Post #AdeDmATaLT0r2CpWUK by matthewmaybe@sigmoid.social
       2024-01-08T20:57:11Z
       
       0 likes, 1 repeat
       
       @Wolven there are two noteworthy experiments with training LLMs on copyright-free data: BigCode/Hugging Face's StarCoder and Microsoft's Phi-1.5, both of which have yielded such surprisingly good results that they have changed how people think about data quality vs. data quantity. As such, I'm not sure there is even a technical basis for OpenAI's claim anymore.
       
 (DIR) Post #AhmzoSHWFqfF7dyN72 by matthewmaybe@sigmoid.social
       2024-05-11T18:01:34Z
       
       0 likes, 0 repeats
       
       @Wolven bookmarked for later watching!
       
 (DIR) Post #Aj5okaXsOWhFS5t4u8 by matthewmaybe@sigmoid.social
       2024-06-19T17:48:51Z
       
       0 likes, 0 repeats
       
       @timmarchman @dmehro @clive @WIRED oh funny; as a side project, I actually built a 100% copyright-safe AI chatbot that responds to queries based on Wikipedia searches--what the "if ChatGPT and Wikipedia had a kid" idea should've been, basically. Haven't had time to progress beyond proof of concept, but I'm happy to demo/discuss with anyone interested.
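
       The post above only names the idea, not how it works. Purely as an illustration (this is not the author's actual code), a Wikipedia-grounded responder could be sketched like this: retrieve passages for the query, then compose a reply that cites its source, with the live MediaWiki search call swapped out for a pluggable stub backend so the pipeline itself is self-contained. All names here (`Passage`, `answer`, `fake_search`) are hypothetical.

       ```python
       from dataclasses import dataclass
       from typing import Callable, List

       @dataclass
       class Passage:
           """One retrieved snippet plus the article it came from."""
           title: str
           extract: str

       def answer(query: str, search: Callable[[str], List[Passage]], top_k: int = 1) -> str:
           """Retrieve passages for the query and compose a sourced reply.

           `search` is pluggable: in a real deployment it would call the
           MediaWiki search API; here it can be any function returning Passages,
           which keeps every response traceable to a Wikipedia article.
           """
           passages = search(query)[:top_k]
           if not passages:
               return "I couldn't find anything on Wikipedia about that."
           return "\n".join(
               f'{p.extract} (source: Wikipedia, "{p.title}")' for p in passages
           )

       # Stub backend standing in for a live MediaWiki API call.
       def fake_search(q: str) -> List[Passage]:
           corpus = {
               "gopher": Passage(
                   "Gopher (protocol)",
                   "Gopher is a protocol for distributing documents.",
               ),
           }
           return [p for key, p in corpus.items() if key in q.lower()]

       print(answer("what is gopher?", fake_search))
       # → Gopher is a protocol for distributing documents. (source: Wikipedia, "Gopher (protocol)")
       ```

       Because the retriever is injected as a plain callable, the copyright-safety property lives entirely in the choice of backend: swap the stub for a real Wikipedia search and every answer remains a quoted, attributed extract rather than free-form generation.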