Posts by piccolbo@toot.community
 (DIR) Post #ATcD30LHuSx25qqRyC by piccolbo@toot.community
       2023-03-14T18:55:09Z
       
       0 likes, 0 repeats
       
       @simon The Alpaca demo seems to run in 27s regardless of how many tokens I provide, from 10 to 1000. How's that possible? It also failed badly at every test I tried (factorization, bio, summarization).
       
 (DIR) Post #ATcEiBvHrFjTpxt1hw by piccolbo@toot.community
       2023-03-14T19:14:44Z
       
       0 likes, 0 repeats
       
       @simon OK, and if I provide like 500 tokens? Why doesn't the time go up?
       
 (DIR) Post #ATcHCurlWZuNp11AYq by piccolbo@toot.community
       2023-03-14T19:12:42Z
       
       0 likes, 0 repeats
       
       @simon I keep getting bad answers from the Alpaca demo. For example, given the prompt "The prime factorization of 16 is" (the correct answer being 2 × 2 × 2 × 2), it completed: "16 = 1 × 1 × 1 × 1 × 1 × 1 × 1 × 1 × 1 × 1 × 1 × 1 × 1 × 1 × 1 × 1 ..." and kept repeating.
       
 (DIR) Post #ATiRuhzjoCE3Fk4DPU by piccolbo@toot.community
       2023-03-17T19:11:36Z
       
       0 likes, 0 repeats
       
       @simon @monkeyninja That's one side of the problem. The other is that TikTok exerts control over what users see, and several organizations have found evidence that it censors information unfriendly to China: https://en.wikipedia.org/wiki/Censorship_by_TikTok https://www.technologyreview.com/2021/07/13/1028401/tiktok-censorship-mistakes-glitches-apologies-endless-cycle/ https://www.theguardian.com/technology/2019/sep/25/revealed-how-tiktok-censors-videos-that-do-not-please-beijing
       Complementary to censoring is promoting material that furthers China's goals, such as fomenting extremism in the US. That is potentially concerning too, but I haven't seen evidence of it yet.
       
 (DIR) Post #AULhYoJ8jAsgoZxEkC by piccolbo@toot.community
       2023-04-05T17:40:47Z
       
       0 likes, 0 repeats
       
       @simon I can't think of an example of mankind building an artifact without knowing what it was going to be good or bad for. I've never heard of "emergent airplanes" or "emergent typewriters". A Frankenstein moment.
       
 (DIR) Post #AUOggi8E3aFBUrvsjw by piccolbo@toot.community
       2023-04-07T04:09:39Z
       
       0 likes, 0 repeats
       
       @simon It looks like it may be possible: https://arxiv.org/pdf/2212.03827.pdf (via Sam Bowman). In short, one can extract a truth probability from the internal state of an LM by fitting a model to be consistent with respect to negation: f(LMstate(A)) = 1 - f(LMstate(not A)). The LM appears to be implicitly estimating the truth probability of a sentence even when it considers false continuations more likely. If confirmed, this may allow the unsupervised creation of a truth-labeled training set.
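
       A minimal sketch of that consistency fit (my own illustration, not code from the paper): random tensors stand in for the hidden states LMstate(A) and LMstate(not A), the probe and variable names are assumptions, and the extra confidence term reflects my reading of how the paper avoids the trivial constant-0.5 solution.

       import torch

       torch.manual_seed(0)
       n, d = 256, 512                # number of statement pairs, hidden-state dimension
       h_pos = torch.randn(n, d)      # placeholder for LMstate(A)
       h_neg = torch.randn(n, d)      # placeholder for LMstate(not A)

       probe = torch.nn.Linear(d, 1)  # f: hidden state -> truth probability (after sigmoid)
       opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

       for step in range(1000):
           p_pos = torch.sigmoid(probe(h_pos))                      # f(LMstate(A))
           p_neg = torch.sigmoid(probe(h_neg))                      # f(LMstate(not A))
           consistency = ((p_pos - (1 - p_neg)) ** 2).mean()        # push f(A) toward 1 - f(not A)
           confidence = (torch.minimum(p_pos, p_neg) ** 2).mean()   # discourage the degenerate f = 0.5
           loss = consistency + confidence
           opt.zero_grad()
           loss.backward()
           opt.step()

       # After fitting, (p_pos + (1 - p_neg)) / 2 is the probe's truth estimate for each
       # statement, obtained without any truth labels during training.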
       
 (DIR) Post #AVJliNZjXSJcZ2dfSC by piccolbo@toot.community
       2023-05-04T17:09:59Z
       
       0 likes, 0 repeats
       
       @simon The reading list alone is gold.
       
 (DIR) Post #AW8EIqjHUo4oqK0v3Y by piccolbo@toot.community
       2023-05-29T01:25:57Z
       
       0 likes, 0 repeats
       
       @simon Contrast this with the fact that ChatGPT passes the Uniform Bar Exam. What does that mean? (1) you can pass the bar exam and still be a terrible lawyer, or (2) tests only work in the context they are designed for (people with a law degree trying to become lawyers), or (3) something else?
       
 (DIR) Post #AW9XcivMJkoKiLoU5I by piccolbo@toot.community
       2023-05-29T16:37:20Z
       
       0 likes, 0 repeats
       
       @simon Presumably it focuses more on the law (any lawyers out there who've taken it?). So why doesn't ChatGPT make up laws with the same ease as it makes up precedent?