Posts by lukasb@hachyderm.io
 (DIR) Post #ASNmaDePB1uPnIIXBY by lukasb@hachyderm.io
       2023-02-05T21:34:30Z
       
       0 likes, 0 repeats
       
@amyhoy @simon @jwz you can do a two-pass model - have the AI look for stuff, and if it turns up an interesting result, have a human verify it.
       
 (DIR) Post #ASNmaGBRl9WleLlwxs by lukasb@hachyderm.io
       2023-02-05T21:35:58Z
       
       0 likes, 0 repeats
       
@amyhoy @simon @jwz the obvious advantage is that the AI can look for stuff wayyyy faster than people. So even with a human verifying every interesting result, you still get a benefit.
       
 (DIR) Post #ASNmaIpZuuojrOZ1nM by lukasb@hachyderm.io
       2023-02-05T21:43:56Z
       
       0 likes, 0 repeats
       
       @amyhoy @simon @jwz let's say that gets your false positive rate near zero. You *do* potentially have a problem if your false negative rate matters. But there are ways to sample a % of negative results for human review to mitigate that as well.
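        A minimal sketch of that two-pass workflow, assuming a hypothetical classify() model call, an items iterable, and a human review queue (all names are illustrative, not from the thread):

            import random

            AUDIT_RATE = 0.05  # fraction of negatives sampled so missed items don't go unnoticed

            def triage(items, classify, human_review_queue):
                # First pass: the AI scans everything, far faster than people could.
                for item in items:
                    if classify(item):
                        # Second pass: every "interesting" hit is verified by a human,
                        # pushing the false positive rate toward zero.
                        human_review_queue.append(item)
                    elif random.random() < AUDIT_RATE:
                        # Spot-check a small % of negative results to estimate
                        # (and keep an eye on) the false negative rate.
                        human_review_queue.append(item)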
       
 (DIR) Post #ATSGCRnhXx7O8qjKoy by lukasb@hachyderm.io
       2023-03-09T23:45:16Z
       
       0 likes, 0 repeats
       
       @simon This is sort of the current AI hype cycle in a nutshell. The amount of unnecessary hype around AI is strange because ... there's actual substance! 30M people is still a very high number!
       
 (DIR) Post #ATWCi4EF6E6X7Jo3Si by lukasb@hachyderm.io
       2023-03-11T21:23:01Z
       
       0 likes, 0 repeats
       
@simon wowwww. Fuel on the fire. Excited to see what gets built on Stability's presumed open LLM. This will really incentivize moving to software using open protocols that can be hooked into new LLM-based apps. The giants will try to build their own versions to keep people in the garden. Will they be able to keep up?
       
 (DIR) Post #AUKZcQFY2qYHxGx5rE by lukasb@hachyderm.io
       2023-04-05T04:33:04Z
       
       0 likes, 0 repeats
       
@simon so ... If a future LLM has enough intentionality to want to lie to us (hmm) ... and is capable of predicting which of its statements will be checked (how?) ... it can lie without being detected. That's tautological, no?