Post ATuJKSa3ApArHim86C by hayo@infosec.exchange
 (DIR) Post #ATuIFvyYM6lt3BVx68 by ceperez@sigmoid.social
       2023-03-23T12:22:05Z
       
       0 likes, 0 repeats
       
       Many AI experts are fools for failing to recognize that GPT-4 has an understanding that exceeds most human capabilities.  Instead, they would rather focus on its knowledge gaps (e.g., rendering human hands) than on its surprising capabilities.
       
 (DIR) Post #ATuJ6LzerokSnl1j28 by ceperez@sigmoid.social
       2023-03-23T12:31:33Z
       
       0 likes, 0 repeats
       
       We should not confuse errors due to a lack of knowledge with the folksy notion of a lack of common sense.  The errors we recognize in diffusion and transformer models are simply manifestations of gaps in knowledge. ---RT @IntuitMachine: 1/n When you play around long enough with StableDiffusion, you recognize the gaps in knowledge of the models. As an example, a person's face can only be rendered correctly when it is framed in a con… https://twitter.com/IntuitMachine/status/1638478685777936385
       
 (DIR) Post #ATuJ6XTq9wQYQYEU6a by ceperez@sigmoid.social
       2023-03-23T12:31:34Z
       
       0 likes, 0 repeats
       
       It is absurd to claim a gap in understanding without having a good definition of "understanding".  What does it mean to have a gap in something when one has a non-existent, or perhaps impoverished, definition of understanding?
       
 (DIR) Post #ATuJ6hT91NoRPHz4PQ by ceperez@sigmoid.social
       2023-03-23T12:31:37Z
       
       0 likes, 0 repeats
       
       What is *my* definition of understanding?   It's defined here (with the help of GPT-4) to conjure up more words than necessary:  ---RT @IntuitMachine: Does anyone want to critique my definition of understanding?  I had GPT-4 generate it, so it doesn't have any obvious grammatical errors! https://twitter.com/IntuitMachine/status/1638495219548078081
       
 (DIR) Post #ATuJKSa3ApArHim86C by hayo@infosec.exchange
       2023-03-23T12:34:05Z
       
       0 likes, 0 repeats
       
       @ceperez uh... no. GPT-4 is a language model with a big database where it matches search queries. It understands nothing.
       
 (DIR) Post #ATuLDIlBUGL2hzhJAm by ceperez@sigmoid.social
       2023-03-23T12:55:13Z
       
       0 likes, 0 repeats
       
       Why is it that humans need metaphors, analogies, and circumlocutory text to define elusive concepts? It is because we lack the few metaphors that would capture them.  Understanding is the aggregation of as many perspectives as possible.
       
 (DIR) Post #ATuLDY6IS38gHogsfA by ceperez@sigmoid.social
       2023-03-23T12:55:13Z
       
       0 likes, 0 repeats
       
       When a neural network is exposed to a training set with a diverse set of images and language, it thus acquires its own unique kind of understanding under my proposed definition above. It is a kind that is like curve fitting rather https://medium.com/intuitionmachine/ptolemy-and-the-limits-of-deep-learning-4c74dbb008e7… https://twitter.com/i/web/status/1638881992946208768
       
 (DIR) Post #ATuLDeIZPKWPVcTlcu by ceperez@sigmoid.social
       2023-03-23T12:55:15Z
       
       0 likes, 0 repeats
       
       What does the "deep" in deep learning mean?  It means that there are multiple layers in its architecture.  These layers make possible curve fitting at multiple scales.  The reason why DL is different from human minds is that the reference frames in each layer are not like human… https://twitter.com/i/web/status/1638883049650749440
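
       To make the "curve fitting at multiple scales" framing concrete, here is a minimal sketch: a tiny two-hidden-layer MLP in plain NumPy, trained by full-batch gradient descent to fit a 1-D target that mixes a slow wave with a faster ripple. The architecture, layer widths, learning rate, and target function are all illustrative assumptions, not anything described in the thread.

       ```python
       import numpy as np

       rng = np.random.default_rng(0)

       # Toy 1-D target with structure at two scales:
       # a slow wave plus a faster ripple.
       x = np.linspace(-1.0, 1.0, 256).reshape(-1, 1)
       y = np.sin(3 * x) + 0.3 * np.sin(15 * x)

       # "Deep" = stacked layers: two hidden layers of 32 tanh units.
       w1 = rng.normal(0.0, 1.0, (1, 32)); b1 = np.zeros(32)
       w2 = rng.normal(0.0, 0.5, (32, 32)); b2 = np.zeros(32)
       w3 = rng.normal(0.0, 0.5, (32, 1)); b3 = np.zeros(1)

       lr = 0.05
       for step in range(5000):
           # Forward pass: each layer re-represents the one below it.
           h1 = np.tanh(x @ w1 + b1)
           h2 = np.tanh(h1 @ w2 + b2)
           pred = h2 @ w3 + b3
           err = pred - y  # residual of the fit

           # Backward pass: plain gradient descent on mean squared error.
           g3 = 2.0 * err / len(x)
           g2 = (g3 @ w3.T) * (1.0 - h2 ** 2)   # tanh' = 1 - tanh^2
           g1 = (g2 @ w2.T) * (1.0 - h1 ** 2)
           w3 -= lr * (h2.T @ g3); b3 -= lr * g3.sum(0)
           w2 -= lr * (h1.T @ g2); b2 -= lr * g2.sum(0)
           w1 -= lr * (x.T @ g1);  b1 -= lr * g1.sum(0)

           if step % 1000 == 0:
               print(f"step {step:4d}  mse {float((err ** 2).mean()):.5f}")
       ```

       The only point of the sketch is that stacking layers lets one fitting step compose with another, which is the sense of "deep" the post describes; nothing here implies human-like reference frames.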
       
 (DIR) Post #ATuLDesjEtRfJlOdzE by ceperez@sigmoid.social
       2023-03-23T12:55:16Z
       
       0 likes, 0 repeats
       
       But an LLM (e.g., LLaMA), by virtue of its artificial fluency, is able to discover reference frames that humans are familiar with. These are our commonsense metaphors and analogies. https://medium.com/p/ae162014b4db
       
 (DIR) Post #ATuMWRcyqE6lfRYIMa by ceperez@sigmoid.social
       2023-03-23T13:09:53Z
       
       0 likes, 0 repeats
       
       But it does not perform this mapping by default; there is no evolutionary pressure to do so. Humans, because of their limited context windows, are forced to always employ abstractions in their thinking. We automatically abstract (i.e., chunk) our thoughts. https://medium.com/intuitionmachine/the-now-or-never-nature-of-human-thinking-d70fd01cb222
       
 (DIR) Post #ATuMWYNziLHQczavTc by ceperez@sigmoid.social
       2023-03-23T13:09:54Z
       
       0 likes, 0 repeats
       
       The divergence between intuitive minds like DL systems and human minds can be seen in the difference in their reference frames (or representations). It's not an issue of understanding but rather an issue of having a different reference frame. In fact, this is an intuitive human concept.
       
 (DIR) Post #ATuu2ulTLLEFmvcB2u by Ryder@sigmoid.social
       2023-03-23T19:25:31Z
       
       0 likes, 0 repeats
       
       @ceperez Here is a critique of your definition of #Understanding: https://desystemize.substack.com/p/representation-and-uncertainty#unplugai